The importance of security and compliance to applications and data environments cannot be overstated. A GenAI platform deployment differs from a typical infrastructure as a service (IaaS) implementation in terms of who holds the keys and who can read the data. Research by TechTarget's Enterprise Strategy Group has found that even cloud-first organizations are deploying some workloads on premises, rather than in the cloud, due to concerns related to data governance and sovereignty (cited by 42% of respondents) and security (cited by 34%). These organizations understand the real potential for data leakage associated with GenAI.
Who can access corporate data, and for what purposes, should be a primary concern. Data sitting behind a firewall is well protected, bolstering the value of a private cloud architecture. However, fully public models such as consumer-grade ChatGPT can put even those protected architectures at risk. ChatGPT falls into the category of "shadow AI" because it operates outside the control of the IT department, and every employee using it without authorization represents a potential point of security vulnerability.
Consumer-grade, web-based ChatGPT retains user inputs and can use them in its learning process, including data that might be proprietary. Consider the recent Samsung breach: Samsung coders inadvertently fed ChatGPT confidential source code and other sensitive information as a way to check their work. That information is now permanently on the internet. Samsung's breach highlights why it is so important to have systems in place ensuring that users do not use cloud tools unwisely.
Somewhere in the middle, but still carrying secure-access concerns, are the enterprise-cloud software solutions that many hyperscalers have deployed. The bottom line is that an organization will never have more control over who has access to its data than when it maintains ownership of both the data and the service.
Security must be paramount in all IT architecture decisions. That's why it is essential to work with infrastructure partners that provide the highest levels of application and data security. The innovations Dell Technologies offers to improve the security capabilities of its PowerEdge server technology are very pertinent.
The Dell Technologies approach to security is intrinsic -- built in, not bolted on, and integrated throughout each PowerEdge server's lifecycle, from design and manufacturing through use and end of life. This approach of "bringing AI to your data" for the long term is what sets Dell apart. Its zero-trust architecture presumes the network is always vulnerable to compromise, so it safeguards access to critical data and resources by:
- Assuming every user and device represents a potential threat.
- Applying the principle of least privilege to restrict the access rights of users and their devices.
- Applying multifactor authentication models and authorization rights that are time-based, scope-based and role-based.
- Deploying models on premises and leveraging retrieval-augmented generation.
- Leveraging a secure supply chain.
- Authenticating and authorizing each communication with the infrastructure.
- Avoiding inherently trusting any entity. Verification is required to access all assets.
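The access-control principles in the list above -- least privilege plus time-based, scope-based and role-based authorization gated on multifactor authentication -- can be illustrated with a minimal policy check. This is a hypothetical sketch, not Dell code; all names and structures here are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of zero-trust authorization: deny by default,
# grant access only when an explicit rule covers the role, the scope
# and the current time, and only after MFA has been verified.

@dataclass(frozen=True)
class Grant:
    role: str            # role the grant applies to (role-based)
    scope: str           # resource scope, e.g. "inference:prod" (scope-based)
    not_before: datetime # validity window (time-based)
    not_after: datetime

@dataclass
class Principal:
    user: str
    roles: set
    mfa_verified: bool   # multifactor authentication completed

def authorize(principal: Principal, scope: str, grants: list, now=None) -> bool:
    """Verify every request; trust nothing by default (least privilege)."""
    now = now or datetime.now(timezone.utc)
    if not principal.mfa_verified:  # MFA is a hard requirement
        return False
    return any(
        g.role in principal.roles
        and g.scope == scope
        and g.not_before <= now <= g.not_after
        for g in grants
    )
```

The key design choice mirrored here is that `authorize` returns `False` unless some explicit grant matches on all three dimensions at once -- there is no implicit trust path.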
Dell PowerEdge servers are also designed so that unauthorized BIOS and firmware code can't run. If the server can't verify that the BIOS is legitimate, it shuts down and adds a notification to its log so that IT can initiate a BIOS recovery process.
All new PowerEdge servers use an immutable, silicon-based root of trust to attest to the integrity of the code running. If the root of trust is validated successfully, the rest of the BIOS modules are validated by using a chain of trust procedure until control is handed off to the OS or hypervisor.
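The chain-of-trust procedure described above can be sketched in a few lines: an immutable root-of-trust digest validates the first BIOS module, and each validated module carries the expected digest of the next stage until control reaches the OS or hypervisor. This is an illustrative model of the concept, not Dell firmware; the function names and tuple layout are assumptions.

```python
import hashlib

def digest(blob: bytes) -> str:
    # Measure a firmware module (SHA-256 used here for illustration)
    return hashlib.sha256(blob).hexdigest()

def verify_boot_chain(root_of_trust_digest: str, stages: list) -> bool:
    """Validate boot stages in order.

    stages: list of (module_bytes, expected_next_digest) tuples.
    The immutable silicon root of trust supplies the first expected
    digest; each verified stage hands trust to the next. On real
    hardware a mismatch would halt the platform and log the event
    so IT can start BIOS recovery.
    """
    expected = root_of_trust_digest
    for module, next_expected in stages:
        if digest(module) != expected:  # tampered or corrupted module
            return False                # halt: do not execute untrusted code
        expected = next_expected        # extend the chain of trust
    return True
```

Note that trust only ever flows forward: a stage is executed (conceptually) only after the previously trusted stage has verified its measurement.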
In the era of ChatGPT, all of these well-thought-out safeguards are essential.