The importance of securing sensitive information can't be overstated. Fortunately, most organizations have solid cybersecurity programs in place, using diverse sets of controls to achieve defense-in-depth security. Companies have hardened corporate servers, secured enterprise endpoints and deployed monitoring tools. They've also taken steps to eradicate sensitive information from endpoint devices and consolidate the most crucial corporate data in enterprise systems and the databases that support them.
But just how strong are the protective controls around those central stores?
It is safer, without a doubt, to store information in a secure, centralized database rather than on laptops and file shares. But if enterprises overlook database security, they may unwittingly be building a treasure trove of sensitive information that is ripe for the taking.
Many enterprise databases are prone to vulnerabilities caused by configuration errors or poor implementation. From poor password hygiene to SQL injection attacks to cross-site scripting vulnerabilities, these database-related threats must be addressed.
But mitigating these threats successfully is only part of what's required to effectively secure your database. Every enterprise should implement database security best practices and then review them annually to ensure their crown jewels -- the confidential data housed in their databases -- are protected. Databases are a tempting target for any attacker; protecting them is critical.
Here are 10 database security best practices -- plus a bonus best practice -- enterprises should put into place to improve the security of their databases and the data stored within them.
1. Enforce the principle of least privilege
Limiting users' access to the smallest set of privileges necessary to carry out their job functions is usually the first piece of advice offered in any cybersecurity book. How well that theoretical goal maps to the reality of enterprise databases is another matter. To assess it, enterprises should ask themselves several questions, including the following:
- Do developers have full access to production databases?
- Do system engineers have access to the databases on the systems under their care?
- Do database administrators have full access to all databases or just those that fall within their areas of responsibility?
Limiting access as much as possible is an important safeguard against insider threats.
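One way to make least privilege concrete is to define each role's minimal privilege set up front and generate grants from it, rather than approving permissions ad hoc. The sketch below illustrates the idea in Python; the role names, table names and privilege map are illustrative assumptions, not drawn from any specific schema or platform.

```python
# Sketch: generate least-privilege GRANT statements per role.
# Roles, schema names and the privilege map are illustrative assumptions.

ROLE_PRIVILEGES = {
    "app_reader": ["SELECT"],                      # read-only application role
    "app_writer": ["SELECT", "INSERT", "UPDATE"],  # no DELETE, no DDL
    "dba_orders": ["ALL PRIVILEGES"],              # scoped DBA: one schema only
}

def grants_for(role: str, schema: str) -> list[str]:
    """Return GRANT statements giving `role` only what it needs on `schema`."""
    return [f"GRANT {p} ON {schema}.* TO '{role}';" for p in ROLE_PRIVILEGES[role]]

for stmt in grants_for("app_reader", "orders"):
    print(stmt)
```

Keeping the privilege map in version control also gives reviewers a single artifact to check during the access reviews described next.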
2. Conduct regular access reviews
It is no secret that privilege creep affects virtually every technology organization. As technical and nontechnical staff move among job roles and project assignments, they accumulate new and different permissions each time their responsibilities change. New permissions are quickly sought and approved because the lack of permissions gets in the way of work. Old and unnecessary permissions may persist for months or years because they do not cause operational issues for the employee's everyday work. They do, however, expand the scope of an attack should that user become a malicious insider or fall victim to an account compromise.
Conduct regular, scheduled reviews of database access to ensure the principle of least privilege still applies. Pay particular attention to users who have direct access to the database, as this access may bypass application-level security controls.
3. Monitor database activity
Database auditing used to be a tremendous performance burden, causing organizations to sacrifice logging for the sake of operational efficiency. Fortunately, those days are behind us, since all the major database platform players now offer scalable monitoring and logging capabilities.
Enable database monitoring on systems and ensure the logs are sent to a secure repository. Also, implement behavior-based monitoring rules that watch for unusual user activity, particularly among users with administrative access.
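A behavior-based rule can be as simple as comparing activity against an established baseline. The sketch below flags administrative queries outside business hours; the event fields and the baseline window are illustrative assumptions, and a real deployment would feed events from the database's audit log.

```python
# Sketch: a minimal behavior-based rule -- flag administrative activity
# outside business hours. Event fields are illustrative assumptions.

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time

def suspicious(events):
    """Flag admin-role activity recorded outside the business-hours baseline."""
    return [
        e for e in events
        if e["role"] == "admin" and e["hour"] not in BUSINESS_HOURS
    ]

events = [
    {"user": "dba1", "role": "admin", "hour": 14, "query": "SELECT 1"},
    {"user": "dba1", "role": "admin", "hour": 3,  "query": "SELECT * FROM users"},
]
print([e["hour"] for e in suspicious(events)])  # the 3 a.m. query stands out
```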
4. Encrypt sensitive data
Encryption is a database security best practice no-brainer. Use strong encryption to protect databases in three ways:
- Require all database connections to use TLS so data is encrypted in transit.
- Encrypt the disks containing data stores to protect against their loss or theft.
- Encrypt particularly sensitive fields or columns so that even users with direct read access see only ciphertext.
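Encryption in transit, for instance, can be sketched with Python's standard `ssl` module: build a TLS context that verifies the server certificate and rejects legacy protocol versions, then hand it to the database driver. The driver call shown in the comment is hypothetical -- the parameter name varies by driver.

```python
import ssl

# Sketch: enforce encryption in transit with a verifying TLS context.
# The context would be passed to the database driver's connect call
# (hypothetical example below -- parameter names vary by driver).

ctx = ssl.create_default_context()            # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions

# e.g. (hypothetical driver call): connect(dsn, ssl_context=ctx)
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```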
5. Know what data you have and where it is stored
This best practice might seem obvious, but many organizations don't fully understand how sensitive and critical their data is, yet they assume they are protecting databases appropriately.
Have an understanding -- ideally a detailed inventory -- of what data is stored where and in what formats. This is particularly important at scale, in complex environments and in legacy situations. The amount of sensitive and critical data will govern the controls implemented, from both a risk management and a regulatory compliance perspective. Situations will arise where security teams need to triage control deployments or choose which systems to patch first -- both decisions are also informed by data criticality and sensitivity.
Adjust and tailor the approach based on what the database is used for. Databases evolve over time, and new applications tying into existing data stores can add new data and manipulate old data in new ways, so it's important to build out a data map or inventory and keep it up to date.
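Even a minimal inventory supports the triage decisions described above. The sketch below ranks data stores by sensitivity so the most critical ones are patched and reviewed first; the table names, classification labels and ranking scale are illustrative assumptions.

```python
# Sketch: a minimal data inventory used to triage patching by sensitivity.
# Store names, classifications and the ranking scale are illustrative.

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

inventory = [
    {"store": "orders_db.customers", "classification": "restricted"},
    {"store": "orders_db.catalog",   "classification": "public"},
    {"store": "hr_db.payroll",       "classification": "confidential"},
]

def patch_order(inv):
    """Most sensitive stores first -- patch and review these before the rest."""
    return sorted(inv, key=lambda r: SENSITIVITY[r["classification"]], reverse=True)

for row in patch_order(inventory):
    print(row["store"], row["classification"])
```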
6. Test backup, export and restoration
Organizations that aren't backing up data should start doing so right away. Even organizations that perform regular backups must periodically test backup and restoration capabilities to ensure they perform as expected. Confirm that intended data can be restored from backup to prevent surprise situations where teams discover their backup measures yielded nothing but terabytes of unrecoverable garbage data.
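A restoration test always comes down to the same check: restore from the backup and confirm the expected rows are actually there. The sketch below illustrates the pattern with SQLite in memory -- the table and data are illustrative, and a production test would restore into an isolated environment instead.

```python
import sqlite3

# Sketch: take a backup and verify the restored copy contains the expected
# rows, using in-memory SQLite for illustration only.

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
src.execute("INSERT INTO accounts VALUES (1, 'alice')")
src.commit()

backup = sqlite3.connect(":memory:")   # stands in for the backup target
src.backup(backup)                     # take the backup

# Restoration test: confirm the intended data is really recoverable.
rows = backup.execute("SELECT id, name FROM accounts").fetchall()
print(rows)
```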
Cloud backup and restoration pose a different challenge, particularly for managed cloud database services, where the mechanism for exporting data varies from service to service. To avoid lock-in, it's advantageous to be able to -- and know how to -- export data from the managed service. Finally, ensure databases can be imported, should they ever be migrated.
7. Harden, patch, configure
Follow good hardening and patching hygiene. This can play out in one of a few ways, depending on whether the database is managed by the organization or a service provider:
- Organizations managing their own database nodes. For on-premises workloads or workloads inside an IaaS ecosystem, ensure the OS is hardened and patched and that the database service itself is hardened and patched. Use a security technical implementation guide or other community configuration benchmark to do this. Apply the same rigor to the database instances you maintain -- for example, make sure they are segmented and follow good practices, such as not using production data for testing purposes.
- Organizations using managed database services. The goal is the same, but the process is different. Patching and OS-level hardening are the managed service's responsibility. Ensure that the services are optimally configured from a security perspective, and enable any security posture optimization features available. This requires that customers understand those features and how to enable and configure them.
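Whichever model applies, configuration drift from the chosen benchmark is worth catching automatically. A minimal sketch: compare the running configuration against a hardening baseline and report deviations. The setting names mimic PostgreSQL parameters, but the baseline itself is an illustrative assumption, not a published benchmark.

```python
# Sketch: compare a database configuration against a hardening baseline.
# Setting names mimic PostgreSQL parameters; the baseline is illustrative.

BASELINE = {
    "ssl": "on",
    "log_connections": "on",
    "password_encryption": "scram-sha-256",
}

def findings(config):
    """Return settings that deviate from the hardening baseline."""
    return {
        key: {"expected": want, "actual": config.get(key)}
        for key, want in BASELINE.items()
        if config.get(key) != want
    }

running = {"ssl": "off", "log_connections": "on", "password_encryption": "scram-sha-256"}
print(findings(running))  # only the deviating setting is reported
```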
8. Threat model intersection points
Regardless of whether the databases in scope support products being built or business applications being maintained, application security is important. One of the best ways to ensure applications are appropriately hardened is to threat model them.
Threat model the intersection points between data stores and application components to affirm that the right protections are in place. Do this for applications being built, whether for in-house use or for customers. If it's commercial-off-the-shelf software or another tool that relies on a database back end, analyze how that database can be misused and put the appropriate safeguards in place.
9. Conduct technical testing
Validate that controls are operating effectively and are sufficient. Conduct vulnerability scans to ensure hosts are hardened and patched, and conduct penetration testing to make certain the measures in place provide the value expected.
Note: Organizations in a multi-tenant context or using any other managed service should first determine if this type of technical testing is allowed under their terms of service.
10. Secure machine accounts
Ensure nonhuman accounts are appropriately secured. We've already covered the importance of least privilege and regular review of access rights. Nonhuman accounts -- including daemons, service accounts/principals and application user accounts -- sometimes get lost in the shuffle.
Protect machine accounts as you would user accounts. Know where machine accounts are located and where they're used, and ensure they are monitored, tracked and audited.
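An audit of machine accounts can start with a couple of simple checks: interactive login should be disabled, and credentials should be rotated on schedule. The account records and thresholds below are illustrative assumptions.

```python
# Sketch: audit nonhuman accounts for risky attributes -- interactive login
# enabled, or credentials never rotated. Records and thresholds are
# illustrative assumptions.

def audit(accounts, max_key_age_days=365):
    """Return (account, issue) pairs worth investigating."""
    issues = []
    for a in accounts:
        if a["interactive_login"]:
            issues.append((a["name"], "interactive login enabled"))
        if a["key_age_days"] > max_key_age_days:
            issues.append((a["name"], "credential overdue for rotation"))
    return issues

accounts = [
    {"name": "svc-etl",    "interactive_login": False, "key_age_days": 200},
    {"name": "svc-backup", "interactive_login": True,  "key_age_days": 700},
]
print(audit(accounts))
```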
Bonus best practice: Documentation
While not a database security best practice exactly, documentation is an integral component of any security strategy. In a database context, this means two types of documentation:
- Documentation implicit in the above suggestions, including artifacts such as threat models and supporting data flow diagrams, access control matrices to support least privilege enforcement, data inventory and control implementations -- for example, which columns are encrypted and using what mechanisms.
- Documentation of operational processes and procedures, as well as decision artifacts such as risk analyses.
Documentation is a bit like insurance. In the short term you might get away with not having it, but eventually you'll get burned. Documentation is required in audits and it's also mandated under certain regulatory frameworks. What's more, having precise documentation that is regularly updated increases an organization's overall maturity and the resilience of its processes.