
Amazon not to blame for S3 cloud storage lapses

The Amazon Simple Storage Service (S3) has been giving big businesses, and their customers, big trouble.

It was reported earlier this summer that high-profile companies left data in their S3 buckets exposed because the access control lists (ACLs) were configured to allow access from any user on the internet. The companies caught up in this misconfiguration problem included telco giant Verizon, U.S. government contractor Booz Allen Hamilton, World Wrestling Entertainment and Dow Jones.

And the cloud storage security problem has not gone away.

It was reported in October that corporate consulting firm Accenture left at least four S3 cloud buckets similarly unsecured, according to a blog post from security firm UpGuard. Accenture works with 94 of the Fortune Global 100 and more than three-quarters of the Fortune Global 500.

But experts say Amazon is not to blame for the cloud storage misconfiguration issue. Human error is to blame: Administrators who create the S3 buckets fail to reconfigure them back into a restricted-access mode, essentially leaving the barn door open to unwanted entry.

“AWS is aware of the security issue, but are not likely to mitigate it since it is caused by user misconfiguration,” according to Detectify, a company that simulates automated hacker attacks.

AWS states on its blog that “by default, all Amazon S3 resources – buckets, objects and related sub-resources…are private. Only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.”
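To illustrate the kind of grant at issue, here is a minimal sketch, assuming the boto3 SDK and a hypothetical bucket name, of how an administrator might check whether a bucket's ACL exposes it to any user on the internet:

```python
import boto3

# Hypothetical bucket name, for illustration only.
BUCKET = "example-project-bucket"

# An ACL grant to this group URI means "any user on the internet" --
# the misconfiguration described in the incidents above.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket=BUCKET)

# Collect any grants made to the all-users group.
public_grants = [
    grant for grant in acl["Grants"]
    if grant["Grantee"].get("Type") == "Group"
    and grant["Grantee"].get("URI") == ALL_USERS
]

if public_grants:
    print(f"{BUCKET} grants public access: "
          f"{[g['Permission'] for g in public_grants]}")
else:
    print(f"{BUCKET} has no grants to all users (the private default).")
```

A bucket created and left with its defaults would report no such grants; the exposed buckets in these reports had read (and in some cases write) permissions granted to the all-users group.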

Amazon says it has enhanced S3 storage security. In August, the company added managed rules to help secure S3 buckets to AWS Config, which provides a timeline of configuration changes. The s3-bucket-public-write-prohibited rule automatically identifies buckets that allow global write access: if an S3 bucket policy or bucket ACL allows public write access, the bucket is considered noncompliant. The second rule, s3-bucket-public-read-prohibited, does the same for buckets that allow global read access.

“This will flag content that is publicly available, including web sites and documentation,” according to a blog post written by Jeff Barr, chief evangelist for AWS. “This rule also checks all buckets in the account.”
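As a rough sketch of how those two managed rules can be enabled programmatically, assuming boto3 and an AWS Config recorder already running in the account, something like the following registers both checks (the rule names here are illustrative; the AWS-managed source identifiers are the documented ones):

```python
import boto3

config = boto3.client("config")

# AWS-managed rule identifiers for the two checks described above.
managed_rules = {
    "s3-bucket-public-read-prohibited": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
    "s3-bucket-public-write-prohibited": "S3_BUCKET_PUBLIC_WRITE_PROHIBITED",
}

for name, identifier in managed_rules.items():
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            # Evaluate S3 buckets only.
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
            # Owner "AWS" selects the managed rule rather than a custom Lambda.
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )
    print(f"Enabled managed rule {name}")
```

Once active, any bucket whose policy or ACL allows public read or write access shows up as noncompliant in the AWS Config dashboard.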

George Crump, president of IT analyst firm Storage Switzerland, said the buckets are secure when created. Trouble occurs only when IT does not follow through on locking down the buckets.

“It’s not (Amazon’s) fault,” Crump said. “They just provide the infrastructure. They provide the material for you to create a solution. It’s not their fault. It’s the job of IT to lock it down. It would be different if Amazon had not put the tools in place, but that clearly is not the case.”

Many of these unsecured S3 buckets are created for application development, then left open after the project ends and the team no longer needs the compute and storage resources it pulled from AWS.

“Typically, these buckets are secured when they are created so that only authenticated users can access them,” Crump wrote in a blog post. “But sometimes, especially in the initial development of an application, these buckets are left unsecured to make it easier for multiple users to test them.

“The problem is when the application moves into production, no one remembers to secure the bucket, leaving it open for anyone to gain access,” he said.
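One way to follow through on that lock-down step, sketched here with boto3 against a hypothetical development bucket, is simply to reset the bucket ACL to the private default when the application moves into production:

```python
import boto3

# Hypothetical name of a bucket that was opened up during development.
BUCKET = "example-dev-test-bucket"

s3 = boto3.client("s3")

# Reset the bucket ACL to the private default: only the owning account
# keeps access, and any grants to the all-users group are removed.
s3.put_bucket_acl(Bucket=BUCKET, ACL="private")

print(f"{BUCKET} ACL reset to private")
```

Objects that were uploaded with their own public ACLs would still need the same treatment individually, which is why the AWS Config rules above are useful as an ongoing check rather than a one-time cleanup.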
