Re:Invent recap: Week 1

AWS' annual December deluge is in full swing. Check out this recap of all that happened in week one of re:Invent as you get up to speed on the biggest conference in cloud.

It was a different kind of kickoff to re:Invent this year.

There were no big cheering crowds -- though there was an applause track during the keynotes. No chicken wing eating contests or convention hall conversations amid the chaos. And nary a mention of the prior week's multi-service outage, which was one of the biggest in years and raised questions about behind-the-scenes interdependencies.

Still, there were many familiar elements at the start of what will be a three-week conference. More than two dozen services or feature updates were announced, and AWS CEO Andy Jassy put a finer point on a strategy that's evolved around changing customer demands. And while sessions were virtual, they were accessible in a way they never were before.

There's a lot to take in, so each week, we'll recap the most important developments to help you navigate all that happened. Check back at the end of week two and week three as well, but first let's recap week one.

Keynotes clarify evolving strategies

Choice was a big theme throughout the first week -- choice in how and where you build with AWS.

There was a time when AWS looked primed to go all-in on serverless as the future of app development. AWS Lambda was generating all sorts of buzz and Amazon kept expanding the event-driven compute platform's functionality. Soon after, containers and then Kubernetes caught fire. AWS was the first cloud to deliver a managed container service, but competitors like Google made it a driving force behind their cloud strategies.

But apparently AWS isn't interested in debating serverless vs. containers -- or VMs for that matter. All three will be a major part of the platform for the foreseeable future, as there were major upgrades across the board. Amazon also added a new service called AWS Proton, intended to help ops teams stitch together control of containers and serverless functions.

As Jeff Barr, AWS chief evangelist, described it to me, AWS is prescriptive about how it wants customers to use its platform, not in which compute approach they choose.

"It [sounds] really nice to paint this idealized picture of the world that looks a particular way and invite every developer and say, 'This is how you need to be to run on the cloud,'" Barr said. "Realistically, developers want to take what they're doing and apply that as much as possible."

In Jassy's keynote, he framed the company's view of hybrid cloud as one that uses distributed nodes, with AWS at the center of it all. He took an expansive view of what constitutes the non-public cloud portion of hybrid cloud. It's not just private data centers, but locations with different needs, such as restaurants or agricultural fields.

"The way that customers want to consume hybrid offering is with the same APIs, the same control plane, same tools and same hardware they're used to using in AWS regions," he said. "Effectively, they want us to distribute AWS to these various edge nodes."

New versions of Outposts and AWS' container management services expand existing efforts around hybrid, which already included networking services, portable Snow devices and its partnership with VMware. And while making AWS available anywhere -- even beyond its public cloud -- was a key part of that pitch, it's unclear how well some of these new services will interact with other public clouds.

In a keynote later in the week targeted at AWS partners, executives laid out changes intended to get SaaS vendors and other third parties to build on and integrate with AWS. There were also signs of an increasingly softer touch regarding the occasionally awkward relationship with partners, which are sometimes direct competitors with AWS or with its parent company. Twice, AWS executives name-checked Snowflake, a data warehousing competitor that relies heavily on Amazon Elastic Kubernetes Service (Amazon EKS).

A recap of the service rollouts

Jassy discussed 27 new services or features in his keynote, not to mention the partner updates on Thursday or the ones that went unmentioned on the main stage, so there's a lot to cover.

Chris Kanaracus, news director for SearchCloudComputing, wrote a nice roundup of some of the most important services, with insights from industry experts. But for the purposes of this piece, we'll provide a brief, high-level overview of the vast majority of the updates. Follow the links below for our stories that go deeper on some of these additions.

And keep in mind, this is just a first look, and many of these are in preview. IT pros probably got excited -- or confused -- when they heard about some of the services from re:Invent 2020 week one. But it will take time to evaluate the cost, functionality and integrations with these services to determine if any of them meets their needs. Ultimately, your team might find some of the more mundane updates are the ones that make the biggest difference in 2021.

Compute, containers and edge

  • Amazon Elastic Container Service (Amazon ECS) Anywhere and Amazon EKS Anywhere push the container management platforms beyond Amazon's public cloud in new ways. It was previously possible to use them on an AWS Outpost, but those require specific form factors and come with a high price tag.
  • In conjunction with those releases, AWS added its own open source Kubernetes distribution, Amazon EKS Distro.
  • AWS Outposts now has two additional form factors (1U and 2U) that greatly scale down the size of the appliance needed to run AWS on premises. This is geared toward settings outside the data center, such as retail, hospitals and manufacturing locations.
  • AWS Lambda functions can now support triple the memory, at 10 GB. They can also be packaged as container images. Even as AWS continues to support the three main types of compute, this is yet another example of those approaches melding.
  • AWS also cut the billing granularity for Lambda execution time from 100 milliseconds down to 1 millisecond. This doesn't change the rates for provisioned concurrency or requests, but it could mean big savings for applications that run large volumes of short-duration functions.
  • A range of Amazon EC2 instance types were added, including options for dense hard disk drive storage (D3 and D3en), for GPUs (G4dn), for low-latency networking (M5zn), for compute optimization (C6gn) and for higher Elastic Block Store (EBS) bandwidth and IOPS (R5b).
  • In 2021, AWS will integrate Habana Labs' Gaudi accelerators into new instances for deep learning, with an expected 40% improvement in price performance. There's also another custom chip on the way called AWS Trainium, which is designed specifically for machine learning training.
  • The EC2 Mac instances might be the most intriguing of the new instance types, bringing macOS instances to AWS for the first time through the use of Apple Mac mini computers. This could fill a big void for teams that build apps for Apple products, but so far it appears to be best suited for test and development.
  • AWS Local Zones, which provide select Amazon cloud services in densely populated areas for ultra-low latency applications, were added in preview in Boston, Houston and Miami. Another dozen will be added across the U.S. in 2021. In addition, an AWS Wavelength Zone, which relies on 5G networks, was added in Las Vegas.
  • Amazon Elastic Container Registry (ECR) Public is an extension of the existing ECR service, and developers can use this managed registry to share container software globally. There's also an Amazon ECR Public Gallery for people to browse container images and other details.
  • Expect to see more machine learning news in week two, but this week SageMaker got tailored capabilities for CI/CD, feature repositories and data prep.
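
The Lambda duration-billing change above is easiest to see with some quick arithmetic. This is an illustrative sketch, not AWS code; the ~$0.0000166667-per-GB-second rate was AWS' published x86 compute rate at the time, and the workload numbers are assumptions chosen for the example.

```python
import math

# Assumed Lambda compute rate at the time of the announcement (check
# current AWS pricing); memory, duration and volume below are illustrative.
RATE_PER_GB_SECOND = 0.0000166667

def duration_cost(duration_ms, memory_mb, invocations, granularity_ms):
    """Monthly duration cost when execution time is billed in units of `granularity_ms`."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    gb_seconds = (billed_ms / 1000) * (memory_mb / 1024) * invocations
    return gb_seconds * RATE_PER_GB_SECOND

# A short 30 ms function at 128 MB, invoked 10 million times per month:
old = duration_cost(30, 128, 10_000_000, granularity_ms=100)  # rounded up to 100 ms
new = duration_cost(30, 128, 10_000_000, granularity_ms=1)    # billed per millisecond
print(f"per-100ms billing: ${old:.2f}  per-ms billing: ${new:.2f}")
```

For this hypothetical workload, the duration charge drops roughly 70%, which is why the change matters most for high-volume, short-duration functions; a function that already runs for hundreds of milliseconds sees little difference.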

Databases and analytics

  • Babelfish for Amazon Aurora excited a lot of attendees. It translates commands from Microsoft SQL Server to Amazon Aurora PostgreSQL. This dramatically reduces the amount of application rewrites you'd need to do to convert a legacy SQL Server database to AWS' native database offering.
  • AWS Glue Elastic Views creates materialized views of data combined from disparate sources through SQL queries. This comes shortly after the addition of AWS Glue DataBrew, a no-code version of the standard AWS Glue data preparation tool.
  • Amazon Aurora Serverless v2 shortens the time it takes to scale capacity from five to 50 seconds down to fractions of a second. It also incorporates some of the standard Amazon Aurora features not found in the current version of Aurora Serverless.
  • Advanced Query Accelerator, or AQUA, for Amazon Redshift is a distributed, high-speed cache for faster queries.
  • Amazon QuickSight Q adds natural language capabilities to the BI service.

Management and monitoring

  • Amazon DevOps Guru, not to be confused with Amazon CodeGuru, is a machine-learning-backed service that watches for abnormal operational patterns on behalf of ops teams. It identifies potential issues and their causes, and sometimes offers prescriptive recommendations to address the problem.
  • AWS Proton essentially creates a PaaS-like environment for developers while ensuring the ops team maintains control of all the infrastructure. Engineers build templates that are used across environments to automate the full lifecycle for containerized and serverless applications. Proton confused a lot of people when it was announced, but it may end up being one of the most important services from the show, if properly executed.
  • Amazon CloudWatch Lambda Insights adds monitoring capabilities geared specifically toward functions. It creates health dashboards to track for unwanted changes that result from new function versions.

Storage

  • io2, the latest generation of Amazon EBS volumes and one geared mostly toward large relational databases, saw a fourfold increase in throughput, IOPS and capacity. AWS also added tiered IOPS pricing for customers that provision volumes with more than 32,000 IOPS per month.
  • For general purpose SSD volumes in EBS, gp3 separates performance provisioning from storage capacity, which AWS says will save customers up to 20% compared to current gp2 volumes. AWS Compute Optimizer also added support for EBS volume recommendations.
  • Amazon S3 is now strongly consistent. This isn't groundbreaking, in the sense that other providers' object storage services were built to do this from the start. But given the maturity and scale of S3, many industry observers were amazed as they contemplated the effort that must have gone into this changeover.
  • Other S3 updates include two-way replication of object metadata between buckets. S3 buckets can also be replicated across multiple destinations, within a region or to another one.
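
The gp3 decoupling of performance from capacity is clearest in a quick cost sketch. The rates below are AWS' published us-east-1 prices around launch, used here as assumptions for illustration; check current pricing before drawing conclusions for your own volumes.

```python
# Assumed us-east-1 EBS rates around the gp3 launch (illustrative, not current):
GP2_PER_GB = 0.10            # $/GB-month; IOPS are tied to size at 3 per GB
GP3_PER_GB = 0.08            # $/GB-month; 3,000 IOPS and 125 MB/s included
GP3_PER_EXTRA_IOPS = 0.005   # $/provisioned IOPS-month above the 3,000 baseline

def gp2_cost(size_gb, iops_needed):
    # On gp2, IOPS scale with capacity, so a small volume that needs high
    # IOPS must be over-provisioned on storage to get them.
    size_for_iops = iops_needed / 3
    return max(size_gb, size_for_iops) * GP2_PER_GB

def gp3_cost(size_gb, iops_needed):
    # On gp3, capacity and performance are billed independently.
    extra_iops = max(0, iops_needed - 3000)
    return size_gb * GP3_PER_GB + extra_iops * GP3_PER_EXTRA_IOPS

# 200 GB of data, but the workload needs 3,000 IOPS:
print(f"gp2: ${gp2_cost(200, 3000):.2f}/mo")  # forced up to 1,000 GB for the IOPS
print(f"gp3: ${gp3_cost(200, 3000):.2f}/mo")  # pays for only the 200 GB it stores
```

At equal sizes the per-GB discount delivers the up-to-20% savings AWS cites; the bigger win in this hypothetical is for small, IOPS-hungry volumes that no longer have to buy capacity they don't need.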

Business applications and industrial tools

  • Amazon Connect added tools to authenticate callers, to gain a more comprehensive view of them and to track customer service tasks across multiple applications.
  • Amazon Lookout is part of a larger push to do more in industrial settings. Lookout for Vision handles real-time analysis of product defects, while Lookout for Equipment is intended for predictive maintenance. It inputs data from industrial machinery and uses machine learning to predict and detect abnormalities.
  • AWS Panorama is a machine learning device and SDK that works with a customer's on-premises camera systems to monitor and inspect workplace conditions. The service is billed as a way to address manufacturing quality and employee safety, but it will be interesting to see how this service is implemented by users, given some of the controversies around other image analysis services.
  • Amazon Monitron takes many of these same industrial-setting capabilities around detection and preventive maintenance and packages them in an end-to-end predictive machine learning system.
  • AWS SaaS Boost is an open source reference environment for independent software vendors to build apps on top of AWS.
