
Amazon's innovative generative AI moves at re:Invent 2023

Analyst Mike Leone takes a comprehensive look at Amazon's generative AI announcements at this year's re:Invent, including the Q assistant, Bedrock enhancements and chip updates.

AWS re:Invent marked the culmination of 2023, wrapping up a year dominated by generative AI. As usual, AWS took over the Las Vegas Strip with events, sessions and keynotes across most of the hotel properties.

The event featured over 250 enterprise tech announcements, spanning data management, analytics, storage, app development, governance and security. Notably, the pervasive theme was generative AI.

Overall, AWS' focus on simplicity, flexibility and stakeholder empowerment was palpable. Rather than overwhelming attendees with a bewildering array of 100 new services, AWS' approach this year was simpler and more calculated. It was a more mature AWS than we've seen in the past.

Although the event catered to builders, it spotlighted the offloading of technical complexities. AWS positioned itself to start taking on some of the heavy lifting, enabling users to accomplish more with data faster and more reliably than before. Additionally, several announcements focused on performance improvements and optimizations designed to deliver cost savings while enabling customers to store, access, process and analyze data faster and more confidently.

It's remarkable that AWS, which some have perceived as behind competitors from a generative AI standpoint, has not only caught up but is introducing innovative differentiators that put the competition on notice. Collectively, I was impressed with all the announcements, integrations and the ambitious roadmap at this year's re:Invent. It was a week of drinking from a firehose, and there are no signs of a slowdown in 2024.

Continuing the path of delivering a zero-ETL future

AWS recognizes that customers need to break down data silos, and it's a big reason why the company is pushing for a future that eliminates complex, manual data integration efforts by moving away from the extract, transform and load (ETL) model.

Last year, AWS made its first zero-ETL announcements. This year brought four new zero-ETL integrations: three that feed Amazon Redshift -- from Aurora PostgreSQL, Amazon RDS for MySQL and DynamoDB -- and a fourth from DynamoDB to Amazon OpenSearch Service.
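To give a sense of how little plumbing these integrations require, here is a minimal boto3 sketch, assuming the RDS CreateIntegration API that backs the Aurora-to-Redshift setup; the ARNs and names are placeholders, not values from the announcement:

```python
import boto3

# Placeholder ARNs; substitute your own Aurora cluster and Redshift target.
SOURCE_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:orders-aurora"
TARGET_ARN = "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/analytics"

rds = boto3.client("rds")

# One API call stands in for an entire ETL pipeline: once the integration
# is active, AWS replicates new source data into Redshift automatically.
integration = rds.create_integration(
    IntegrationName="orders-to-warehouse",
    SourceArn=SOURCE_ARN,
    TargetArn=TARGET_ARN,
)
print(integration["Status"])  # typically "creating" at first
```

Compare that with building and monitoring a conventional extract, transform and load pipeline, and the appeal is obvious.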

AWS appears committed to continued investment in making it easier to connect data and removing the challenges that come with building and maintaining data pipelines. By taking on that burden, AWS enables customers to reclaim time for gaining new insights and driving innovation.

Simplifying data cataloging and discovery with generative AI and Amazon DataZone

In last year's re:Invent recap, I highlighted Amazon DataZone as the biggest announcement of the event. This year, AWS announced a preview of new automation capabilities within DataZone that use generative AI to decrease the time needed to catalog data. This is great news for customers who struggle with not only cataloging but also providing comprehensive business context for their data.

The forthcoming capability will use a large language model (LLM) in Amazon Bedrock to generate detailed descriptions of data and the underlying schema, then suggest ways to analyze it or tie it into a specific use case. While customers have latched on to DataZone for its ability to deliver trusted data, these new capabilities are poised to gain rapid traction, addressing customers' consistent struggle to deliver contextual data across business functions.
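To illustrate the pattern -- not DataZone's internals, which AWS has not detailed -- here is a sketch of generating a business description from a schema with the Bedrock runtime API; the schema, prompt and model choice are my own assumptions:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical schema; in DataZone this metadata would come from the
# cataloged asset itself.
schema = "orders(order_id int, customer_id int, total_cents int, placed_at timestamp)"

# Claude v2's prompt format at the time of re:Invent 2023.
prompt = (
    "\n\nHuman: Write a one-paragraph business description of a table "
    f"with this schema, and suggest two analyses it could support:\n{schema}"
    "\n\nAssistant:"
)

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
)
print(json.loads(response["body"].read())["completion"])
```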

Delivering a comprehensive generative AI stack fueled by Amazon Bedrock

AWS dedicated considerable time to discussing the company's approach to supporting generative AI with a comprehensive infrastructure stack. The foundation is an infrastructure layer for model training and inference, followed by a middle layer that enables businesses to incorporate generative AI into their processes and products in a flexible and reliable way. The top layer focuses on building modern apps powered by the underlying layers.

Many of the AWS services found across the three-layer stack have already been released, so to me this announcement was more about framing. In a competitive landscape focused on delivering a complete platform, the three-layer messaging puts AWS in the same conversation as its rivals -- positioning that was necessary given the company's slow rollout of generative AI services at the beginning of the year.

New chips and capabilities accelerate building, training and deploying generative AI

AWS announced two next-generation chips: Graviton4 and Trainium2. Graviton4 is a fourth-generation Arm-based processor, which AWS claims is 30% faster than Graviton3 with 50% more cores. Trainium2 is a second-generation training chip that AWS says is four times faster than its predecessor and, when deployed at scale in EC2 UltraClusters, can deliver up to 65 exaflops of aggregate compute. While these figures might seem abstract, AWS predicts the advancements will reduce training time for the largest LLMs from months to weeks.

Of course, there were also new announcements from the expanded Nvidia partnership, including plans to offer Nvidia GH200 Grace Hopper superchips on EC2 and to bring Nvidia DGX Cloud to AWS. When it comes to hardware-specific announcements, it's really about accessibility of hardware and giving customers the flexibility to choose what works best for their particular use cases.

This flexibility extends to enabling customers interested in building and customizing their own models to create unique experiences with new capabilities in Amazon SageMaker. These capabilities are focused on helping customers accelerate model building, training and deployment. Specifically, AWS says that SageMaker HyperPod helps reduce the time required to train foundation models by up to 40% and makes it far more efficient to manage massive, distributed clusters. AWS also introduced inference optimizations aimed at reducing model deployment costs by an average of 50%, achieved by running multiple models on the same underlying instances.
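AWS has not published the mechanics of the new optimizations, but the underlying idea resembles SageMaker's existing multi-model endpoints, where many model artifacts share one set of instances. A minimal sketch of that established pattern, with hypothetical names:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# With a multi-model endpoint, many model artifacts live under one S3
# prefix and share the same instances; TargetModel selects one per call.
for model in ("churn-v1.tar.gz", "churn-v2.tar.gz"):
    response = runtime.invoke_endpoint(
        EndpointName="shared-endpoint",  # hypothetical endpoint name
        TargetModel=model,               # artifact loaded on demand
        ContentType="text/csv",
        Body=b"42,0.5,1\n",
    )
    print(model, response["Body"].read())
```

The cost savings come from exactly this kind of sharing: instead of one idle instance per model, several models amortize the same hardware.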

New Amazon Bedrock integrations and capabilities

AWS announced a number of new Bedrock capabilities -- far too many to cover in this article. However, a few highlights stood out to me.

The first was Guardrails for Bedrock, which aims to help organizations leverage generative AI in a responsible way. This feature is the first step in empowering customers to easily configure content filtering or redact certain information from responses. Another standout is Knowledge Bases for Bedrock, a feature that securely connects models in Bedrock to company data using retrieval-augmented generation.
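As a sketch of what Knowledge Bases looks like in practice -- assuming the RetrieveAndGenerate API in the bedrock-agent-runtime client, with placeholder IDs -- a single call retrieves relevant passages from company data and grounds the model's answer in them:

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

# Retrieval-augmented generation in one call: the service fetches
# relevant passages from the knowledge base and passes them to the
# model as context before generating an answer.
response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy for enterprise customers?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID12345",  # hypothetical knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)
print(response["output"]["text"])
```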

Finally, Agents for Bedrock aims to accelerate generative AI development by using the reasoning capabilities of foundation models to break down user-requested tasks into multiple steps. This provides customers with improved control over the orchestration of responses as well as better visibility into the chain-of-thought reasoning process.
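Bedrock handles this orchestration as a managed service, but the pattern itself is easy to sketch. The following toy example -- entirely my own illustration, with a stubbed model call -- shows the plan-execute-trace loop that makes the reasoning visible:

```python
# Conceptual sketch of the agent pattern, not Bedrock's internals: a
# model plans steps, an orchestrator runs each step with a tool and
# records the result, so the chain of reasoning can be inspected.

def fake_llm(prompt: str) -> str:
    """Stand-in for a foundation model call; replace with Bedrock."""
    return "1. look up order\n2. issue refund" if "Break" in prompt else "done"

def run_agent(llm, tools: dict, task: str) -> list[str]:
    steps = llm(f"Break this task into numbered steps:\n{task}").splitlines()
    trace = []
    for step in steps:
        # Route each step to a tool; a real agent lets the model choose.
        result = tools["default"](step)
        trace.append(f"{step} -> {result}")
    return trace

tools = {"default": lambda step: f"executed: {step}"}
for line in run_agent(fake_llm, tools, "Refund order #1234 for a customer"):
    print(line)
```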

A new generative AI assistant: Amazon Q

With AI assistants all the rage, Amazon announced Amazon Q, a new generative AI assistant set to launch across the entire AWS ecosystem of services. I would argue this was the biggest announcement of the event because it places AWS alongside the company's top competitors, who also have intelligent, natural language assistants. Amazon Q is slated to eventually integrate with the full AWS portfolio, including Redshift, AWS Glue, Amazon Connect and Amazon QuickSight, the latter of which has featured a version of Q dedicated to business intelligence tasks for some time.

Customers can engage in conversations with Amazon Q to solve problems, generate content and take actions. The assistant can understand company information, code and systems, and can personalize interactions based on the user's role and permissions. It is also built with a strong emphasis on security and privacy. Amazon Q is poised to eventually provide guidance to all teams and stakeholders interacting across the AWS ecosystem of services, including developers, line-of-business leaders and data specialists.

Mike Leone is a principal analyst at TechTarget's Enterprise Strategy Group, where he covers data, analytics and AI.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.
