
AWS intros GenAI app studio, updates Amazon Q and Bedrock

The cloud provider introduced new features that help enterprises create applications faster, while still applying responsible practices. It also unveiled new educational resources.

NEW YORK -- AWS on Wednesday introduced a generative AI app studio and multiple updates to its various AI services, including Amazon Q, Bedrock and SageMaker.

During a keynote presentation at the cloud giant's annual AWS Cloud Summit New York, the vendor unveiled the AWS App Studio, a new service that uses natural language to create enterprise-grade applications.

Customers simply describe the type of application they want to build and the data sources that should be integrated with it, and within minutes, App Studio builds the application, according to AWS.

While App Studio seems like just another low-code tool, it differs from other platforms in that it uses GenAI to build the application first, said Jason Andersen, an analyst at Moor Insights & Strategy. Most other low-code platforms do the reverse: They build the application and then bring in GenAI, he said.

For customers that already have a lot of data running on AWS, App Studio can help them put that data to use.

"A lot of people ... have data collection processes, but then they can't really leverage it or do much with it," Andersen said. "This is really targeted at folks who are storing that data in AWS. They can use it, put it to work."

App Studio helps citizen developers as well because it removes the barrier of only technical developers having access to sophisticated tools, said Tim Crawford, an analyst at AVOA.

"For an enterprise, that's important because I don't necessarily want to go back to only pro developers having access," Crawford said. "App Studio lowers that hurdle so more people have access to the tools."

New capabilities in Amazon Q

AWS also introduced new customization and app-building capabilities for the GenAI-based Amazon Q line of personal assistants and new customization capabilities for the Amazon Bedrock GenAI platform.

The new Bedrock and Amazon Q capabilities come after AWS released the GenAI assistants in April, and as the GenAI race among AWS, Google and Microsoft shows no signs of slowing down nearly two years after the release of OpenAI's ChatGPT.

Among the three tech giants, AWS appears the most enterprise-focused, according to industry analysts.

"The perception I get when I look at what they're doing with Q and what they're doing with Bedrock is they're very focused on the enterprise, and they're very focused on B2B," Andersen said. "Some of those other companies are trying to satisfy a lot more diverse customer bases. They're delivering more breadth but not depth."

AWS also added more features to Q aimed at enabling enterprises to get things done quickly.

For example, the new Amazon Q Developer customization capability for inline code recommendations helps developers use their internal code base and provides more relevant and useful recommendations for inline coding, according to AWS. This capability is now generally available.

The cloud provider also revealed that Amazon Q is now available in SageMaker, AWS' service for building, training and deploying machine learning and AI models.

SageMaker users can now get existing capabilities of Q, such as code generation, debugging and advice, as well as new capabilities, such as assistance with data preparation, model training and deployment.

"What they're doing now [with Q] is they're now adding features to really kind of enhance the productivity and speed things up further than when it was originally released this year," Andersen said.

Amazon Q Apps, a feature of Amazon Q Business, is also now generally available. It lets employees create apps quickly based on company data.

"What I see in my research is organizations want to release code on an hourly basis," Futurum Group analyst Paul Nashawaty said. But only a few organizations can meet this target, he noted.

"By adding Amazon Q Apps with the Amazon Q business, this allows for that acceleration of those applications to be created," Nashawaty said.

Matt Wood, vice president, artificial intelligence products at AWS, introduces new capabilities in Amazon Bedrock at the AWS Summit.

New capabilities in Bedrock

Beyond the Q additions, AWS added new features to Bedrock, among them the ability to fine-tune independent GenAI vendor Anthropic's Claude 3 Haiku foundation model, now available in preview.

AWS also introduced two new capabilities that help customers build AI agents in Amazon Bedrock. The first enables agents to remember information from different sessions. For example, when a customer is booking a flight, agents can now remember past travel plans and the food options customers choose.

Agents can also interpret code to solve difficult data-driven use cases. For instance, users can ask agents to examine historical real estate prices to find investment opportunities.

AWS also introduced more Guardrails for Amazon Bedrock. The new guardrails are aimed at enterprises concerned about hallucinations, instances in which large language models (LLMs) generate incorrect information.

The cloud provider now provides contextual grounding checks in Guardrails for Amazon Bedrock to find hallucinations in model responses.
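The idea behind a grounding check can be illustrated with a toy sketch: flag response sentences whose content has little overlap with the source context. Bedrock's actual Guardrails use model-based scoring rather than this simple word-overlap heuristic; the function names and threshold below are hypothetical illustrations, not AWS APIs.

```python
# Toy illustration of a contextual grounding check: flag response
# sentences poorly supported by the source context. Bedrock's real
# Guardrails use model-based scoring; this overlap heuristic is only
# a conceptual stand-in.

def grounding_score(sentence, context):
    """Fraction of the sentence's words that also appear in the context."""
    words = set(sentence.lower().split())
    ctx = set(context.lower().split())
    return len(words & ctx) / len(words) if words else 0.0

def flag_ungrounded(response_sentences, context, threshold=0.5):
    """Return response sentences scoring below the grounding threshold."""
    return [s for s in response_sentences
            if grounding_score(s, context) < threshold]

context = "the refund policy allows returns within 30 days of purchase"
sentences = [
    "returns are allowed within 30 days of purchase",
    "refunds are doubled on holidays",
]
flagged = flag_ungrounded(sentences, context)
```

Here the second sentence shares almost no words with the source context, so it is flagged as a likely hallucination, while the first passes.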

Adding grounding to Bedrock is especially important because AWS lets customers use even more data sources for retrieval-augmented generation (RAG), Nashawaty said.

RAG is an AI framework that combines LLMs with information retrieval systems, grounding model responses in external data to make them more accurate.
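The RAG pattern described above can be sketched in a few lines: retrieve the most relevant documents for a query, then assemble a prompt that grounds the model in them. This is a minimal illustration with a naive keyword-overlap retriever; the function names and documents are hypothetical, and production systems (including Bedrock's knowledge bases) use vector embeddings instead.

```python
# Minimal sketch of the RAG pattern: retrieve relevant context first,
# then build a prompt grounded in that context. The retriever here is
# a naive keyword-overlap ranker, used only for illustration.

def retrieve(query, documents, top_k=1):
    """Rank documents by keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble a grounded prompt: retrieved passages plus the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Flight AA100 departs New York at 9 a.m. daily.",
    "The hotel offers free breakfast to all guests.",
]
prompt = build_prompt("What time does flight AA100 depart?", docs)
```

The assembled prompt would then be sent to an LLM, which answers from the retrieved passage rather than from its training data alone.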

"You can go much more further with models plus RAG," said Matt Wood, vice president for artificial intelligence products at AWS, during a keynote presentation. "[RAG] is a key component to building capable cohesive applications and with guardrails, we make it safer to be able to operate these applications."

Thus, with contextual grounding, customers can be more confident the system is drawing on the right data source to generate a correct response.

"That's going to help put the kind of rules in place to produce the kind of results that organizations are looking for," Nashawaty said. "You want to make sure you're not having different results from different data sources. You want to make sure it's harmonized."

An AWS customer on Bedrock

Tactacam, a company that makes cellular cameras for outdoor photography, has been an AWS customer for three years.

Most recently, the company used Claude Haiku in Amazon Bedrock to introduce a new product called Feathersnap.

For Tactacam, the addition of new capabilities to Bedrock, such as fine-tuning and Guardrails, means a better user experience, director of product ownership Lindsay Bowers said in an interview with TechTarget Editorial.

"We're able to do a lot with our language learning model and what inputs and system prompts that we put in to get it out the best result. But anything that's already inherently built into the model is just going to help us get a better result all the way around," Bowers said. "It's great that those things are being improved and that the model is just going to get better for all users within Bedrock."

Model-independent services

AWS also introduced an independent API for Guardrails.


This allows AWS customers to apply safeguards to all their GenAI applications across different foundation models, regardless of which infrastructure they use.

With this service, AWS shows how it is abstracting AI services away from specific models, Andersen said.

In regard to safety and responsible AI, instead of building the guardrail feature into individual models, AWS is giving its customers the choice to use the API across different models and cloud infrastructures. Customers are therefore not tied to one model when thinking about responsible AI, Andersen added.

"They're able to allow you to share different enterprise capabilities and enterprise needs across multiple models, which is really going to help out customers as they go on their AI journey," he said.

A challenge and new education opportunities

One hurdle AWS must overcome is that the vendor is still figuring out how to tell its story, Andersen continued.

Many enterprises still operate under the impression that choosing AWS means becoming exclusively an AWS shop, which is not true.

"There are perceptions in the market that they may have to sensitize people to and say, 'Hey, look, we're not just about you putting all your workloads on us. We're about helping you do different things and maybe even helping you do different things across multiple clouds,'" Andersen said. "It's mostly an education issue, not necessarily a technology or product issue at this point."

While AWS might still be working on educating its customer base about its GenAI strategy, the cloud provider has accomplished its goal of upskilling 29 million workers globally in cloud skills with free training courses, according to the vendor.

AWS uses Amazon Bedrock to provide GenAI-powered simulations and hands-on training through a new immersive learning system named SimuLearn, which helps people translate business problems into technical solutions.

The tech giant also introduced a new generative AI role in its game-based learning option, Cloud Quest, to help technical workers gain practical GenAI experience, such as fine-tuning LLMs, deploying foundation models and building chatbots.

"A lot of people learn best by just trying things and trying it again and failure is a great teacher," said Kevin Kelly, director of AWS Cloud Institute. "Giving the kind of environments where learners can explore a game-based environment like Cloud Quest and solve problems, is a way that a lot of people like learning."

"They also like building things and seeing what they've built and testing what they built," he added.

AWS also introduced training for Amazon PartyRock on the AWS Educate platform, where customers can learn to build their own AI apps.


New partnerships

AWS also revealed several new partnerships.

AWS and Deloitte now have a multi-year strategic collaboration to help clients scale generative AI. The vendors will establish an Innovation Lab that helps clients explore future technologies like artificial general intelligence, quantum machine learning and autonomous robotics.

AWS also allied with Scale AI as its first model customization and evaluation partner. Through the collaboration, enterprises and public sector organizations can use Scale GenAI Platform and Scale Donovan to evaluate their generative AI applications, as well as further customize, configure and fine-tune models.

The partnership addresses some of the bottlenecks that keep organizations from scaling their GenAI workloads and projects, Scale AI field CTO Vijay Karunamurthy told TechTarget Editorial.

Among these holdups is difficulty in evaluating applications and understanding the benchmarks that must be met before a model or application can be released.

"The biggest bottleneck ... is the ability to trust the models, outputs and trust that the model is not getting facts wrong," Karunamurthy said.

Evaluation is key for responsible deployment of GenAI models, he added.

"Without being able to evaluate how the models do security and bias on all of these different issues, you're really just flying blind when it comes to what's a responsible use of AI," Karunamurthy said.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
