AWS provides a variety of powerful tools to incorporate AI into applications. But how does a developer get started?
To learn the basics, experiment with Amazon AI-based projects. IT teams can apply AI throughout many facets of a business, such as machine learning models for data analytics, image and video categorization and consumer-facing Alexa skills.
Amazon suggests developers try out these three AI projects -- which range from easy to a bit more difficult -- to get their feet wet.
Build a social graph from pictures
Amazon Rekognition makes it easy to identify people and objects in photos. The service includes APIs to tag and search for objects in images and video streams, identify faces, recognize sentiment and demographic attributes, flag unsafe content and detect text, all of which open up many more IT possibilities.
To experiment with the service, use the Rekognition API to build an app that identifies celebrities in photos and automatically generates a social graph that captures the connections between them. This project teaches developers the Rekognition API itself, along with Jupyter Notebooks and graph databases, a combination that can support more complex application builds.
Jupyter Notebook is an open source tool that manages the information associated with a data science project, including live code, graphs, charts, equations and data sources. Notebooks make it easier to share insights, processes and stories in an interactive format: other data scientists, developers and users can follow the reasoning behind a project, extend it for other use cases and customize it for further learning.
Graph databases can store complex data about how things relate to each other. Social networks typically use this type of data structure. In this example project with Rekognition, developers can use these graphs to see the relationships between celebrities in pictures. From there, they can build more complex social graphs to customize recommendation engines or identify relationships between customers for future Amazon AI-based projects.
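Before reaching for a full graph database, the core co-occurrence idea can be prototyped in plain Python: every pair of celebrities who appear in the same photo gets an edge. The names here are hypothetical stand-ins for Rekognition output.

```python
# Sketch: build a co-occurrence "social graph" from per-photo name lists.
from collections import defaultdict
from itertools import combinations

def build_social_graph(photos):
    """Map each name to the set of names it co-appears with in any photo."""
    graph = defaultdict(set)
    for names in photos:
        # Add an undirected edge for every pair seen together in one photo.
        for a, b in combinations(sorted(set(names)), 2):
            graph[a].add(b)
            graph[b].add(a)
    return dict(graph)

# Two hypothetical photos with one shared appearance:
graph = build_social_graph([["Alice", "Bob"], ["Bob", "Carol"]])
```

In a production build, the same edges would be written to a graph database such as Amazon Neptune, where traversal queries make recommendation and relationship lookups efficient.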
Generate high-quality voice prompts
Many web- and phone-based applications use Amazon Polly to automatically generate high-quality voice prompts from text data. This feature supports a number of potential Amazon AI-based projects. For example, a developer could customize app interaction patterns for hands-free use cases, or they could build an app to deliver notifications via phone calls that might otherwise get lost in a sea of mobile notifications.
One simple project lets developers explore Polly as they create an app to deliver real-time home monitoring alerts. In this example, the Twilio API executes the application logic: it places a phone call to homeowners, plays speech generated with Polly and processes touch-tone responses.
Developers can later attempt more sophisticated implementations with voice-driven interaction via Amazon Lex, which we explore in the example below.
Play with intelligent devices
Developers can create highly customized voice and sound recognition for internet of things (IoT) hardware with Amazon Lex and a Raspberry Pi single-board computer. Developers can use this combination to build a customized home automation controller or a voice-controlled robot.
Developers can pair these basic AI capabilities with AWS DeepLens, a deep learning-enabled video camera, to combine voice interfaces with image recognition components. Developers who want to attempt higher-level Amazon AI-based projects can download DeepLens extensions to work with deep learning models, classify images and execute AWS Lambda functions that run on Greengrass Core within the DeepLens hardware.
There are also a variety of techniques to work with Message Queuing Telemetry Transport (MQTT) messages that deliver IoT data and display the output on a laptop. Developers can enhance the output with Amazon Elasticsearch Service to build a dashboard, Amazon Kinesis to detect anomalies and Rekognition to identify faces and objects. Configure the Rekognition engine to identify events in the video and automatically trigger an SMS notification.