
What is generative AI? Everything you need to know

Generative AI is a type of artificial intelligence technology that can produce various types of content including text, imagery, audio and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics and videos in a matter of seconds.

The technology, it should be noted, is not brand-new. Generative AI was introduced in the 1960s in chatbots. But it was not until 2014, with the introduction of generative adversarial networks, or GANs -- a type of machine learning algorithm -- that generative AI could create convincingly authentic images, videos and audio of real people.

On the one hand, this newfound capability has opened up opportunities that include better movie dubbing and rich educational content. On the other, it has raised concerns about deep fakes -- digitally forged images or videos -- and harmful cybersecurity attacks on businesses, including nefarious requests that realistically mimic an employee's boss.

Two additional recent advances, discussed in more detail below, have played a critical part in generative AI going mainstream: transformers and the breakthrough language models they enabled. Transformers are a type of machine learning architecture that made it possible for researchers to train ever-larger models without having to label all of the data in advance. New models could thus be trained on billions of pages of text, resulting in answers with more depth. In addition, transformers rely on a mechanism called attention that enables models to track the connections between words across pages, chapters and books rather than just in individual sentences. And not just words: Transformers can also use their ability to track connections to analyze code, proteins, chemicals and DNA.

The rapid advances in so-called large language models -- i.e., models with billions or even trillions of parameters -- have opened a new era in which generative AI models can write engaging text, paint photorealistic images and even create somewhat entertaining sitcoms on the fly. Moreover, innovations in multimodal AI enable teams to generate content across multiple types of media, including text, graphics and video. This is the basis for tools like Dall-E that automatically create images from a text description or generate text captions from images.

These breakthroughs notwithstanding, we are still in the early days of using generative AI to create readable text and photorealistic, stylized graphics. Early implementations have had issues with accuracy and bias, and have been prone to hallucinations and spitting back weird answers. Still, progress thus far indicates that the inherent capabilities of this type of AI could fundamentally change business. Going forward, this technology could help write code, design new drugs, develop products, redesign business processes and transform supply chains.

Timeline of the history of generative AI technologies

How does generative AI work?

Generative AI starts with a prompt that could be in the form of a text, an image, a video, a design, musical notes, or any input that the AI system can process. Various AI algorithms then return new content in response to the prompt. Content can include essays, solutions to problems, or realistic fakes created from pictures or audio of a person.

Early versions of generative AI required submitting data via an API or a complicated process. Developers had to familiarize themselves with special tools and write applications using languages such as Python.

Now, pioneers in generative AI are developing better user experiences that let you describe a request in plain language. After an initial response, you can also customize the results with feedback about the style, tone and other elements you want the generated content to reflect.
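The prompt-and-refine loop described above can be sketched in a few lines. The `generate` function here is a deliberately simple stand-in stub for a real model call (such as a hosted LLM API); its name, signature and behavior are illustrative assumptions, not any vendor's actual interface.

```python
# A minimal sketch of the prompt-and-refine workflow. The `generate`
# function below is a stub standing in for a real generative model call;
# everything about it is an illustrative assumption.

def generate(prompt: str, style: str = "neutral") -> str:
    """Stub that pretends to be a generative model."""
    return f"[{style}] response to: {prompt}"

# Initial request in plain language.
draft = generate("Write a product description for a solar lantern")

# Customize the result with feedback about tone and style.
final = generate("Write a product description for a solar lantern",
                 style="playful")

print(draft)
print(final)
```

The key point is the iteration: the first response is rarely final, and follow-up instructions about style or tone steer subsequent generations.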

Generative AI models

Generative AI models combine various AI algorithms to represent and process content. For example, to generate text, various natural language processing techniques transform raw characters (e.g., letters, punctuation and words) into sentences, parts of speech, entities and actions, which are represented as vectors using multiple encoding techniques. Similarly, images are transformed into various visual elements, also expressed as vectors. One caution is that these techniques can also encode the biases, racism, deception and puffery contained in the training data.
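To make the idea of "raw characters transformed into vectors" concrete, here is a toy bag-of-words encoding in plain Python. This is the simplest possible stand-in: real systems use learned embeddings with hundreds or thousands of dimensions, and nothing here reflects any production NLP pipeline.

```python
# Toy illustration of encoding text as vectors. Real models use learned
# embeddings; this bag-of-words scheme is a simplified stand-in.

def build_vocab(sentences):
    # Map each unique lowercase word to a fixed vector index.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def to_vector(sentence, vocab):
    # Count how often each vocabulary word appears in the sentence.
    vec = [0] * len(vocab)
    for word in sentence.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

sentences = ["The cat sat", "The dog sat"]
vocab = build_vocab(sentences)
vectors = [to_vector(s, vocab) for s in sentences]
```

Once text is in vector form like this, a model can compare, combine and transform it mathematically, which is what all the techniques below build on.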

Once developers settle on a way to represent the world, they apply a particular neural network to generate new content in response to a query or prompt. Techniques such as GANs and variational autoencoders (VAEs) -- neural networks with an encoder and decoder -- are suitable for generating realistic human faces, synthetic data for AI training or even facsimiles of particular humans.

Recent progress in transformer-based models such as Google's Bidirectional Encoder Representations from Transformers (BERT), OpenAI's GPT and DeepMind's AlphaFold has also resulted in neural networks that can not only encode language, images and proteins but also generate new content.

How neural networks are transforming generative AI

Researchers have been creating AI and other tools for programmatically generating content since the early days of AI. The earliest approaches, known as rules-based systems and later as "expert systems," used explicitly crafted rules for generating responses or data sets.

Neural networks, which form the basis of many of today's AI and machine learning applications, flipped the problem around. Designed to mimic how the human brain works, neural networks "learn" the rules by finding patterns in existing data sets. Developed in the 1950s and 1960s, the first neural networks were limited by a lack of computational power and small data sets. It was not until the advent of big data in the mid-2000s and improvements in computer hardware that neural networks became practical for generating content.

The field accelerated when researchers found a way to get neural networks to run in parallel across the graphics processing units (GPUs) that were being used in the computer gaming industry to render video games. New machine learning techniques developed in the past decade, including the aforementioned generative adversarial networks and transformers, have set the stage for the recent remarkable advances in AI-generated content.

What are Dall-E, ChatGPT and Bard?

ChatGPT, Dall-E and Bard are popular generative AI interfaces.


Dall-E is an example of a multimodal AI application that identifies connections across multiple media, such as vision, text and audio. In this case, it connects the meaning of words to visual elements. It was built using OpenAI's GPT implementation in 2021. Dall-E 2, a second, more capable version, was released in 2022. It enables users to generate imagery in multiple styles driven by user prompts.


ChatGPT is an AI-powered chatbot application built on OpenAI's GPT-3.5 implementation. OpenAI provides a way to interact with and refine text responses via a chat interface with interactive feedback. Earlier versions of GPT were accessible only via an API. GPT-4 was released on March 14, 2023. ChatGPT incorporates the history of its conversation with a user into its results, simulating a real conversation. After the incredible popularity of the new GPT interface, Microsoft announced a significant new investment in OpenAI and integrated a version of GPT into its Bing search engine.


Google was another early leader in pioneering transformer AI techniques for processing language, proteins and other types of content. It open sourced some of these models for researchers but never released a public interface for them. Microsoft's decision to implement GPT into Bing drove Google to rush a public-facing chatbot, Google Bard, to market. Google suffered a significant loss in stock price following Bard's rushed debut after the language model incorrectly claimed the Webb telescope took the first pictures of a planet outside our solar system.

Meanwhile, Microsoft's Bing chatbot and ChatGPT have also stumbled in their early outings due to inaccurate results and erratic behavior.

Dall-E language processing tool in action.
Dall-E can automatically create a painting from text prompts.

What are use cases for generative AI?

Generative AI can be applied in various use cases to generate virtually any kind of content. The technology is becoming more accessible to users of all kinds thanks to emerging innovations like GPT that can be tuned for different applications. Some of the use cases for generative AI include the following:

  • Implementing chatbots for customer service and technical support.
  • Deploying deep fakes that mimic real people, including specific individuals.
  • Improving dubbing for movies and educational content in different languages.
  • Writing email responses, dating profiles, resumes and term papers.
  • Creating photorealistic art in a particular style.
  • Improving product demonstration videos.
  • Suggesting new drug compounds to test.
  • Designing physical products and buildings.
  • Optimizing new chip designs.
  • Writing music in a specific style or tone.

What are the benefits of generative AI?

Generative AI can be applied extensively across nearly every area of a business. It can make it easier to interpret and understand existing content and to automatically create new content. Developers are exploring ways that generative AI can improve existing workflows, with an eye toward adapting entire workflows to take advantage of the technology. Some of the potential benefits to consider when implementing generative AI include the following:

  • Automating the manual process of writing content.
  • Reducing the effort of responding to emails.
  • Improving the response to specific technical queries.
  • Creating realistic representations of people.
  • Summarizing complex information into a coherent narrative.
  • Simplifying the process of creating content in a particular style.

What are the limitations of generative AI?

Early implementations of generative AI vividly illustrate its many limitations. Some of these limitations result from the specific approaches used to implement particular use cases. For example, a summary of a complex topic is easier to read than an explanation that includes various sources supporting key points. The readability of the summary, however, comes at the expense of a user being able to vet where the information comes from.

Here are some of the limitations to consider when implementing or using generative AI apps:

  • It does not always identify the source of content.
  • It can be challenging to assess the bias of original sources.
  • Realistic-sounding content makes it harder to identify inaccurate information.
  • It can be difficult to understand how to tune for new circumstances.
  • Results can gloss over bias, prejudice and hatred.

Attention is all you need: Transformers bring new capability

In 2017, Google reported on a new type of neural network architecture that brought significant improvements in efficiency and accuracy to tasks like natural language processing. The breakthrough approach, called transformers, was based on the concept of "attention."

At a high level, attention refers to the mathematical description of how things (e.g., words) relate to, complement and modify each other. The researchers described the architecture in their seminal paper, "Attention is all you need," showing how a transformer neural network was able to translate between English and French with more accuracy than other neural nets, and in only a quarter of the training time. The breakthrough technique could also discover relationships, or hidden orders, between other things buried in the data that humans might have been unaware of because they were too complicated to express or discern.
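The attention computation itself is compact enough to sketch from scratch. The following is a bare-bones, plain-Python version of the scaled dot-product attention at the heart of the transformer; real implementations use batched tensor math, multiple attention heads and learned projections, none of which appear here.

```python
import math

# A minimal sketch of scaled dot-product attention. Vectors are plain
# Python lists; production systems use GPU tensor libraries instead.

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    outputs = []
    for q in queries:
        # How strongly does this query relate to each key?
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the values according to the attention weights.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

q = [[1.0, 0.0]]                       # one query vector
k = [[1.0, 0.0], [0.0, 1.0]]           # two keys
v = [[1.0, 2.0], [3.0, 4.0]]           # two values
result = attention(q, k, v)
```

Because the query lines up with the first key, the output leans toward the first value vector -- that weighted blending, applied across every word in a passage, is what lets a transformer track long-range connections.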

Transformer architecture has evolved rapidly since it was introduced, giving rise to large language models such as GPT-3 and better pre-training techniques, such as Google's BERT.

What are the concerns surrounding generative AI?

The rise of generative AI is also fueling various concerns. These relate to the quality of results, potential for misuse and abuse, and the potential to disrupt existing business models. Here are some of the specific types of concerns being raised today:

  • It can provide inaccurate and misleading information.
  • It is more difficult to trust without knowing the source and provenance of information.
  • It can promote new kinds of plagiarism that ignore the rights of content creators and artists of original content.
  • It might disrupt existing business models built around search engine optimization and advertising.
  • It makes it easier to generate fake news.
  • It makes it easier to claim that real photographic evidence of a wrongdoing was just an AI-generated fake.
  • It could impersonate people for more effective social engineering cyber attacks.

What are some examples of generative AI tools?

Generative AI tools exist for various modalities, such as text, imagery, music, code and voices. Some popular AI content generators to explore include the following:

  • Text generation tools include GPT, Jasper, AI-Writer and Lex.
  • Image generation tools include Dall-E 2, Midjourney and Stable Diffusion.
  • Music generation tools include Amper, Dadabots and MuseNet.
  • Code generation tools include CodeStarter, Codex, GitHub Copilot and Tabnine.
  • Voice synthesis tools include Descript and Listnr.
  • AI chip design tool companies include Synopsys, Cadence, Google and Nvidia.
Chart comparing GPT-3 to other language models.
At 175 billion parameters, GPT-3 far outweighs its predecessors and other language models.

Use cases for generative AI, by industry

New generative AI technologies have sometimes been described as general-purpose technologies akin to steam power, electricity and computing because they can profoundly affect many industries and use cases. It's essential to keep in mind that with previous general-purpose technologies, it often took decades for people to find the best way to organize workflows to take advantage of the new approach, rather than simply speeding up small portions of existing workflows. Here are some ways generative AI applications could impact different industries:

  • Finance can watch transactions in the context of an individual's history to build better fraud detection systems.
  • Legal firms use generative AI to design and interpret contracts, analyze evidence and suggest arguments.
  • Manufacturers use generative AI to combine data from cameras, X-ray and other metrics to identify defective parts and the root causes more accurately and economically.
  • Film and media companies use generative AI to produce content more economically and translate it into other languages with the actors' own voices.
  • The medical industry uses generative AI to identify promising drug candidates more efficiently.
  • Architectural firms use generative AI to design and adapt prototypes more quickly.
  • Gaming companies use generative AI to design game content and levels.

GPT joins the pantheon of general-purpose technologies

OpenAI, an AI research and deployment company, took the core ideas behind transformers to train its version, dubbed Generative Pre-trained Transformer, or GPT. Observers have noted that GPT is the same acronym used to describe general-purpose technologies such as the steam engine, electricity and computing. Most would agree that GPT and other transformer implementations are already living up to their name as researchers discover ways to apply them to industry, science, commerce, construction and medicine.

Ethics and bias in generative AI

Despite their promise, the new generative AI tools open a can of worms regarding accuracy, trustworthiness, bias, hallucination and plagiarism -- issues that likely will take years to sort out. None of the issues are particularly new to AI. Microsoft's first foray into chatbots in 2016, called Tay, for example, had to be turned off after it started spewing inflammatory rhetoric on Twitter.

What is new is that the latest crop of generative AI apps sounds more coherent on the surface. Modern generative AI apps like ChatGPT could easily pass the Turing Test. One Google engineer was even fired after publicly declaring that the company's generative AI app, Language Model for Dialogue Applications (LaMDA), was sentient.

The convincing realism of generative AI content makes it harder to detect when things are wrong. This can be a big problem when we rely on generative AI results to write code or provide medical advice. Many results of generative AI are not transparent, so it is hard to determine if, for example, they infringe on copyrights or if there is a problem with the original sources from which they draw results. If you don't know how the AI came to a conclusion, you cannot reason about why it might be wrong.

Example of a Turing Test to distinguish human from machine.
Generative AI apps like ChatGPT could easily pass the Turing Test.

Generative AI vs. AI

Generative AI produces new content, chat responses, designs, synthetic data or deep fakes. Traditional AI has focused on detecting patterns, making decisions, honing analytics, classifying data and detecting fraud.

Generative AI, as noted above, often uses neural network techniques such as transformers, GANs and VAEs. Other kinds of AI, in contrast, use techniques including convolutional neural networks, recurrent neural networks and reinforcement learning.

Generative AI often starts with a prompt that lets a user or data source submit a starting query or data set to guide content generation. This can be an iterative process to explore content variations. Traditional AI algorithms process new data to return a simple result.

Generative AI history

The Eliza chatbot created by Joseph Weizenbaum in the 1960s was one of the earliest examples of generative AI. These early implementations used a rules-based approach that broke easily due to a limited vocabulary, lack of context and overreliance on patterns, among other shortcomings. Early chatbots were also difficult to customize and extend.

Diagram of a GAN training method.
Generative adversarial network (GAN) training can generate realistic human faces, synthetic data or facsimiles of humans.

The field saw a resurgence in the wake of advances in neural networks and deep learning around 2010 that enabled the technology to automatically learn to parse existing text, classify image elements and transcribe audio.

Ian Goodfellow introduced GANs in 2014. This provided a novel approach for organizing competing neural networks to generate and then rate content variations. These could generate realistic people, voices, music and text. This inspired interest in -- and fear of -- how generative AI could be used to create realistic deep fakes that impersonate voices and people in videos.
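The adversarial idea behind GANs -- two networks trained in competition, one generating and one judging -- can be demonstrated at toy scale. The sketch below learns a single scalar distribution using hand-derived gradients; real GANs use deep networks and autodiff frameworks, and every name and hyperparameter here is an illustrative assumption rather than anything from Goodfellow's implementation.

```python
import math
import random

# Toy GAN on a 1D distribution. The "generator" learns one parameter
# (the mean of its output); the "discriminator" is a logistic classifier
# sigmoid(w*x + c). Gradients are derived by hand for this tiny case.

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

theta = 0.0        # generator parameter: mean of its samples
w, c = 0.0, 0.0    # discriminator parameters
lr = 0.01

for step in range(2000):
    real = random.gauss(4.0, 0.5)          # sample of "real" data
    fake = theta + random.gauss(0.0, 0.5)  # generator's sample

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - s_real) * real - s_fake * fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator ascent (non-saturating loss): make D rate fakes as real.
    fake = theta + random.gauss(0.0, 0.5)
    s_fake = sigmoid(w * fake + c)
    theta += lr * (1 - s_fake) * w

# After alternating updates, the generator's mean drifts toward the
# real data's mean of 4.0.
```

The competition is the point: each improvement in the discriminator's judging creates a gradient that pushes the generator's output closer to the real data, which is how GANs came to produce realistic faces, voices and text.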

Since then, progress in other neural network techniques and architectures has helped expand generative AI capabilities. Techniques include VAEs, long short-term memory, transformers, diffusion models and neural radiance fields.

Best practices for using generative AI

The best practices for using generative AI will vary depending on the modalities, workflow and desired goals. That said, it is important to consider essential factors such as accuracy, transparency and ease of use in working with generative AI. The following practices help achieve these factors:

  • Clearly label all generative AI content for users and consumers.
  • Vet the accuracy of generated content using primary sources where applicable.
  • Consider how bias might get woven into generated AI results.
  • Double-check the quality of AI-generated code and content using other tools.
  • Learn the strengths and limitations of each generative AI tool.
  • Familiarize yourself with common failure modes in results and work around these.

The future of generative AI

The incredible depth and ease of use of ChatGPT have shown tremendous promise for the widespread adoption of generative AI. To be sure, it has also demonstrated some of the difficulties in rolling out this technology safely and responsibly. But these early implementation issues have inspired research into better tools for detecting AI-generated text, images and video. Industry and society will also build better tools for tracking the provenance of information to create more trustworthy AI.

Furthermore, improvements in AI development platforms will help accelerate research and development of better generative AI capabilities for text, images, video, 3D content, drugs, supply chains, logistics and business processes. As good as these new one-off tools are, the most significant impact of generative AI will come from embedding these capabilities directly into versions of the tools we already use.

Grammar checkers are going to get better. Design tools will seamlessly embed more useful recommendations directly into workflows. Training tools will be able to automatically identify best practices in one part of the organization to help train others more efficiently. And these are just a fraction of the ways generative AI will change how we work.

Generative AI FAQs

Below are some frequently asked questions people have about generative AI.

Who created generative AI?

Joseph Weizenbaum created the first generative AI in the 1960s as part of the Eliza chatbot.

Ian Goodfellow demonstrated generative adversarial networks for generating realistic-looking and -sounding people in 2014.

Subsequent research into large language models from OpenAI and Google ignited the recent enthusiasm that has evolved into tools like ChatGPT, Google Bard and Dall-E.

How could generative AI replace jobs?

Generative AI has the potential to replace a variety of jobs, including the following:

  • Writing product descriptions.
  • Creating marketing copy.
  • Generating basic web content.
  • Initiating interactive sales outreach.
  • Answering customer questions.
  • Making graphics for webpages.

Some companies will look for opportunities to replace humans where possible, while others will use generative AI to augment and enhance their existing workforce.

How do you build a generative AI model?

A generative AI model starts by efficiently encoding a representation of what you want to generate. For example, a generative AI model for text might begin by finding a way to represent the words as vectors that characterize the similarity between words often used in the same sentence or that mean similar things.

Recent progress in large language model research has helped the industry implement the same process to represent patterns found in images, sounds, proteins, DNA, drugs and 3D designs. The resulting generative AI model provides an efficient way of representing the desired type of content and iterating on useful variations.
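The notion of vectors that "characterize the similarity between words" can be shown with a toy example. The tiny hand-made vectors and dimension labels below are purely illustrative assumptions; real embeddings are learned from data and have hundreds or thousands of dimensions.

```python
import math

# Toy word vectors: similar words end up pointing in similar directions,
# so cosine similarity (the angle between vectors) measures relatedness.

embeddings = {
    # made-up dimensions: [royalty, gender, animal]
    "king":  [0.9, 0.8, 0.0],
    "queen": [0.9, 0.1, 0.0],
    "cat":   [0.0, 0.4, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

king_queen = cosine(embeddings["king"], embeddings["queen"])
king_cat = cosine(embeddings["king"], embeddings["cat"])
```

Here "king" and "queen" score higher than "king" and "cat", which is exactly the property a generative model's representation needs before it can produce sensible new content.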

How do you train a generative AI model?

The generative AI model needs to be trained for a particular use case. The recent progress in large language models provides an ideal starting point for customizing applications for different use cases. For example, the popular GPT model developed by OpenAI has been used to write text, generate code and create imagery based on written descriptions.

Training involves tuning the model's parameters for different use cases and then fine-tuning results on a given set of training data. For example, a call center might train a chatbot against the kinds of questions service agents get from various customer types and the responses that service agents give in return. An image-generating app, in contrast, might start with labels that describe the content and style of images to train the model to generate new images.
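The call-center example above ultimately comes down to assembling question-and-response pairs as training data. The sketch below serializes such pairs as JSON lines; the field names ("prompt", "completion") and the example text are assumptions for illustration, not any vendor's required fine-tuning schema.

```python
import json

# Assembling a tiny fine-tuning data set from call-center Q&A pairs.
# Field names and format are illustrative assumptions only.

qa_pairs = [
    ("How do I reset my password?",
     "Click 'Forgot password' on the login page and follow the email link."),
    ("Can I change my billing date?",
     "Yes -- go to Account > Billing and pick a new date."),
]

# One JSON object per line: a common shape for fine-tuning corpora.
training_lines = [
    json.dumps({"prompt": question, "completion": answer})
    for question, answer in qa_pairs
]

for line in training_lines:
    print(line)
```

In practice such a file would contain thousands of real agent transcripts, but the structure -- input paired with the desired output -- is the essence of tuning a model for a specific use case.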

How is generative AI changing creative work?

Generative AI promises to help creative workers explore variations of ideas. Artists might start with a basic design concept and then explore variations. Industrial designers could explore product variations. Architects could explore different building layouts and visualize them as a starting point for further refinement.

It could also help democratize some aspects of creative work. For example, business users could explore product marketing imagery using text descriptions. They could further refine these results using simple commands or suggestions.

What's next for generative AI?

ChatGPT's ability to generate humanlike text has sparked widespread curiosity about generative AI's potential. It also shined a light on the many problems and challenges ahead.

In the short term, work will focus on improving the user experience and workflows using generative AI tools. It will also be essential to build trust in generative AI results.

Many companies will also customize generative AI on their own data to help improve branding and communication. Programming teams will use generative AI to enforce company-specific best practices for writing and formatting more readable and consistent code.

Vendors will integrate generative AI capabilities into more of their tools to streamline content generation workflows. This will drive innovation in how these new capabilities can increase productivity.

Generative AI could also play a role in various aspects of data processing, transformation, labeling and vetting as part of augmented analytics workflows. Semantic web applications could use generative AI to automatically map internal taxonomies describing job skills to different taxonomies on skills training and recruitment sites. Similarly, business teams will use these models to transform and label third-party data for more sophisticated risk assessments and opportunity analysis capabilities.

In the future, generative AI models will be extended to support 3D modeling, product design, drug development, digital twins, supply chains and business processes. This will make it easier to generate new product ideas, experiment with different organizational models and explore various business ideas.

What are some generative models for natural language processing?

Some generative models for natural language processing include the following:

  • Carnegie Mellon University's XLNet
  • OpenAI's GPT (Generative Pre-trained Transformer)
  • Google's ALBERT ("A Lite" BERT)
  • Google BERT
  • Google LaMDA  

Will AI ever gain consciousness?

Some AI proponents believe that generative AI is an essential step toward general-purpose AI and even consciousness. One early tester of Google's LaMDA chatbot even created a stir when he publicly declared it was sentient; he was later let go.

In 1993, the American science fiction writer and computer scientist Vernor Vinge posited that in 30 years, we would have the technological ability to create a "superhuman intelligence" -- an AI that is more intelligent than humans -- after which the human era would end. AI pioneer Ray Kurzweil predicted such a "singularity" by 2045.

Many other AI experts think it could be much further off. Robot pioneer Rodney Brooks predicted that AI will not gain the sentience of a 6-year-old in his lifetime but could seem as intelligent and attentive as a dog by 2048.

This was last updated in March 2023
