Definition

generative modeling

What is generative modeling?

Generative modeling is the use of artificial intelligence (AI), statistics and probability to produce a representation or abstraction of observed phenomena, from which target variables can be calculated and new, similar observations can be generated.

Generative modeling is used in unsupervised machine learning to describe phenomena in data, enabling computers to build an understanding of the real world. That learned understanding can then be used to estimate probabilities and make predictions about the subject the model describes.

Generative models are a class of statistical models that generate new data instances.

How generative modeling works

Generative models are generally run on neural networks. To create a generative model, a large data set is typically required. The model is trained by feeding it various examples from the data set and adjusting its parameters to better match the distribution of the data.

Once the model is trained, it can be used to generate new data by sampling from the learned distribution. The generated data resembles the original data set but includes some variation or noise. For example, a data set containing images of horses could be used to build a model that generates a new image of a horse that has never existed but still looks realistic. This is possible because the model has learned the general rules that govern the appearance of a horse.
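To make this train-then-sample workflow concrete, the following minimal Python sketch fits a Gaussian mixture model from scikit-learn and draws new points from it; the two-cluster toy data set is a hypothetical stand-in for real observations.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy "observed" data: two clusters standing in for a real data set.
    X = np.vstack([rng.normal(0.0, 1.0, size=(250, 2)),
                   rng.normal(5.0, 1.0, size=(250, 2))])

    model = GaussianMixture(n_components=2).fit(X)  # learn the data distribution
    new_points, _ = model.sample(20)                # generate novel, similar data

The generated points are not copies of any training example; they are fresh draws from the distribution the model has learned.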

Generative models can also be used in unsupervised learning to discover underlying patterns and structure in unlabeled data as well as many other applications, such as image generation, speech generation and data augmentation.

Types of generative models

The following are three prominent types of generative models:

Generative adversarial network (GAN). This model is based on machine learning and deep neural networks. In a GAN, two neural networks -- a generator and a discriminator -- compete against each other, pushing the generator toward more realistic data and the discriminator toward more accurate predictions.

A GAN is an unsupervised learning technique that makes it possible to automatically find and learn patterns in input data. One of its main uses is image-to-image translation, such as turning daylight photos into nighttime photos. GANs are also used to create lifelike renderings of a variety of objects, people and scenes that are challenging even for humans to identify as fake.
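As a rough illustration of the adversarial setup, this PyTorch sketch trains a tiny generator and discriminator on one-dimensional toy data; the network sizes, learning rates and data are illustrative assumptions, not a production GAN.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    real = torch.randn(64, 1) * 0.5 + 3.0  # stand-in for real training data

    for step in range(1000):
        # Train the discriminator to separate real samples from generated ones.
        fake = G(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
                  loss_fn(D(fake), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator to fool the discriminator into labeling fakes real.
        fake = G(torch.randn(64, 8))
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()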

Variational autoencoders (VAEs). Like GANs, VAEs are generative models built on neural network autoencoders, which are composed of two separate neural networks -- an encoder and a decoder. They're among the most efficient and practical methods for developing generative models.

[Figure: The GAN training process.]

A VAE is a probabilistic graphical model grounded in Bayesian inference: it seeks to learn the underlying probability distribution of the training data so that it can readily sample new data from that distribution. The encoder compresses the input into a compact latent representation, while the decoder reconstructs the original data from that representation. Popular applications of VAEs include anomaly detection for predictive maintenance, signal processing and security analytics.
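The encoder-decoder split can be sketched in a few lines of PyTorch; the layer sizes and the mean-squared-error reconstruction term are simplifying assumptions.

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16):
            super().__init__()
            # Encoder maps data to the mean and log-variance of a latent Gaussian.
            self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * z_dim))
            # Decoder reconstructs the data from a latent sample.
            self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
            return self.dec(z), mu, logvar

    def elbo_loss(x, x_hat, mu, logvar):
        recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl  # reconstruction error plus KL divergence to the prior

After training, generating new data only requires sampling z from the prior and passing it through the decoder.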

Autoregressive models. Autoregressive models predict the future values of a sequence from its past values -- classically, as a linear combination of them -- and can handle a variety of time-series patterns.

Autoregressive models are widely used in forecasting and time series analysis, such as stock prices and index values. Other use cases include modeling and forecasting weather patterns, forecasting demand for products using past sales data and studying health outcomes and crime rates.
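A minimal forecasting example using statsmodels shows the idea; the random-walk series is a hypothetical stand-in for, say, daily sales figures or stock prices.

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(size=200))  # toy time series (random walk)

    # Model each value as a linear combination of its five previous values,
    # then forecast ten steps beyond the observed data.
    result = AutoReg(series, lags=5).fit()
    forecast = result.predict(start=len(series), end=len(series) + 9)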

Generative modeling vs. discriminative modeling

Machine learning models are typically classified into discriminative models and generative models.

Generative modeling contrasts with discriminative modeling, which identifies existing data and can be used to classify it. Generative modeling learns the distribution of the data and produces new examples, whereas discriminative modeling captures the conditional probability of a label given an input, recognizing tags and sorting data. A generative model can be enhanced by a discriminative model and vice versa: the generative model tries to fool the discriminative model into believing its generated images are real, and through successive rounds of training, both become more sophisticated at their tasks.

The following is a brief rundown of major differences between the two models:

  • Generative models are typically used in unsupervised machine learning problems, whereas discriminative models are used for supervised learning.
  • When given an input, discriminative models estimate the likelihood of a particular class label. In contrast, generative models produce fresh data samples that are similar to the training data. Simply put, discriminative models concentrate on label prediction, whereas generative models concentrate on modeling the distribution of data points in a data set.
  • Generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks, but they can be more computationally expensive and could require more data to prevent overfitting. On the other hand, discriminative models are simpler and easier to train, and they typically outperform generative models when there's a distinct boundary between classes.
  • Compared to discriminative models, generative models might be less accurate even though they can train on less data, because the stronger assumptions they make introduce more bias. Lower accuracy also stems from the fact that generative models must learn the full distribution of the data, whereas discriminative models only need to learn the relationship between inputs and outputs. The sketch after this list makes the contrast concrete.

Challenges of generative modeling

Generative models provide several advantages, but they also have certain drawbacks. The following are a few challenges of generative modeling:

  • Computational requirements. Generative AI systems often need a large amount of data and computational power. Some organizations might find this to be prohibitively expensive and time-consuming.
  • Quality of generated outputs. Generated outputs from generative models might not always be accurate or free of errors. This could be caused by a number of things, including a shortage of data, inadequate training or an overly complicated model.
  • Lack of interpretability. It might be challenging to comprehend how predictions are being made by generative AI models, as these models can be opaque and complicated. Ensuring the model is making impartial and fair decisions can be challenging at times.
  • Overfitting. Overfitting can occur in generative models, resulting in poor generalization performance and incorrectly generated samples. Overfitting happens when a model is unable to generalize and instead fits too closely to the training data set. This can happen due to a variety of reasons, including the training data set being too small and lacking enough data samples to adequately represent all potential input data values.
  • Security. Generative AI systems can be used to disseminate false information or propaganda by generating realistic and convincing fake videos, images and text.

Deep generative modeling

A subset of generative modeling, deep generative modeling uses deep neural networks to learn the underlying distribution of data. These models can produce novel samples that resemble the input data without duplicating it. Deep generative models come in many forms, including VAEs, GANs and autoregressive models. They have shown promise in a wide range of applications, including text-to-image synthesis, music generation and drug discovery.

However, deep generative modeling remains an active area of research with many open challenges, such as evaluating the quality of generated samples and preventing mode collapse, which occurs when the generator produces only a narrow set of similar or identical samples, covering just a few modes of the data distribution.

Large-scale deep generative models are increasingly popular. For example, BigGAN and VQ-VAE are used to generate images and can have hundreds of millions of parameters. Jukebox, a large generative model for musical audio, has billions of parameters, and OpenAI's third-generation Generative Pre-trained Transformer (GPT-3), an autoregressive neural language model, has 175 billion. The recently released GPT-4 outshines previous versions of GPT in dependability, originality and the capacity to comprehend complex instructions. It can process up to 32,000 tokens, as opposed to the 4,096 tokens processed by GPT-3.5, enabling it to handle far longer and more complex prompts.

Generative modeling history and timeline

Generative models have been a mainstay of AI since the 1950s. Early models of the era, such as hidden Markov models and Gaussian mixture models, could generate only simple data. The field has experienced a significant rise in popularity in recent years, however, thanks to the development of powerful generative models such as GANs and VAEs.

[Figure: The evolution and timeline of generative AI.]

GANs, first proposed by Ian Goodfellow in 2014, are a type of generative model that uses a two-part architecture consisting of a generator and a discriminator. The generator creates new data, while the discriminator tries to distinguish between the generated data and real data. The generator learns to improve its output by attempting to fool the discriminator.

In 2017, the transformer, a deep learning architecture that underpins large language models, including GPT-3, Google LaMDA and DeepMind Gopher, was introduced. The transformer can generate text, computer code and even protein structures.

In 2021, OpenAI introduced Contrastive Language-Image Pre-training (CLIP), a technique that text-to-image generators now rely on heavily. By training on image-caption pairs gathered from the internet, CLIP is particularly successful at learning shared embeddings between images and text.
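As a sketch of how those shared embeddings are used in practice, the Hugging Face transformers wrapper around OpenAI's released CLIP weights can score how well candidate captions match an image; the image path below is a hypothetical placeholder.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg")  # placeholder path, not a real file
    inputs = processor(text=["a photo of a horse", "a photo of a car"],
                       images=image, return_tensors="pt", padding=True)
    # Higher probability = that caption's text embedding lies closer to the image's.
    probs = model(**inputs).logits_per_image.softmax(dim=-1)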

Recently launched generative AI services are fueling the field's rapid and unparalleled rise in popularity. Examples include Midjourney as well as OpenAI's Dall-E and ChatGPT.

These models have been applied in various fields such as computer vision, natural language processing and music generation. Generative modeling has also seen advancements in the fields of quantum machine learning and reinforcement learning. In general, the rise of generative modeling has opened up many new possibilities for AI and has the potential to transform a wide range of industries, from entertainment to healthcare.


This was last updated in March 2023
