An explanation of foundation models

In this video, TechTarget editor Sabrina Polin talks about foundation models.

The next evolution of AI starts with foundation models.

Foundation models are at the forefront of AI innovation. Why? They're highly adaptable AI models that can perform a wide variety of tasks.

You might be familiar with the term large language models, or LLMs; these are a type of foundation model.

So, let's explore foundation models, how they work and the opportunities and risks they present.

In the past, AI was trained for specific tasks, limiting its range of functions. A foundation model, on the other hand, is a versatile machine learning model trained on a vast data set. The model can then be adapted and fine-tuned for a wide variety of downstream applications and tasks, giving it exceptional generality and adaptability.

Foundation models are characterized by their scale. This is achieved through:

  • Hardware improvements, like more powerful GPUs.
  • The transformer model architecture that powers most large language models (sketched briefly after this list).
  • The availability of massive amounts of data for training.
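To make the transformer point concrete, here is a minimal sketch, in plain PyTorch, of the scaled dot-product attention step at the core of the transformer architecture. The tensor shapes and sizes are illustrative assumptions, not taken from any particular model.

```python
# A minimal sketch of scaled dot-product attention, the core operation
# of the transformer architecture (plain PyTorch; sizes are illustrative).
import math
import torch

def scaled_dot_product_attention(query, key, value):
    # Compare every token's query against every token's key...
    scores = query @ key.transpose(-2, -1) / math.sqrt(query.size(-1))
    # ...turn the scores into attention weights...
    weights = torch.softmax(scores, dim=-1)
    # ...and mix the value vectors accordingly.
    return weights @ value

# One "sentence" of 8 tokens, each represented by a 64-dimensional vector.
tokens = torch.randn(1, 8, 64)
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # torch.Size([1, 8, 64])
```

Stacking many of these attention layers, and training them on massive data sets with powerful GPUs, is what gives foundation models their scale.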

Foundation models use a combination of traditional training methods and transfer learning, where knowledge learned from one task is applied to another.
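As a concrete illustration of transfer learning, here is a minimal fine-tuning sketch that assumes the Hugging Face transformers library and PyTorch. The checkpoint name, toy examples and two-label task are illustrative assumptions, not part of the original explanation.

```python
# A minimal transfer-learning sketch: reuse a pretrained model and
# fine-tune only a small task head on new, task-specific data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # example pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Freeze the pretrained encoder so only the new classification head is trained,
# reusing the knowledge learned during large-scale pretraining.
for param in model.distilbert.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-5
)

# Toy task-specific examples (purely illustrative).
texts = ["the contract is void", "great product, works well"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
outputs = model(**batch, labels=labels)  # forward pass computes the loss
outputs.loss.backward()
optimizer.step()
```

Freezing the pretrained encoder is only one option; full fine-tuning or parameter-efficient methods such as adapters are common alternatives.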

The applications of foundation models are diverse -- take GPT-3 and GPT-4, for instance. These models have laid the groundwork for various applications, including ChatGPT.
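For a sense of how applications build on such models, here is a hedged sketch of a single API call, assuming the openai Python SDK (v1 or later) and an API key set in the environment; the model name, prompt and task are illustrative only.

```python
# A brief sketch of building on a GPT-family foundation model via an API
# (assumes the "openai" Python SDK and OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The same general-purpose model can be steered to many tasks via prompting.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You summarize legal clauses in plain English."},
        {"role": "user", "content": "Summarize: 'The lessee shall indemnify...'"},
    ],
)
print(response.choices[0].message.content)
```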

There are opportunities and risks that come with these models. On the one hand, foundation models open doors to innovative possibilities in industries like healthcare, law and education. They could accelerate drug discovery, streamline legal document generation and enhance problem-solving in education. A recent study even suggested that about 15% of all worker tasks in the U.S. could be completed more efficiently using a foundation model.

However, foundation models also pose risks: they can perpetuate biases present in their training data, become targets for cyberattackers and carry a large environmental footprint due to the computation their training requires.

Foundation models are poised to revolutionize how AI is applied in the enterprise, so it's crucial to harness their potential while addressing their inherent risks.

What do you think? Is the potential of foundation models worth the risks? Share your thoughts in the comments below, and remember to like and subscribe, too.

Kaitlin Herbert is a content writer and former managing editor for the Learning Content team. She writes definitions and features.
