Context engineering takes prompting to a higher business level

In the care and feeding of AI models and chatbot interfaces, prompting alone is a fool's errand: without the proper context to interpret those prompts, models can't deliver on strategic business plans.

When deriving value from AI models, the greatest challenge businesses encounter isn't the models they choose, how they host them or which security features they include. It's more about what the models are asked to do and what information they need to do it with. A good model that receives poor prompts or lacks the context to properly interpret prompts will struggle to perform adequately and fail to provide the correct information.

That's where context engineering comes in. Context engineering is the practice of optimizing the information that AI models receive in the format and structure they need to interpret prompts accurately and respond effectively. Small improvements to AI models using context engineering can pay dividends in the reliability, accuracy, security and cost of AI models and their chatbot interfaces. But it takes a broad group of stakeholders, from machine learning developers to end users, and requires the support and direction of business leaders.

Context engineering mainly consists of the following three components:

  1. Training data. The information machine learning developers use to train a model must accurately reflect the context in which the model needs to work. A model that interprets financial reports, for example, will need to be trained partly on financial reporting data.
  2. Augmented data sources. After training the model, businesses can use techniques like retrieval-augmented generation (RAG) to give the model access to additional data and expand the context in which it can work. A company's model trained on generic financial data, for instance, could access a specific financial report through RAG, as the sketch following this list shows.
  3. Prompting. Users interact with models by sending them prompts. Each prompt should include the information necessary for a model to understand the context of the user's request.
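
To make the second and third components more concrete, here's a minimal Python sketch of the RAG pattern. The documents, keyword-overlap scoring and prompt wording are illustrative assumptions rather than any particular product's implementation; production systems typically use embedding-based search over a vector store instead of keyword matching.

```python
# Minimal RAG sketch: retrieve the most relevant snippet from a small
# in-memory "knowledge base" and fold it into the prompt sent to the model.
# The documents, scoring method and prompt wording are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]

def build_augmented_prompt(query: str, documents: list[str]) -> str:
    """Combine the retrieved context with the user's question."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Use the context below to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

knowledge_base = [
    "Q3 revenue rose 12% year over year, driven by subscription growth.",
    "The refund policy allows returns within 30 days of purchase.",
]

prompt = build_augmented_prompt("How did revenue change in Q3?", knowledge_base)
print(prompt)  # This augmented prompt is what actually gets sent to the model.
```

The retrieved snippet rides along with the user's question, supplying the extra context the pretrained model couldn't provide on its own.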

Beyond supplying models with the right data, context engineering also provides them with high-quality data. Although models typically tolerate minor data imperfections, such as typos in a prompt, they perform best when the information they access is complete and error-free.

Why context engineering matters

When generative AI first appeared in enterprise environments in late 2022, prompt engineering became something of a buzzword. The term refers to the practice of carefully curating prompts to ensure they contain accurate instructions and context for models. While prompt engineering focuses on managing data within prompts, context engineering is more holistic, providing models with the right information across all stages of the AI development, training and inference lifecycle.

Most out-of-the-box AI models that power enterprise applications are general-purpose and unable to understand the context of user requests. They're trained on vast quantities of data, but they aren't designed for any specific use case. As a result, a model might struggle to know what a user wants it to do, or how to find the information to do it, because it doesn't inherently understand the context of a request.

Lack of contextual awareness can lead to the following types of problems with AI models:

  • Hallucinations. When models can't effectively work within the context that a user requests, they're prone to generate false or irrelevant information.
  • Security risks. If a model can't identify contextual information, such as a user's identity or role, it could inadvertently share sensitive data.
  • Higher costs. Unclear model instructions or incomplete data sources might force models to work harder when processing requests, leading to higher model hosting or token costs.

Where context engineering provides business value

There are several scenarios where context engineering can be used to improve security, customer experience and business operations, as the following examples show.

Fraud detection

If a system uses an AI model to detect fraudulent purchases without sufficient data on purchasing trends, the model might not distinguish between fraud and normal operations. It might flag high-value transactions as fraudulent simply because it was trained solely on data correlating higher-value purchases with higher rates of fraud.

Using context engineering practices, the model can be fed a range of data on valid and invalid purchasing trends at the company. For example, RAG could be used to grant a pretrained model access to more relevant transaction logs, improving the model's accuracy and value.
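
As a rough illustration of that idea, the following Python sketch assembles a fraud-review prompt from a customer's recent purchase history. The amounts and wording are assumptions; in a real system the history would be retrieved from transaction logs at the moment each purchase is evaluated.

```python
# Sketch of giving a fraud-review model purchasing context. The history
# values and prompt wording are illustrative assumptions; in practice the
# history would be pulled from transaction logs via RAG at request time.

from statistics import mean

customer_history = [42.10, 55.00, 61.25, 48.90, 730.00]  # recent purchase amounts
new_purchase = 699.99

context = (
    f"Customer's last {len(customer_history)} purchases: {customer_history}\n"
    f"Average purchase amount: {mean(customer_history):.2f}\n"
    f"Largest previous purchase: {max(customer_history):.2f}"
)

prompt = (
    "You are reviewing a transaction for possible fraud.\n"
    f"{context}\n"
    f"New purchase amount: {new_purchase}\n"
    "Is this purchase consistent with the customer's past behavior? Explain briefly."
)
print(prompt)  # Sent to the model alongside the company's fraud policies.
```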

More efficient customer interactions

When customers log into an online account and open a chat window, they assume the chatbot is connected to their account. However, accounts are typically managed through a system separate from the AI model powering the chatbot, so the model can't inherently identify customers. A customer, for example, could ask questions like "summarize my purchasing history" or "track the delivery status of the order I placed last Tuesday," and the model might provide information associated with a different customer.

Context engineering practices ensure customer account systems integrate with the chatbot. Whenever a new customer logs in, the AI model receives a silent prompt invisible to the customer. The silent prompt could say "You're talking to customer John Smith, whose account number is 123456." With this information, the model would understand the context necessary to accurately address that customer's requests.
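
A minimal sketch of that flow, using the common role-based chat message format, might look like the following Python. The customer details and the wording of the silent prompt are illustrative.

```python
# Sketch of a "silent prompt": the application injects the logged-in
# customer's identity as a system message the customer never sees.
# The role/content message format mirrors common chat APIs, but the
# names, account number and wording here are illustrative assumptions.

def build_messages(customer_name: str, account_number: str, user_prompt: str) -> list[dict]:
    """Prepend identity context so the model can resolve 'my order', 'my history', etc."""
    silent_prompt = (
        f"You're talking to customer {customer_name}, "
        f"whose account number is {account_number}. "
        "Only reference data belonging to this account."
    )
    return [
        {"role": "system", "content": silent_prompt},  # invisible to the customer
        {"role": "user", "content": user_prompt},      # what the customer actually typed
    ]

messages = build_messages("John Smith", "123456", "Summarize my purchasing history.")
print(messages)  # Passed to whichever chat model powers the customer-facing assistant.
```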

Improved business processes

When company employees prompt a chatbot internally to locate information or ask questions about a process, the AI model powering the chatbot might lack the information necessary to identify the employee and their department. The chatbot might provide an answer to an HR department prompt that's more relevant to the accounting department.

Context engineering techniques can ensure the AI model has access to the data necessary to identify and address prompts from individual employees and their departments.
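
One simplified way to sketch that, assuming a hypothetical employee directory and department-tagged documents, is to filter the retrieved context by the asker's department before the prompt reaches the model:

```python
# Sketch of scoping an internal chatbot's context by department. The
# employee directory and department-tagged documents are hypothetical;
# the point is that retrieval is filtered by who is asking.

employee_directory = {
    "asmith": {"name": "Alice Smith", "department": "HR"},
    "bjones": {"name": "Bob Jones", "department": "Accounting"},
}

documents = [
    {"department": "HR", "text": "Parental leave requests are submitted through the HR portal."},
    {"department": "Accounting", "text": "Expense reports are due by the 5th of each month."},
]

def build_context(username: str, question: str) -> str:
    """Filter documents to the asker's department before prompting the model."""
    employee = employee_directory[username]
    relevant = [d["text"] for d in documents if d["department"] == employee["department"]]
    return (
        f"Employee: {employee['name']} ({employee['department']} department)\n"
        f"Relevant internal guidance: {relevant}\n"
        f"Question: {question}"
    )

print(build_context("asmith", "How do I request leave?"))
```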


Context engineering stakeholders

The following three core groups of stakeholders contribute to the context engineering process:

  1. Third-party vendors or developers who build the general-purpose AI models that businesses buy and modify rather than building their own.
  2. A company's internal AI and data engineering teams that are responsible for connecting pretrained models to additional data sources using techniques like RAG.
  3. Users with the ability to write specific, well-constructed prompts.

To drive context engineering initiatives, businesses should turn to their engineering teams. Engineers ensure third-party models have access to the data sources necessary to customize a model to an organization's needs. They can also implement tools that mediate end-user interactions with models and, where necessary, supply key contextual information that users don't include in their prompts. Those tools can also modify prompts to correct issues like misspellings or grammatical mistakes, which could otherwise degrade the responses users receive from AI-powered systems.
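
A simplified sketch of such a mediation layer, with a hypothetical typo map and role label, might look like this; real deployments would use proper spell-checking and an identity provider.

```python
# Sketch of a mediation layer that sits between end users and the model:
# it normalizes obvious prompt issues and appends contextual details the
# user didn't supply. The typo map and role label are illustrative assumptions.

KNOWN_TYPOS = {"finacial": "financial", "reciept": "receipt"}

def mediate_prompt(raw_prompt: str, user_role: str) -> str:
    """Clean the prompt and attach the requester's role before it reaches the model."""
    cleaned = " ".join(KNOWN_TYPOS.get(word.lower(), word) for word in raw_prompt.split())
    return f"[Requester role: {user_role}]\n{cleaned}"

print(mediate_prompt("Summarize the latest finacial report", "HR analyst"))
```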

Chris Tozzi is a freelance writer, research adviser, and professor of IT and society. He has previously worked as a journalist and Linux systems administrator.
