
Agentic AI speeds curriculum drafting at General Assembly

Education and training company General Assembly built an agentic AI system to scale curriculum development. The tool cut first-draft time by 90%.

As demand for AI and technical training accelerates, General Assembly found itself facing a common enterprise problem: growth was outpacing capacity.

The education provider sets aggressive targets for new program launches each year, driven by both consumer interest and enterprise customers. However, hiring more people to scale curriculum development proved costly, slow and impractical.

Instead, General Assembly turned to agentic AI -- not as a replacement for human talent, but as a way to remove friction from the earliest stages of content creation. Under the direction of chief technology officer Danielle Chircop, the company built a proprietary system called GAIA -- the General Assembly Intelligence Application. Built on the CrewAI framework, GAIA orchestrates multiple AI agents that mirror real roles in General Assembly's curriculum development process.

The goal was straightforward: help learning experience designers and subject matter experts move from a blank page to a solid first draft in minutes, freeing them to focus on judgment, quality and client-specific context.

The result has been a significant shift in how General Assembly develops training at scale. By pairing human expertise with agentic workflows, the company reduced first-draft development time by roughly 90%, expanded the number of programs its teams can manage simultaneously and maintained a human-in-the-loop approach to quality and consistency.

In the following Q&A, Chircop explains how the system works, what surprised her during implementation and what other organizations should consider before adopting agentic AI for content development.

Editor's Note: The following transcript was edited for length and clarity.

What problem were you trying to solve in curriculum development before you introduced the AI agents?

Danielle Chircop: At General Assembly, we create programs for consumers and enterprises. We have very lofty goals when it comes to how many programs we're going to create annually. Some of that is driven by consumer demand, some by enterprise demand. We have clients who are constantly buying programs from us, so we need to move fast.

Our learning experience designers have been a bottleneck for us, because they're only human. Constantly adding human capacity is a lot to manage from an overhead perspective, and it gets expensive.

We had to find a way to rapidly expand our work capacity while still keeping our humans doing the things that they do best. That's why we decided to implement an agentic system and workflow into our learning content creation system.

Could you tell me about the four agents and explain what they do?

Chircop: We created a proprietary agentic AI system called GAIA -- General Assembly Intelligence Application. Rather than being a single model generating content, GAIA orchestrates multiple specialized agents that are all aligned to real roles that we have within our curriculum development process, previously performed by humans.

So, for the initial workflow, we implemented four agents. We have the instructional architect, QA architect, subject matter expert and learning experience designer. Each of those agents is trained on our company's existing content, frameworks, instructional standards and ways of working. The idea is that they produce outputs that are explicitly designed for human review and refinement -- not final publication.

The goal was to move our teams from a blank page to a strong first draft within a matter of minutes, because getting started is very hard. Offloading that part to an agent has been extraordinarily helpful. We can now create first drafts in minutes instead of hours. Then, our human experts can spend their time on judgment, context and quality rather than repetitive drafting.
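The four-agent handoff Chircop describes can be pictured as a sequential pipeline that always ends at a human checkpoint. The sketch below is illustrative only, not General Assembly's code: GAIA is built on CrewAI, where each agent wraps an LLM call, whereas here each role is a plain Python function (all names and stub behavior are hypothetical) so the orchestration pattern is runnable without a model backend.

```python
# Illustrative sketch of a sequential multi-agent drafting pipeline,
# loosely modeled on the four GAIA roles named in the interview.
# NOT General Assembly's implementation: real agents would wrap LLM
# calls (e.g. via CrewAI); these stubs only show the handoff pattern.

from dataclasses import dataclass, field

@dataclass
class Draft:
    topic: str
    sections: list = field(default_factory=list)
    notes: list = field(default_factory=list)
    approved: bool = False  # set only by a human reviewer, never by an agent

def instructional_architect(draft):
    # Lays out the lesson skeleton.
    draft.sections = ["Objectives", "Lecture", "Hands-on lab", "Wrap-up"]
    return draft

def subject_matter_expert(draft):
    # Fills sections with domain content (stubbed here).
    draft.notes.append(f"SME content added for '{draft.topic}'")
    return draft

def learning_experience_designer(draft):
    # Adapts the content to the company's instructional standards.
    draft.notes.append("Aligned to instructional framework")
    return draft

def qa_architect(draft):
    # Flags issues for reviewers; the pipeline never self-approves.
    draft.notes.append("QA checklist attached for human review")
    return draft

PIPELINE = [instructional_architect, subject_matter_expert,
            learning_experience_designer, qa_architect]

def first_draft(topic):
    draft = Draft(topic=topic)
    for agent in PIPELINE:   # agents run in sequence, each building
        draft = agent(draft)  # on the previous agent's output
    return draft  # still unapproved: output is explicitly a first draft

draft = first_draft("Intro to agentic AI")
print(draft.sections)
print(draft.approved)  # False until a human signs off
```

The key design choice the interview highlights is that the pipeline's terminal state is an unapproved draft: quality and publication decisions stay with the human reviewers, not the agents.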

How has the tool altered your team's daily activities?

Chircop: Because we are so busy, we need our learning experience designers to manage multiple deliveries simultaneously. Previously, creating multiple programs at once wasn't possible because designers were consumed by executing a single one.

Now we can give employees a portfolio of programs to deliver, and they can ensure consistency across the entire portfolio. This allows them to spend more time with clients, ensuring that our programs are really meeting the mark.

Which parts of the content development process were the best fit for AI, and which did you keep for humans?

Chircop: There are a lot of judgment calls that only our people can make. For example, my team has a lot of pattern recognition from delivering a wide range of programs to enterprise clients. So, a learning experience designer might look at a draft output and say, 'Yeah, that makes sense on paper, but when we delivered a similar program to a company three months ago, this part didn't work that well. Let's rethink that.'

They're the people who are hearing feedback from clients daily, so they're able to make those judgment calls around where we may want to refine content and where certain things may not land well in the classroom.

How does your team ensure instructional quality and consistency while using AI?

Chircop: It's about making sure we have those human checkpoints. This is still a very human-in-the-loop process. There is no piece of content that is going in front of our customers that our team is not working with hands-on and vetting.

Why did you choose to create a proprietary AI tool as opposed to simply having your team use a free tool like ChatGPT?

Chircop: We work with Fortune 500 companies and offer a premium service. Therefore, we want to ensure that all our content is handled with white-glove service. Producing content with a free version of ChatGPT -- or even a paid version -- and shipping it out as our own is not what we want to do.

We have a team of highly skilled subject matter experts, and our goal is to draw on as much of their expertise as possible while creating content at scale, rather than just relying on a generic AI tool to do the work. We also wanted to design and manage a complex agent workflow without requiring our learning teams to write code.

The tool has helped us create structured technical content, such as labs and walkthroughs. We're not just producing slide decks or marketing copy. Our programs are extremely interactive and very practical in nature. Although there is a lecture component to them, our goal is to have our clients be really hands-on with what they're learning. So, we needed to design an agent workflow system that could help us produce content in that way.

Additionally, we carefully consider our enterprise-grade security requirements. We had to set up a system that fit within those requirements and passed our internal vendor review process.

Were there any challenges or adjustments that you needed to make after rolling out the agents?

Chircop: My team and I had the misconception that the agentic tool -- CrewAI -- was going to be like a turnkey SaaS product. I thought to myself, 'Perfect. We can just turn on the agents and be on our way, right?' But that wasn't true. We quickly learned that CrewAI is better understood as a developer tool. So, we had to invest more upfront time in workflow design, evaluation and iteration than we initially anticipated.

There was a little bit more reliance on engineering than I had thought at the beginning, which was fine. I just needed to plan for it a little bit better. One of the key challenges was evaluation -- determining what was good enough for AI-generated instructional content. Unlike traditional automation, agentic systems require ongoing human judgment, feedback loops and reinvestment from users to maintain quality. We addressed those issues by slowing down a bit, tightening our evaluation criteria and reinforcing human review checkpoints.

What measurable benefits have you seen since implementing agentic AI?

Chircop: When we went into this, we had a strong hypothesis that the tool was going to make us faster. We projected that we were going to see about a 33% reduction in time and cost to first draft. But in practice, we achieved about a 90% reduction in first draft development time. It took us a little bit of time to get there, but that's where we are right now.

What advice would you give to other organizations considering AI for content development?

Chircop: Just go for it, and don't be afraid -- that's number one. Number two: invest in training for your people. Don't underestimate the change management and transformation mindset involved here. People are still a little bit afraid of AI. Initially, there's this reaction from people, like, 'Oh my gosh, is this thing going to take my job?' No. I can confidently say this tool did not take anybody's job. It actually led to multiple promotions on our team because people can now take on a larger portfolio of work.

That's been very positive, but it took a minute for people to see and trust that. Additionally, I wish I had gotten our engineering team trained a little bit better.

The other thing that worked well for us was having a tiger team -- trying not to stretch this across a million people, but instead get a small, tight team together who are responsible for redesigning workflows, implementing the tool and designing those quality standards. Organizations should let that team own it, get the initial pilot moving and prove that it's working. Then, they can grow and expand it from there.

Tim Murphy is site editor for Informa TechTarget's IT Strategy group.
