The IT industry is experiencing an exciting time of upheaval and transformation due to the rapid growth of AI and automation. But at AnsibleFest 2023, industry experts urged organizations to consider trust, responsibility and how to measure success when working with these tools.
Hype around generative AI continues to grow as more organizations and vendors jump on the bandwagon -- and AnsibleFest, held as part of Red Hat Summit 2023, was no exception. AI and automation were a major focus at this year's conference with the unveiling of Ansible Lightspeed and the Event-Driven Ansible tool.
Whereas automation was once considered a nice addition to the IT toolbox, today it has become a necessity to keep up with the increased complexity and scale that organizations and developers face. By 2027, IDC predicts, AI and automation will reduce the need for manual intervention and improve service-level objectives such as performance, costs and security by 50%.
"The future is going to be automated," said Kaete Piccirilli, Red Hat director of product marketing, in the AnsibleFest keynote presentation "The Automation Moment -- and Beyond."
Achieving success with AI and automation
Automation has typically required some level of human interaction, but more sophisticated AI could reduce the amount of manual work needed to deliver software -- for example, by generating code automatically and performing repetitive testing work.
With automation and AI, organizations can enhance existing resources and bridge gaps between teams. This promotes better collaboration between developers and IT operations teams, which can in turn increase productivity and availability. But despite recent rapid growth and numerous benefits, AI and automation -- especially generative AI -- aren't without their challenges.
In the same keynote presentation, Craig Brandt, principal technical marketing manager at Red Hat, and Ruchir Puri, chief scientist at IBM Research, addressed the ethical aspects organizations must consider to get the most out of generative AI.
Brandt and Puri returned to the importance of establishing trust several times during the presentation -- not in reference to making AI itself more trustworthy, but in regard to attribution. To build trust around generated content, developers must continue to communicate as a team and keep humans in the process by matching content to its sources and acknowledging attribution.
Puri said his work with IBM Project Wisdom, Red Hat and IBM Research has focused on domain quality, deployment efficiency and ensuring trust in AI. "Companies should really be thinking about trust," he said. "Generative AI is a very powerful force percolating across every aspect of our society, but the trust of generative AI is fundamental to its success."
Development has always been a team sport, Puri explained, and introducing generative AI and automation to organizations shouldn't change that. Continuing to communicate and collaborate within IT teams to ensure accuracy and productivity helps foster a community of trust.
Puri emphasized the importance of acknowledging where code comes from and being clear about attribution. For example, if a developer uses a generative AI model to write code, it's ethical and responsible to disclose that information to other developers when discussing potential modifications to the code and determining whether it should move forward.
In addition to fostering trust, Puri explained, organizations benefit most from generative AI when they understand how it can drive success in terms of productivity.
Although no one can say for sure what the coming years of generative AI will look like, Puri said he's most excited to see how code and content generation continue to evolve, such as generating roles and playbooks as introduced with Ansible Lightspeed. He also mentioned use cases for AI such as content discovery, management, explanation and optimization.
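To make that concrete, the snippet below is illustrative only -- a sketch of the kind of playbook a tool like Ansible Lightspeed might generate from a natural-language prompt such as "install and start nginx on the web servers." The host group name and task details are assumptions for the example, not output from the actual product.

```yaml
# Illustrative sketch of a generated playbook; not actual Lightspeed output.
- name: Install and start nginx
  hosts: webservers   # assumed inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Even for a simple generated playbook like this, the attribution and review practices Puri described still apply before it runs against production hosts.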
For example, AI can help bridge the skills gap and lower barriers to entry for careers in IT by generating code automatically. Hard numbers -- such as time, money and effort spent -- help organizations measure which processes AI can streamline, freeing developers to focus on other projects, and reveal where teams are struggling with implementation.
Overall, developers still need to maintain coding best practices, such as reviewing and testing code before pushing it to production. "You shouldn't blindly trust large language models," said Matthew Jones, chief architect of Ansible Automation for Red Hat, in the same presentation. Although generative AI can be a helpful tool, developers must see it as just that -- a tool.
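A minimal sketch of what "don't blindly trust" can look like in practice: the function below stands in for a snippet a generative model might suggest (the name and behavior are hypothetical, invented for this example), and the human-written assertions are the review-and-test step that runs before the suggestion is merged.

```python
# Treat AI-suggested code as untrusted until a human has reviewed and tested it.

def ai_suggested_slugify(title: str) -> str:
    """Hypothetical model-suggested helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Human-written checks -- the review step -- run before accepting the suggestion.
assert ai_suggested_slugify("Event Driven Ansible") == "event-driven-ansible"
assert ai_suggested_slugify("  Ansible  Lightspeed ") == "ansible-lightspeed"
```

If a check fails, the suggestion goes back for human correction rather than straight into production -- the tool proposes, the team disposes.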