What is the current state of AI?
Artificial intelligence technologies are transforming business processes and society at large. What are the AI trends in 2022 that enterprises should be paying attention to?
Success stories tend to focus on the achievements and evolution of the algorithms. Google's BERT transformer neural network is an example of a new type of algorithm that promises to revolutionize natural language processing.
Equally impressive -- and worthy of enterprise attention -- are the new tools being invented to automate machine learning pipelines and greatly accelerate the development process.
In addition, the field of AI is moving into various new domains such as conceptual design, smaller devices and multi-modal applications -- innovations that will expand AI's repertoire in many industries. It's also important for companies to keep an eye on bleeding-edge AI technologies that show tremendous promise and are now available for experimentation via the cloud -- quantum AI is an example.
What are AI and machine learning trends for 2022?
To take full advantage of the benefits of AI and machine learning trends, IT and business leaders will need to develop a strategy for aligning AI with employee interests and business goals. The following issues should be on the agenda:
- how to streamline and democratize access to AI;
- how to address rising concerns about ethical and responsible AI; and
- how to tie AI investments to business goals to ensure AI implementations actually deliver on the hype.
Here are 10 top 2022 trends IT leaders should prepare for now.
1. Automated machine learning (AutoML)
Two promising aspects of automated machine learning will be improved tools for labelling data and the automatic tuning of neural net architectures, said Michael Mazur, CEO of AI Clearing, which is using AI to improve construction reporting.
- The need for labelled data has created a labelling industry of human annotators based in low-cost regions such as India, Central and Eastern Europe and South America, Mazur said. The risks associated with using offshore labor "pushed the market to look at different ways of avoiding or minimizing this part of the process." Improvements in semi- and self-supervised learning are helping companies keep the amount of manually labelled data to a minimum.
- By automating the work of selecting and tuning a neural network model, AI will become cheaper and new solutions will take less time to reach market.
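The architecture-selection idea can be sketched as a search over candidate configurations. The following is a minimal, hypothetical illustration using random search; the `score` function and the search space are toy stand-ins for actually training and validating each candidate model, which is what a real AutoML tool would do:

```python
import random

# Toy stand-in for training and validating a model: pretend validation
# accuracy peaks at 2 hidden layers of width 64 with learning rate 0.01.
def score(config):
    return (1.0
            - abs(config["layers"] - 2) * 0.1
            - abs(config["width"] - 64) / 640
            - abs(config["lr"] - 0.01) * 5)

def random_search(trials=50, seed=0):
    # Hypothetical search space over architecture and training choices.
    space = {
        "layers": [1, 2, 3, 4],
        "width": [16, 32, 64, 128],
        "lr": [0.1, 0.01, 0.001],
    }
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        # Sample one candidate configuration and evaluate it.
        cfg = {k: rng.choice(v) for k, v in space.items()}
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, round(best_score, 3))
```

Production AutoML systems use far more sophisticated strategies (Bayesian optimization, neural architecture search), but the loop is the same: propose, evaluate, keep the best.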
Going forward, Gartner predicts a focus on improving the various processes required to operationalize these models: PlatformOps, MLOps and DataOps. Gartner collectively calls these new capabilities XOps.
2. AI-enabled conceptual design
Historically, AI was mostly applied to streamline processes related to data, image and linguistic analytics.
This approach is well suited to the financial, retail and healthcare industries and to clearly defined, repetitive tasks. But recently OpenAI developed two new models, DALL·E and CLIP (Contrastive Language-Image Pre-training), that combine language and images to generate new visual designs from a text description.
Early work shows how the models can be trained to make novel designs. Examples include an avocado-shaped armchair generated by giving the AI the caption "avocado armchair." Mazur believes the new models will facilitate production-scale implementation of AI in creative industries. "Soon we can expect something similar disrupting fashion, architecture and other creative industries," Mazur said.
3. Multi-modal learning
AI is getting better at supporting multiple modalities within a single ML model, such as text, vision, speech and IoT sensor data. Developers are starting to find innovative ways to combine modalities to improve common tasks like document understanding, said David Talby, founder and CTO of John Snow Labs, an NLP tools provider.
For example, patient data collected and processed by healthcare systems can include visual lab results, genetic sequencing reports, clinical trial forms and other scanned documents. The layout and presentation style of this information, when done right, can help doctors better understand what they're looking at. AI algorithms trained using multi-modal techniques such as machine vision and optical character recognition could optimize the presentation of results, improving medical diagnosis. Getting the most out of multi-modal techniques will require hiring or training data scientists with cross-domain skills such as natural language processing and machine vision.
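One common way to combine modalities is late fusion: encode each modality separately, then concatenate the feature vectors for a downstream model. The sketch below is purely illustrative -- the encoders are crude stand-ins for real NLP and vision/OCR models, and the sample lab-report input is invented:

```python
# Stand-in for an NLP encoder: crude character-class features over the text.
def encode_text(text):
    n = max(len(text), 1)
    return [
        sum(c.isalpha() for c in text) / n,  # share of letters
        sum(c.isdigit() for c in text) / n,  # share of digits
    ]

# Stand-in for a vision/OCR layout encoder: summarize the bounding boxes
# (x, y, width, height) of regions detected on a scanned page.
def encode_layout(boxes):
    n = max(len(boxes), 1)
    return [
        len(boxes),                              # number of regions
        sum(w * h for _, _, w, h in boxes) / n,  # mean region area
    ]

# Late fusion by concatenation: one joint vector per document.
def fuse(text, boxes):
    return encode_text(text) + encode_layout(boxes)

features = fuse("Hemoglobin 13.5 g/dL", [(0, 0, 100, 20), (0, 30, 100, 20)])
print(features)
```

In practice each encoder would be a trained model and the fused vector would feed a classifier, but the structure -- per-modality encoders plus a fusion step -- is the core of the multi-modal approach described above.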
4. Multi-objective models
Commonly, AI models are given one objective that targets a particular business metric such as maximizing revenue. As early efforts mature, expect more companies to invest in multi-task models that consider multiple objectives, said Justin Silver, AI strategist at PROS, an AI-powered sales management platform. Multi-task models are different from multi-modal learning (above), which aims to learn a joint representation of various data types.
Targeting a single business metric without consideration of other objectives can produce suboptimal results. For example, if a product recommendation engine only targets customer conversion rate, the company may miss out on revenue opportunities related to new or different products that a customer may not have bought in the past. In addition, the rising importance of environmental, social and governance (ESG)-related goals means CIOs need to plan for models that balance sustainability goals like carbon reduction and circularity with traditional business goals such as reducing inventory, delivery time, and costs.
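The simplest way to balance objectives is a weighted score. The sketch below ranks hypothetical product recommendations against conversion, revenue and a carbon-footprint penalty; the candidates, numbers and weights are all illustrative assumptions, not real data:

```python
# Hypothetical candidates: (product, predicted conversion probability,
# expected revenue in dollars, estimated carbon footprint in kg CO2).
candidates = [
    ("refill-pack", 0.30, 12.0, 0.4),
    ("new-gadget",  0.18, 90.0, 5.0),
    ("eco-bundle",  0.22, 55.0, 1.1),
]

def score(conv, revenue, carbon, w_conv=1.0, w_rev=0.01, w_co2=0.05):
    # Higher conversion and revenue count in favor; carbon counts against,
    # reflecting the ESG-style objectives mentioned above.
    return w_conv * conv + w_rev * revenue - w_co2 * carbon

# Rank candidates by the combined multi-objective score, best first.
ranked = sorted(candidates, key=lambda c: score(*c[1:]), reverse=True)
print([name for name, *_ in ranked])
```

Note that ranking by conversion rate alone would have put "refill-pack" first; the multi-objective score surfaces a different ordering, which is exactly the suboptimality the section describes. Real multi-task models learn these trade-offs rather than using hand-set weights, but the weighted-sum view is a useful mental model.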
5. Tiny ML
Tiny ML is a rapidly growing approach for developing AI and ML models that run on hardware-constrained devices such as the microcontrollers used for powering cars, refrigerators and utility meters. Jason Shepherd, vice president of Ecosystem at Zededa, expects Tiny ML algorithms to be increasingly used for localized analysis of simple voice and gesture commands; common sounds such as a gunshot or baby crying; asset location and orientation; environmental conditions; and vital signs. Teams will need to adopt new approaches for the development, security and management of Tiny ML.
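A core technique for fitting models onto such hardware is quantization: converting 32-bit floating-point weights to 8-bit integers so the model fits in a microcontroller's few kilobytes of memory. The sketch below shows the idea with illustrative weight values; real Tiny ML toolchains perform this as part of model conversion:

```python
def quantize(weights):
    # Affine quantization: map the weight range [lo, hi] onto int8 [-128, 127].
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against an all-equal range
    zero_point = -128 - round(lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights from the int8 representation.
    return [(v - zero_point) * scale for v in q]

weights = [-0.51, -0.02, 0.0, 0.27, 0.49]  # illustrative float32 weights
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))  # 4x smaller storage at a small rounding cost
```

The trade-off is the one Tiny ML lives on: a 4x reduction in weight storage (and cheaper integer arithmetic) in exchange for a small, bounded loss of precision.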
6. AI-enabled employee experience
IT leaders are starting to confront concerns about the potential for AI to eliminate jobs or dehumanize work. This is driving interest in using AI to enhance and augment the employee experience, said Howard Brown, founder and CEO of Revenue.io, a call center tools provider. AI assistance could be especially useful in overburdened departments that are struggling to hire, such as sales and customer success teams.
Combined with robotic process automation, AI could help automate mundane tasks to free up sales teams for more meaningful conversations with customers. It could also be used to improve employee coaching and training.
"Everyone talks about delivering great customer experience, but the best way to do that is to deliver a great employee experience first," Brown said. IT leaders will need to think about how AI can be provisioned in a way that helps employees stay engaged, happy and successful at work.
7. Quantum ML
Quantum computing shows tremendous promise for creating more powerful AI and machine learning models. The technology is still beyond practical reach, but that is starting to change as Microsoft, Amazon and IBM make quantum computing resources and simulators easily accessible via the cloud.
"This could set us up for huge breakthroughs in late 2022 and 2023 as quantum computers become more powerful and intersect with the increased interest in and experimentation by the ML community," said Scott Laliberte, managing director and leader, emerging technology consulting, at Protiviti, a digital transformation consultancy.
The intersection of quantum computing and ML could create tremendous benefits for companies, enabling them to potentially solve problems that are unsolvable today. Laliberte recommends that organizations start looking now at the potential impact of quantum computing on their industry and adapt their AI strategies to enable resources to explore quantum computing and ML when the platforms mature in the next two to three years.
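The simulators mentioned above let teams experiment with quantum circuits on ordinary hardware today. As a flavor of what they compute, here is a toy one-qubit simulator applying a Hadamard gate -- the canonical "equal superposition" example and typically the first circuit run on any quantum platform (this is a from-scratch illustration, not any vendor's API):

```python
import math

def apply_gate(gate, state):
    # state is [amp0, amp1], the amplitudes of |0> and |1>;
    # gate is a 2x2 matrix applied by matrix-vector multiplication.
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# The Hadamard gate puts a basis state into an equal superposition.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = apply_gate(H, [1.0, 0.0])   # start in |0>
probs = [abs(a) ** 2 for a in state]  # measurement probabilities
print([round(p, 3) for p in probs])  # equal chance of measuring 0 or 1
```

Cloud simulators do the same linear algebra at much larger scale; the exponential cost of simulating many qubits classically is precisely why access to real quantum hardware matters for ML experimentation.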
8. Democratized AI
Improvements in AI tooling are lowering the level of expertise required to build AI models. This will make it easier to include subject matter experts in the AI development process. Democratized AI will not only speed up AI development, it will also ensure the level of accuracy provided by subject matter experts, Talby said. Frontline experts can see where new models can provide the most value and where they can create problems or need to be worked around.
Doug Rank, senior data scientist at Saggezza, predicts the trend will mirror the trajectory of technologies like computers and networks, which evolved from being usable by only a few experts to wide adoption across the enterprise. The big challenge will be cleaning up the data and providing access with appropriate guardrails.
"With careful planning, IT leaders can ensure their data remains accurate and complete throughout cloud migrations, so they can realize the value of accessible AI," Rank said.
9. Responsible AI
Early AI work operated in a greenfield when it came to regulations, ethics and explainability. The first substantive efforts at addressing this absence of oversight have focused on protecting data privacy and security through new legislation like GDPR and CCPA. The laws included some guidelines on AI transparency, particularly when personally identifiable information was used to make substantive decisions. Now regulators in Europe and the Biden Administration in the U.S. are turning the heat up on the AI algorithms themselves.
Trustworthy AI is growing in importance, not just to appease regulators and consumers but also to help business users understand where and how AI makes mistakes.
Thanneer Malai, senior technical program manager at Saggezza, predicts enterprises will have to invest in training programs for trustworthy AI. Improved training will help humans identify and rectify problems that automated tools may miss.
10. Digital twins grow up
The use of digital twins, virtual models that simulate reality, has been widespread across all industry sectors over the past couple of years, but now their use is starting to accelerate and expand, said Anand Rao, global AI lead at PwC. Digital twins are viewed as important for businesses' 2022 strategies by 88% of CIOs, according to PwC data.
Their complexity has grown from simple digital twins based on synthetic or real data, to asset-based digital twins powered by the internet of things (IoT), to customer-based and ecosystem-based digital twins. Digital twins are also now used to model and simulate human behaviors and to evaluate alternative scenarios of the future -- paving the way, experts said, to the convergence of digital twins with traditional industrial simulations and AI-driven agent-based simulation.
"The next stage of this evolution is the convergence of scientific computing, industrial simulation and artificial intelligence to create simulation intelligence where foundational simulation elements are built into operating systems," Rao said.
The possibilities for digital twins are vast and provide businesses with new ways to leverage and forecast data. With more complex and versatile digital twins, companies can use simulation intelligence to predict real-world scenarios such as disease progression, customer behavior and the economic impact of the pandemic. Digital twins will also become a critical technology for organizations working on or expanding into ESG modeling, smart cities, drug design and other applications.
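The core pattern -- a virtual model that is synced from sensor readings and then used to run "what-if" scenarios -- can be sketched in a few lines. Everything below is a hypothetical illustration (a made-up warehouse freezer with a simplistic heat-exchange model), not a production digital twin:

```python
class FreezerTwin:
    """Toy digital twin of a hypothetical warehouse freezer."""

    def __init__(self, temp_c=-18.0, ambient_c=20.0, cooling_c_per_min=0.5):
        self.temp_c = temp_c            # current modeled temperature
        self.ambient_c = ambient_c      # temperature outside the freezer
        self.cooling = cooling_c_per_min

    def sync(self, sensor_temp_c):
        # Mirror the latest IoT sensor reading into the twin.
        self.temp_c = sensor_temp_c

    def simulate(self, minutes, compressor_on=True):
        # Simplistic per-minute model: drift toward ambient temperature,
        # offset by the compressor's cooling when it is running.
        temp = self.temp_c
        for _ in range(minutes):
            temp += 0.01 * (self.ambient_c - temp)
            if compressor_on:
                temp -= self.cooling
        return temp

twin = FreezerTwin()
twin.sync(-15.0)  # latest reading from the physical asset
# What-if scenario: how warm does the freezer get in a 60-minute power cut?
print(round(twin.simulate(60, compressor_on=False), 1))
```

The same structure scales up: richer physics (or a learned model) in `simulate`, a stream of IoT readings feeding `sync`, and scenario runs that inform decisions before anything happens to the real asset.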
Digital twin pilots are being scaled and operationalized today. CIOs should consider how to incorporate them into the business's overall analytics architecture and cloud and IT stack. Companies need to provide both a development environment and a production environment for running simulations. Simulation workloads are also compute-intensive, requiring on-demand compute on premises or in the cloud.
It's also an important technology for CIOs to begin upskilling employees on. In addition, companies should have a well-defined process for scoping, building, calibrating, deploying, and monitoring digital twins. Digital twins can help CIOs transform a business, but only if the business and its employees are prepared.