What is the current state of AI?
Artificial intelligence technologies are transforming business processes and society at large. What are the AI trends in 2023 that enterprises should be paying attention to?
Many success stories focus on the achievements and evolution of the algorithms themselves: ChatGPT, for example, is a new AI language model that could disrupt modern search engines.
Equally impressive and worthy of enterprise attention are the new tools automating machine learning pipelines and greatly accelerating the development process.
In addition, the field of AI is moving into various new domains such as conceptual design, smaller devices and multi-modal applications -- innovations that will expand AI's repertoire in many industries. It's also important for companies to keep an eye on the bleeding-edge AI technologies that show tremendous promise and are now available for experimentation via the cloud -- quantum AI, for example.
What are AI and machine learning trends for 2023?
To take full advantage of the benefits of AI and machine learning trends, IT and business leaders will need to develop a strategy for aligning AI with employee interests and business goals. The following issues should be on the agenda:
- how to streamline and democratize access to AI;
- how to address rising concerns about ethical and responsible AI; and
- how to tie AI initiatives to business goals to ensure AI implementations deliver on the hype.
Here are 10 top 2023 trends IT leaders should prepare for.
1. Automated machine learning (AutoML)
Two promising aspects of automated machine learning will be improved tools for labeling data and for automatically tuning neural net architectures, said Michael Mazur, CEO of AI Clearing, a software company that uses AI to improve construction reporting.
- The need for labeled data created a labeling industry of human annotators based in lower-cost regions such as India, Central and Eastern Europe and South America, Mazur said. The risks associated with using offshore labor "pushed the market to look at different ways of avoiding or minimizing this part of the process." Improvements in semi- and self-supervised learning are helping companies keep the amount of manually labeled data to a minimum.
- Automating the work of selecting and tuning a neural network model will make AI cheaper, and new solutions will take less time to reach market.
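The tuning loop at the heart of AutoML can be sketched in a few lines. The search space and scoring function below are invented for illustration; real AutoML tools search over full neural architectures and score candidates by actually training them.

```python
import itertools

# Hypothetical search space -- a real AutoML tool would search over layer
# counts, widths, optimizers and entire architectures, not just two knobs.
SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1],
    "hidden_units": [16, 32, 64, 128],
}

def validation_score(config):
    """Stand-in for training a model and scoring it on held-out data.

    Toy objective: pretend learning_rate=0.01 with 64 units is optimal.
    """
    return (-abs(config["learning_rate"] - 0.01)
            - abs(config["hidden_units"] - 64) / 100)

def grid_search():
    """Try every configuration and keep the best-scoring one."""
    keys = list(SEARCH_SPACE)
    best_config, best_score = None, float("-inf")
    for values in itertools.product(*SEARCH_SPACE.values()):
        config = dict(zip(keys, values))
        score = validation_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config

print(grid_search())  # the configuration the toy objective rewards most
```

Production systems replace exhaustive grids with smarter strategies such as Bayesian optimization, but the select-score-keep-best loop is the same.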
Going forward, Gartner predicts a focus on improving the various processes required to operationalize these models: PlatformOps, MLOps and DataOps. Gartner collectively calls these new capabilities XOps.
2. AI-enabled conceptual design
Historically, AI was mostly applied to streamline processes related to data, image and linguistic analytics.
These techniques are well suited to the financial, retail and healthcare industries and to clearly defined repetitive tasks. But recently OpenAI developed two new models, DALL·E and CLIP (Contrastive Language-Image Pre-training), that combine language and images to generate new visual designs from a text description.
Early work shows how the models can be trained to make novel designs. One example is an avocado-shaped armchair designed by giving the AI the caption "avocado armchair." Mazur believes the new models will facilitate production-scale implementation of AI in creative industries. "Soon we can expect something similar disrupting fashion, architecture and other creative industries," Mazur said.
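CLIP's core trick is to embed text and images in the same vector space, so a caption can be matched to the image that scores highest by cosine similarity. The sketch below uses tiny made-up vectors in place of CLIP's learned encoders, purely to show the scoring step.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for CLIP's learned text and image encoders.
text_embedding = [0.9, 0.1, 0.2]      # embedding of "avocado armchair"
image_embeddings = {
    "render_a": [0.8, 0.2, 0.1],      # resembles an avocado-shaped chair
    "render_b": [0.1, 0.9, 0.3],      # does not
}

# Pick the image whose embedding best matches the caption.
best = max(image_embeddings,
           key=lambda name: cosine_similarity(text_embedding,
                                              image_embeddings[name]))
print(best)
```

In a generative setup like DALL·E, this same text-image matching can rank candidate images, keeping only the renders that best fit the caption.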
3. Multi-modal learning
AI is getting better at supporting multiple modalities within a single ML model, such as text, vision, speech and IoT sensor data. Google DeepMind made headlines with Gato, a multi-modal AI approach that can perform visual, language and robotic movement tasks.
Meanwhile, developers are finding innovative ways to combine modalities to improve common tasks like document understanding, said David Talby, founder and CTO at John Snow Labs, an NLP tools provider.
For example, patient data collected and processed by healthcare systems can include visual lab results, genetic sequencing reports, clinical trial forms and other scanned documents. The layout and presentation style of this information, if done right, can help doctors better understand what they're looking at. AI algorithms trained using multi-modal techniques, such as machine vision and optical character recognition, could optimize the presentation of results, improving medical diagnosis. Getting the most out of multi-modal techniques will require hiring or training data scientists with cross-domain skills spanning natural language processing and machine vision.
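A common starting point for multi-modal document understanding is early fusion: features extracted separately from each modality are concatenated into one vector before classification. The feature extractors below are deliberately crude and the document values invented, just to show the fusion step.

```python
def text_features(ocr_text):
    """Crude text-modality features from OCR output:
    token count and how many tokens contain digits (e.g. lab values)."""
    tokens = ocr_text.lower().split()
    numeric_tokens = sum(any(c.isdigit() for c in t) for t in tokens)
    return [len(tokens), numeric_tokens]

def layout_features(boxes):
    """Crude layout-modality features from OCR bounding boxes (x, y, w, h):
    box count plus mean width and height."""
    widths = [w for _, _, w, _ in boxes]
    heights = [h for _, _, _, h in boxes]
    return [len(boxes), sum(widths) / len(widths), sum(heights) / len(heights)]

def fuse(ocr_text, boxes):
    """Early fusion: concatenate per-modality features into one vector
    that a downstream classifier would consume."""
    return text_features(ocr_text) + layout_features(boxes)

# Hypothetical scanned lab-report snippet with two detected text boxes.
vector = fuse("Hemoglobin 13.5 g/dL", [(10, 10, 120, 14), (10, 30, 40, 14)])
print(vector)
```

Real document-understanding models learn these representations jointly rather than hand-crafting them, but the principle of combining text and layout signals is the same.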
4. Models that can achieve multiple objectives
Commonly, AI models are given one objective that targets a particular business metric such as maximizing revenue. As early efforts mature, expect more companies to invest in multi-task models that consider multiple objectives, said Justin Silver, AI strategist and manager of data science at PROS, an AI-driven sales management platform. Multi-task models are different from multi-modal learning, which aims to learn a joint representation of various data types.
Targeting a single business metric without consideration of other objectives can produce suboptimal results. For example, if a product recommendation engine only targets customer conversion rate, the company may miss out on revenue opportunities related to new or different products that a customer may not have bought in the past. In addition, the rising importance of environmental, social and governance (ESG) goals means CIOs need to plan for models that balance sustainability goals like carbon reduction and circularity with traditional business goals, such as reducing inventory, delivery time and costs.
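One simple way to balance several objectives is weighted-sum scalarization: each objective gets a weight reflecting its business importance, and candidates are ranked by the combined score. All the numbers and weights below are invented for illustration.

```python
# Hypothetical candidate recommendations, each scored on several
# business objectives (all values invented for illustration).
candidates = {
    "familiar_product": {"conversion": 0.30, "expected_revenue": 20.0, "carbon_kg": 1.0},
    "new_product":      {"conversion": 0.22, "expected_revenue": 45.0, "carbon_kg": 1.5},
}

# Illustrative weights: conversion is scaled up, carbon is penalized.
WEIGHTS = {"conversion": 100.0, "expected_revenue": 1.0, "carbon_kg": -5.0}

def single_objective(scores):
    """Optimize conversion rate alone."""
    return scores["conversion"]

def multi_objective(scores):
    """Weighted-sum scalarization across all objectives."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

pick_single = max(candidates, key=lambda c: single_objective(candidates[c]))
pick_multi = max(candidates, key=lambda c: multi_objective(candidates[c]))
print(pick_single, pick_multi)
```

With these toy numbers, the single-metric model picks the familiar product (higher conversion), while the multi-objective score favors the new product once its revenue upside is counted -- the kind of trade-off the article describes.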
5. AI-based cybersecurity
New AI and machine learning techniques will play a growing role in detecting and responding to cybersecurity threats. Ed Bowen, advisory AI leader and managing director at Deloitte, said one big driver is that adversaries have been weaponizing AI and machine learning to find vulnerabilities.
He expects more enterprises to use AI defensively and proactively to detect anomalous behavior and new attack patterns. Organizations that fail to integrate AI are at risk of falling behind the security curve and suffering a higher rate of negative impacts.
"AI-supported cyber programs are typically more able to manage multi-faceted, dynamic risks through both improved detection efficacy as well as improved agility and resiliency amidst increased disruption," Bowen said.
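Defensive anomaly detection of the kind Bowen describes often starts by baselining normal behavior and flagging large deviations. The z-score sketch below uses invented login data; production systems layer learned models on top of such statistical baselines.

```python
import statistics

def flag_anomalies(baseline, new_values, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    from the mean of the baseline observations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in new_values if abs(v - mean) / stdev > threshold]

# Hypothetical daily counts of failed logins for one account.
baseline = [3, 5, 4, 6, 5, 4, 5, 6, 4, 5]   # typical week-to-week noise
today = [5, 4, 120]   # 120 could indicate a credential-stuffing attempt

print(flag_anomalies(baseline, today))
```

The same pattern -- learn what normal looks like, alert on outliers -- underlies far more sophisticated detectors of novel attack patterns.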
6. Improved language modeling
ChatGPT demonstrated a new, interactive way of engaging with AI that is good enough for a wide range of use cases in many fields, including marketing, automated customer support and user experience design.
In 2023, expect to see a growing demand for quality control aspects of these improved AI language models. There has already been a backlash against inaccurate results in coding. Over the next year, companies will face pushback on inaccurate product descriptions and dangerous advice, for example. This will drive interest in finding better ways to explain how and when these tools generate errors.
7. Computer vision in business expands, but ROI is a challenge
Cheaper cameras and new AI will drive an explosion of computer vision for analytics and automation in 2023.
"Access to compute, sensors, data and state-of-the-art vision models are creating opportunities to automate processes that require humans to visually inspect and interpret objects in the real world," said Scott Likens, innovation and trust technology leader at PwC.
In back office operations, improved machine vision will help streamline document workflows. On the front lines, computer vision adoption will digitize the physical elements of business operations.
Likens expects CIOs will find it challenging to generate an ROI from these efforts. Identifying the appropriate use cases is critical. He predicts a growing demand for "bilinguals," or people who can bridge the technical and business space and identify new opportunities for computer vision.
Implementing computer vision requires specialized skills. High-performing systems need thousands of labeled examples that may not naturally exist within a company and must be manually curated at high cost, creating an economic barrier to entry. Computer vision implementations also present hurdles not necessarily encountered with deep learning models used for language tasks and forecasting: some applications require specific camera hardware and edge compute capabilities, introducing new operational and infrastructure demands for organizations that do not already manage this type of infrastructure as part of their technology ecosystem.
8. Democratized AI
Improvements in AI tooling are lowering the level of expertise required to build AI models. This will make it easier to include subject matter experts in the AI development process. Democratized AI will not only speed up AI development but also improve accuracy through the involvement of subject matter experts, Talby said. Frontline experts can see where new models provide the most value, where they create problems and where they need to be worked around.
Doug Rank, data scientist at PS AI Labs, predicts the trend will mirror the trajectory of technologies like computers and networks, which evolved from being usable by only a few experts to wide adoption across the enterprise. The big challenge will be cleaning up the data and providing access with appropriate guardrails.
"With careful planning, IT leaders can ensure their data remains accurate and complete throughout cloud migrations so they can realize the value of accessible AI," Rank said.
Efforts to simplify AI tools could also drive the adoption of AI deployments outside of existing IT services, said Pini Solomovitz, head of innovation at Run:ai, a GPU orchestration platform. This shadow AI mirrors other types of shadow IT, which are driven largely by low-cost cloud services.
AI democratization has cost, ethics and data privacy implications for the enterprise. CIOs will increasingly need to audit newer uses of AI to help consolidate costs, identify new risks and streamline AI workflows.
9. Bias removal in ML
As AI adoption in the enterprise accelerates, affecting more users daily, the challenge of AI bias and fairness becomes a genuine concern. The goal is to ensure that AI makes predictions objectively so that people aren't discriminated against when applying for loans, buying products online or receiving medical treatment.
"With reputations on the line, bias mitigation is of the utmost importance for businesses to build trust in their ML products," said Liran Hason, co-founder and CEO of Aporia, an AI explainability platform.
In 2023, CIOs will be challenged with governing their data science practices and ML models, due to the complex nature of these systems. Implementing responsible AI practices and equipping the organization with the proper tooling will take on more urgency. Hason expects to see increased interest in tools for monitoring and mitigating bias in production AI to help catch and explain the exact data points and features that led to a biased prediction.
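One of the simplest fairness checks such monitoring tools compute is demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels below are invented for illustration.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups "a" and "b".

    `predictions` are model outputs (1 = approved, 0 = denied); `groups`
    labels each prediction with a protected-attribute value. A gap near
    0.0 suggests parity; a large gap flags the model for investigation.
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["a"] - rates["b"])

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))
```

Production bias-monitoring tools track metrics like this continuously and, as Hason notes, drill down to the features driving any gap; this sketch shows only the headline metric.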
10. Digital twins drive the industrial metaverse
Over the past year, leading industrial design and AI vendors have connected the dots between digital twins -- virtual models that simulate reality -- and the metaverse. Nvidia and Siemens have partnered to create an industrial metaverse. Meanwhile, construction giant Bentley has adopted the term infrastructure metaverse.
These advances may mark a turning point for digital twins from an obscure technology to a cornerstone of IT strategy, said Anand Rao, global AI lead at PwC. While digital twins have been deployed across all industry sectors over the past couple of years, he sees adoption accelerating and expanding in 2023.
The complexity of digital twins has also grown, from relatively simple digital twins based on synthetic or real data, to asset-based digital twins powered by IoT, to customer- and ecosystem-based digital twins. Digital twins are also now used to model and simulate human behaviors and to evaluate alternative scenarios of the future, paving the way, experts say, for the convergence of digital twins with traditional industrial simulation and AI-based agent simulation.
"The next stage of this evolution is the convergence of scientific computing, industrial simulation and artificial intelligence to create simulation intelligence, where foundational simulation elements are built into operating systems," Rao said.
The possibilities for digital twins are vast, he continued, providing businesses with new ways to leverage and forecast data. With more complex and versatile digital twins, enterprises can use simulation intelligence to predict real-world scenarios such as disease progression, customer behavior and the economic impact of a pandemic. Digital twins will also become a critical technology for organizations working on or expanding into ESG modeling, smart cities, drug design and other applications.
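Scenario evaluation with a digital twin boils down to running a calibrated simulation forward under different assumptions. As a minimal illustration of the disease-progression example, here is a deterministic SIR (susceptible-infected-recovered) model; the population and rate parameters are invented, and a real twin would be calibrated against observed data.

```python
def simulate_sir(population, infected, beta, gamma, days):
    """Minimal deterministic SIR epidemic model.

    beta is the daily transmission rate, gamma the daily recovery rate.
    Returns the infected count for each day, starting at day 0.
    """
    s, i, r = population - infected, infected, 0.0
    history = [i]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

# Hypothetical scenario: 10,000 people, 10 initial cases, R0 = beta/gamma = 3.
curve = simulate_sir(population=10_000, infected=10, beta=0.3, gamma=0.1, days=90)
print(round(max(curve)))  # peak simultaneous infections in this scenario
```

Comparing such curves across parameter settings -- say, with and without an intervention that lowers beta -- is exactly the what-if analysis digital twins support, here in toy form.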
Pilots of digital twin projects are being scaled and operationalized today. CIOs should consider how to incorporate digital twins into the business's overall analytics architecture and cloud and IT stack. Companies need to provide both a development environment and a production environment for running simulations. Simulation workloads are also compute-intensive, requiring on-demand compute on premises or in the cloud.
Digital twins are also an important technology CIOs can use to upskill employees. In addition, companies should have a well-defined process for scoping, building, calibrating, deploying and monitoring digital twins. Digital twins can help CIOs transform a business, but only if the business and its employees are prepared.