After OpenAI's launch of ChatGPT late last year, 2023 turned out to be a watershed year for AI, characterized by generative AI advancements, intense competition, and mounting ethics and safety concerns.
The field's rapid growth this year brought both technological innovations and significant challenges. From leadership changes at OpenAI to challenges from new players such as Google's Gemini and Anthropic's Claude, the year saw a number of major shifts in the generative AI landscape. Alongside these developments, the industry grappled with cybersecurity risks and debated the ethical implications of fast-paced AI advancement.
OpenAI fires and rehires CEO Sam Altman
In one of the most surprising news stories of the year, OpenAI co-founder and CEO Sam Altman was abruptly ousted on Nov. 17 by the company's board, which cited a lack of candor in his communications. Shortly after Altman's departure, Microsoft announced it would hire him and Greg Brockman, OpenAI's president and co-founder, for a new AI research division.
Altman's exit and the tumultuous period that followed provoked widespread backlash at OpenAI, with 95% of employees threatening resignation in protest of the board's decision. Within a week of Altman's initial dismissal, OpenAI reinstated him as CEO, a decision influenced by extensive board negotiations and the outpouring of employee support.
Although Altman's return came as a relief to many, the events exposed underlying challenges within OpenAI -- namely, the tension between its dual incentives as a profit-driven and mission-focused organization, and the extent to which the company's viability is tied to Altman himself. These dynamics, together with the appointment of new business-oriented board members including former Harvard president Larry Summers and former Salesforce co-CEO Bret Taylor, have raised questions about the company's future direction.
Competitors to ChatGPT emerge
ChatGPT kicked off the generative AI hype in November 2022. And although OpenAI continued to dominate headlines this year, 2023 also saw the rise of a number of competitors.
Although its DeepMind lab has historically been an AI pioneer, Google initially lagged behind in generative AI as its Bard chatbot struggled with inconsistencies and hallucinations after its launch early in the year. But the company's prospects could shift in 2024 following its release of the multimodal foundation model Gemini earlier this month. Gemini, which Google says will power Bard and other Google applications, integrates text, image, audio and video capabilities, potentially revitalizing Google's standing in the generative AI field.
Meanwhile, Anthropic, an AI startup founded by ex-OpenAI personnel, unveiled Claude 2, a large language model aiming to address security and data privacy concerns while performing at a level competitive with ChatGPT. Features including the ability to analyze large files and Anthropic's safety-centric focus set Claude apart from rivals such as ChatGPT and Bard.
And IBM, rebranding its longstanding Watson AI system, entered the fray with Watsonx, a generative AI platform targeting enterprise needs with an emphasis on data governance, security and model customization. Despite its differentiated approach and hybrid cloud focus, however, IBM will need to navigate challenges related to market speed and competition from both startups and established tech giants.
Open source AI becomes increasingly viable
In addition to the wide array of commercial options, the open source AI landscape is also expanding. Open source AI models offer an alternative to generative AI services from major cloud providers, enabling enterprises to customize models with their own data. Although training and customizing open source models offers greater control and potential cost savings, it can also pose challenges for enterprises, such as the need for in-house machine learning expertise and compute infrastructure.
In February, AWS partnered with Hugging Face, a prominent hub for open source AI models. This collaboration, which made training, fine-tuning and deploying LLMs and vision models more accessible, marked Amazon's strategic response to generative AI moves by competitors Microsoft and Google. The partnership also gave Hugging Face access to AWS' extensive infrastructure and developer ecosystem.
Also in February, Meta ventured into the generative AI market with its own LLM, Llama, initially intended for research use under a noncommercial license and designed to be a smaller, more manageable foundation model. However, Llama was leaked online shortly after its release, despite Meta's plans to restrict access to academics, government bodies and other research organizations.
In July, Meta's release of the upgraded Llama 2 marked a significant development in the generative AI market as an open source LLM available for both research and commercial purposes. In partnership with Microsoft, Meta made Llama 2 available in Azure's AI model catalog and optimized the model for Windows, strengthening its enterprise appeal.
OpenAI expands its offerings and commercial footprint
After ChatGPT's resoundingly successful release in 2022, OpenAI introduced several new offerings in 2023. Some of the most notable included the following:
- Introduction of paid tiers, with ChatGPT Plus in February targeting individual users and small teams, and an Enterprise tier in August aimed at larger organizations. Both offer improved service availability and advanced features such as plugins and internet browsing.
- An upgrade to OpenAI's flagship LLM in March. GPT-4 is a multimodal version of the GPT model with superior performance compared with the preceding GPT-3.5, which powers the free version of ChatGPT.
- New data privacy features for ChatGPT in April -- namely, the option for users to disable chat history to prevent OpenAI from using their conversations to retrain its AI model.
- Integration of OpenAI's image generation model, DALL-E 3, into ChatGPT Plus and Enterprise in October.
- Several announcements at OpenAI's inaugural Dev Day conference in November. These included GPT-4 Turbo, which is a cheaper version of GPT-4 with a larger context window, and the launch of GPTs -- customizable versions of ChatGPT that users can tailor for specific tasks without writing any code.
Concerns emerge around AI safety and security
As generative AI gained traction in 2023, AI security and safety debates intensified. Popular media often highlighted fears about artificial general intelligence (AGI), an as-yet-hypothetical form of AI capable of matching or even surpassing human intelligence and abilities.
Turing Award winner Geoffrey Hinton retired from Google this year, citing AI safety concerns. "Things like GPT-4 know much more than we do," he said at MIT Technology Review's EmTech Digital 2023 conference in May. His statements echoed similar apprehensions in a widely circulated March letter advocating for a pause in AI development, which questioned whether developing "human-competitive" AI would "risk loss of control of our civilization."
However, many other AI researchers and ethicists have argued that these existential risk concerns are hyperbolic, as AGI remains speculative; it's currently unclear whether the technology can ever be created. In this view, focusing on AGI diverts attention from current, tangible issues such as algorithmic bias and the generation of harmful content using existing AI systems. There's also a competitive element, in that the AGI discourse serves the interests of massive AI companies by presenting AI as a technology so powerful that access can't safely be extended to smaller players.
Among existing AI dangers, one clear and present risk involves cybersecurity -- for example, ChatGPT's ability to make phishing scams more convincing and more prevalent. In an interview earlier this year, Chester Wisniewski, director and global field CTO at security software and hardware vendor Sophos, explained how easily ChatGPT can be manipulated for malicious purposes.
"[ChatGPT is] significantly better at writing phishing lures than real humans are, or at least the humans who are writing them," he told TechTarget Editorial's Esther Ajao in January. "Most humans who are writing phishing attacks don't have a high level of English skills, and so because of that, they're not as successful at compromising people. My concerns are really how the social aspect of ChatGPT could be leveraged by people who are attacking us."
Lev Craig covers AI and machine learning as the site editor for TechTarget Enterprise AI.