What seeing AI as normal technology means for IT leaders

Viewing AI as a "normal" technology enables IT leaders to apply proven governance, integration and risk strategies. Setting aside the hype helps drive real business outcomes.

Thinking of AI as normal technology empowers IT leaders to draw on their experience managing the risks and maximizing the benefits of new technologies through strategy, policies and regulations.

AI is often described as an unprecedented technology. Futurists -- or AI boosters -- declare it to be a revolutionary force with the power to operate autonomously and achieve incredible results. On the flip side are those who see it as posing an existential threat to humanity.

Those in the camp that views AI as normal technology recognize familiar patterns from past major innovations -- which means AI will not transform everything overnight, as the hype would have us believe.

What "normal" means in this context

Arvind Narayanan and Sayash Kapoor made the case for that perspective in "AI as Normal Technology." In a later article published on Substack, they clarified that normal doesn’t mean inconsequential or predictable. They concede that AI has tremendous potential to reshape processes and that we can’t foresee exactly what the effect will be.

Their argument is that AI will generally follow the patterns of other major technology breakthroughs like electricity, computing and the internet. That is to say that it is also subject to "the slow and uncertain nature of technology adoption and diffusion." This gives IT leaders a frame of reference to draw on when planning how to tap into the value AI can bring to their organization.

Think deployment rather than development

Nearly every day, there's a new report about AI that feeds the narrative of an unprecedented rate of advancement. However, these achievements are experiments in lab-like conditions, not solutions to real-world problems in real organizations.

The difference between AI hype and its applications in the real world reflects the gap between innovation on the developer side and use in business environments. As Narayanan and Kapoor clarify in their later article, "Benefits and risks are realized when AI is deployed, not when it is developed." That is the real test of value and usability.

Just because an AI provider creates a tool and markets it with claims about its theoretical effects does not mean those benefits and risks are realistic.

One recent example of this dynamic is Anthropic's release of the cybersecurity model Mythos. Anthropic published a lengthy -- 245 pages -- technical breakdown of its capabilities but deemed the model itself too dangerous for public release. As such, the potential dangers Anthropic divulges are based on controlled experiments conducted by the tool's developer, not observations of real-world deployment.

Strategy matters more than speed

To develop a realistic strategy, IT leaders have to focus not on the hyped capabilities of the AI, but on how they will integrate it with both the organization’s existing infrastructure and its human workforce. Simply jumping on the AI bandwagon because you’ve been told that anyone who fails to do so will be left behind is not acting strategically. 

In fact, the businesses that boast of being first at something are often topped by those who come after and learn from their mistakes. Bent Flyvbjerg and Dan Gardner explain this pattern on page 85 of their book, "How Big Things Get Done": the advantage of being the "first mover" is rarely as great as the benefit of other people's experience. The "fast follower" can learn from the first mover's mistakes to achieve greater success. A notable example is Apple, which introduced the iPhone after BlackBerry launched its devices and ended up eclipsing the company.

IT leaders who stick to the perspective of AI as normal technology take the long-term view. They’re willing to wait and learn from the mistakes early adopters make. Businesses had to do this when they decided to adopt cloud solutions or SaaS. They first had to establish the business case and a plan for integration with existing systems to ensure success.

A resilient approach to managing risk

Narayanan and Kapoor acknowledge that it is impossible to accurately predict what specific effects AI will have on a business. That is all the more reason to be prepared to "react nimbly" when red flags come to light. This is the only feasible approach: attempting to avert every possible risk in advance is futile.

Seeing AI as normal empowers IT leaders to apply what they know from their experience with other technologies to deal with potential AI risks through governance and controlled adoption. In practice, most IT failures are not caused by rogue AI but by poorly defined tasks or inadequate safeguards.

Accordingly, IT leaders who integrate AI into their systems need to apply the usual IT security controls, monitoring systems and data protections they have already established in their organization. Layering safeguards across systems for what the authors call a "defense in depth" approach also offers the advantage of scalability and greater resilience in managing risk.
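To make the defense-in-depth idea concrete, here is a minimal, illustrative sketch -- not from the article, and with all function names invented for the example -- of layering independent safeguards around an AI call: input validation, output screening and audit logging. A real deployment would call an actual model service rather than the stub used here.

```python
def model_fn(prompt: str) -> str:
    """Hypothetical AI model stub; a real system would call a deployed service."""
    return f"Summary of: {prompt}"

def sanitize_input(prompt: str) -> str:
    """Layer 1: reject inputs that violate basic policy before they reach the model."""
    if len(prompt) > 1000:
        raise ValueError("Prompt exceeds allowed length")
    return prompt.strip()

def check_output(text: str, banned_terms=("password", "ssn")) -> str:
    """Layer 2: screen model output against simple content rules."""
    lowered = text.lower()
    if any(term in lowered for term in banned_terms):
        raise ValueError("Output failed content screening")
    return text

def audited_call(prompt: str, log: list) -> str:
    """Layer 3: log every request/response pair for monitoring and review."""
    clean = sanitize_input(prompt)
    result = check_output(model_fn(clean))
    log.append({"prompt": clean, "response": result})
    return result

audit_log = []
print(audited_call("Q3 incident report", audit_log))
```

The point of the layering is that each safeguard fails independently: a prompt that slips past input validation can still be caught by output screening, and the audit log makes any miss visible after the fact.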

Aim for augmentation rather than complete automation

That AI can take the place of human labor is the dream of some and the nightmare of others. The truth is that in its current state, AI can’t simply operate on its own without human oversight and defined goals. Like cloud computing or SaaS, AI initiatives can only succeed if there is a plan and clearly defined goals for its integration into business processes.

Consequently, the IT leaders who will get the most value out of the technology are the ones who plan for AI augmentation rather than total automation. Tasks might be reconfigured to enable automation that is still managed and monitored by humans who understand the larger organizational context and goals.
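One common way to implement "automation managed and monitored by humans" is a confidence threshold that routes uncertain results to a person. The following sketch is illustrative only -- the classifier stub and names are assumptions, not a real system:

```python
def classify_ticket(text: str) -> tuple:
    """Hypothetical AI classifier stub returning (label, confidence)."""
    if "refund" in text.lower():
        return ("billing", 0.95)
    return ("general", 0.55)

def route(text: str, threshold: float = 0.8) -> str:
    """Auto-apply confident labels; escalate everything else to a human reviewer."""
    label, confidence = classify_ticket(text)
    if confidence >= threshold:
        return f"auto:{label}"
    return f"human_review:{label}"

print(route("Customer wants a refund"))   # high confidence: handled automatically
print(route("Strange hardware noise"))    # low confidence: escalated to a person
```

The threshold is the lever IT leaders control: lowering it automates more work, raising it keeps more decisions with people who understand the larger organizational context.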

IT leaders who see AI as normal technology -- rather than as a superforce that is beyond human control -- can deploy the technology strategically. By prioritizing integration, maintaining human oversight and applying proven management practices, they are set to minimize risk and maximize returns.

Ariella Brown is a technology journalist with experience covering AI, blockchain, IoT, and cybersecurity.