
7 machine learning challenges facing businesses

Machine learning challenges cover the spectrum from ethical and cybersecurity issues to data quality and user acceptance concerns. Read on to learn about seven common obstacles.

Machine learning promises insights that can help businesses boost customer support, combat fraud and anticipate the demand for products or services.

But deploying the technology -- and realizing the anticipated benefits -- can prove difficult, to say the least. The thorny issues of introducing any new tool come into play: Insufficient investment and lack of user acceptance rank among the obstacles. But organizations deploying machine learning (ML) must address an even broader set of concerns, from ethics to epistemic uncertainty.

Here are seven ML challenges businesses should consider:

1. Dealing with risk, from ML bias to legal peril

Organizations take on a certain amount of risk when pursuing emerging technologies. In the case of ML, the potential hazards loom large and tend to be multidimensional.

"The biggest challenges that we are seeing, that all of the organizations are working through, are really related to ethical concerns, security concerns, economic concerns and legal considerations," said Zakir Hussain, Americas data leader at consultancy EY. "Those four are typically the ones a lot of our clients are constantly asking about."

Bias in ML models ranks among the top ethical issues. For example, the data used to train such models might not be representative of all the groups of people within a given population. The resulting model will produce systematically biased results.
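One practical way to surface that risk is to compare group shares in the training data against the population the model will serve before training begins. The Python sketch below is only illustrative; the demographic column, the reference shares and the 10% tolerance are all assumptions a team would replace with its own.

```python
import pandas as pd

# Hypothetical training set with one demographic column (values are illustrative).
train = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "F", "M", "M"]})

# Assumed reference shares for the population the model will serve.
reference = {"F": 0.50, "M": 0.50}

observed = train["gender"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = float(observed.get(group, 0.0))
    if abs(actual - expected) > 0.10:  # tolerance is a judgment call
        print(f"{group}: {actual:.0%} of training data vs. {expected:.0%} expected "
              "-- investigate possible sampling bias")
```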

As for security, ML adopters must deal with several issues. Those include data scientists potentially downloading malware along with the open source models they plan to customize, as well as threat actors creating malicious prompts, said David Frigeri, a managing director who leads East Coast AI strategy at Slalom, a business and technology consulting company.

He also cited data poisoning, an attack in which a threat actor infiltrates a company's training data to influence analytics outcomes or model output.

Security issues dovetail with broader trust concerns, especially with the content-creation aspect of generative AI. "In some cases, you can't always trust the content that [AI] has created for you," Frigeri said. "There has to be some checks and balances in place to figure out what you can trust and what you can't trust."

Economic concerns, meanwhile, revolve around workplace issues and the estimated 300 million jobs AI is expected to affect, Hussain said. He noted that some companies have already stopped hiring people for positions in which they believe AI can do the job.

And in the legal field, Hussain pointed to the case of a New York City lawyer who relied on ChatGPT to create a brief. The AI tool made up legal citations, which underscores how the technology can hallucinate and introduce errors.

Organizations deploying ML must address such issues head-on, Hussain said. Measures that help mitigate risk include establishing clear ethical guidelines and governance, prioritizing data quality, emphasizing fairness in model development and ensuring the explainability of models, he added.


2. Framing the ML problem

In a rush to build models, organizations might bypass the tricky task of framing a problem that ML can address.

"We start the conversation with our clients with the simple notion of loving the problem," Frigeri said. "Too many times, people get enamored with the solution before they fully understand, qualitatively and quantitatively, what the problem is."

Projects evaporate when organizations fail to select a strong candidate problem for ML. If the chosen problem doesn't move the proverbial needle, proofs of concept become under-resourced and fall short of delivering "learnings or operationalization," Frigeri noted.

Organizations that struggle to frame the problem will also find it difficult to come up with appropriate use cases, hindering deployment. More than half of the 200 corporate strategists Gartner interviewed cited "establishing a clear use case" as the top obstacle when implementing emerging technologies. The market research firm's July 2023 report noted that only 20% of the strategists used AI-related tools such as ML.

Identifying use cases "traditionally isn't in the wheelhouse of corporate strategy," said David Akers, a research director in Gartner's strategy research group.

3. Investing in data quality

Data must be prepared, cleansed and structured before organizations can build effective ML tools. But many businesses want to skip this data engineering step and jump into model development, said Matt Mead, CTO at SPR, a technology modernization company in Chicago. He cited organizational resistance to data investment as a key ML challenge.

"Every company needs to make that upfront investment," Mead said. "But it's sort of an investment without any specific tangible business value."

Data engineering deliverables, as critical as they are, might not initially impress top managers outside of the IT field. For that reason, it's important for ML project leaders to ensure C-suite executives recognize the need to invest in quality data before embarking on the data science journey, Mead noted.

Neglecting data preparation often leads to problems later on in a project.

"What we are seeing is almost 90% of the work being done to figure out why an AI model or ML model isn't working is really related to data," Hussain said.

He listed a few data considerations: Where is the data coming from and how? What is its current level of quality? And how do organizations manage data quality?
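Basic profiling can turn those questions into concrete checks before model development starts. The following Python sketch is a minimal illustration built on a made-up pandas DataFrame; the column names, values and business rule are hypothetical stand-ins for whatever a team's source systems actually contain.

```python
import numpy as np
import pandas as pd

# Hypothetical extract from a source system (columns and values are illustrative).
df = pd.DataFrame({
    "customer_id": [101, 102, 102, 104, 105],
    "order_amount": [250.0, -40.0, -40.0, np.nan, 90.0],
    "region": ["east", "west", "west", None, "east"],
})

# What shape is the data in, and what are the types?
print(df.shape)
print(df.dtypes)

# Current level of quality: missing values and exact duplicate rows.
print(df.isna().mean().sort_values(ascending=False))  # share of nulls per column
print("duplicate rows:", df.duplicated().sum())

# Simple business-rule check: order amounts should never be negative.
print("negative order_amount rows:", int((df["order_amount"] < 0).sum()))
```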

[Graphic: Leading ML challenges and associated pitfalls. Businesses must overcome technical and risk mitigation challenges when they take on ML.]

4. Ensuring ML adoption

A well-planned ML deployment is, of course, important. But a single-minded pursuit of that project phase could hamper the ultimate goal of technology adoption.

Most organizations focus strictly on implementation, Mead said. But they will fail to get results if employees don't use the ML tools as intended -- or at all. Mead cited the example of a client that introduced ML in a call center to guide agents in their conversations with customers. But the ML function was buried three or four clicks deep in the call center's software stack.

"It was implemented beautifully from a data science perspective," Mead said. "From an integration perspective, it was atrocious. It had zero adoption, and therefore, the tool never provided the business value."

Frigeri emphasized employee change management, enablement and adoption as critical for realizing the value of ML.

"If employees leave it on the shelf, you're not going to get the return," he said.

Accessibility and usability top the list of adoption drivers, Frigeri said. He pointed to the example of ChatGPT, crediting the chat aspect for boosting AI's prevalence among the general public.

Within organizations, accessibility stems from user involvement. Mead recommended bringing users and "champions" into the ML process -- and getting feedback from them "early and often." Champions could include managers motivated to spur ML adoption as well as employees with influence in the organization, regardless of title, who can help ensure users embrace an implementation, he said.

Cross-functional project teams also help ensure ML is effectively integrated and presented within an organization, Mead added.

"Data science teams are fantastic at the math and the models," he said. "But they don't have a background in user experience and interface design and how to build resilient enterprise systems," Mead said.

5. Addressing data literacy

Another adoption issue: Employees might balk at using ML if they haven't touched statistics at a sophisticated level since their college days, according to Frigeri. Indeed, organizations must acknowledge data literacy as a consideration when deploying ML.

"Sometimes employees just don't feel confident in trying to use [ML] because they may be found out -- they don't really understand what it is," Frigeri said.

Organizations can bring in third-party change management services to help with employee acceptance. But the primary business stakeholder should be the party accountable for adoption -- with support from IT and internal change management offices, Frigeri noted.

"What we've found most effective is when the business actually owns the accountability for adoption," he added.

6. Optimizing ML models

The ML journey doesn't end with adoption. Organizations must continually monitor and update models to ensure performance and accuracy over time. Poorly designed models might gobble excessive amounts of compute resources or take too long to make a prediction. In addition, model drift can hamper a model's ability to accurately identify trends. This problem occurs when the data used to train a model begins to deviate from the real-world data the model encounters.
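One common way to catch this kind of drift is to compare the distribution of a feature in production against the values seen at training time, for instance with a two-sample Kolmogorov-Smirnov test. The sketch below uses SciPy on synthetic data; the 0.05 significance threshold is an assumed choice, not a universal rule.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Feature values seen at training time vs. in production (synthetic data).
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted on purpose

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # assumed significance threshold
    print(f"Possible drift (KS statistic={stat:.3f}, p={p_value:.4f}) "
          "-- consider retraining or checking the data pipeline")
else:
    print("No significant distribution shift detected")
```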

ML engineers, however, use various techniques to improve a model's performance. Model optimization, for example, might involve modifying a model's underlying code to reduce memory and CPU usage. Another optimization approach is retraining a model on new data to address drift. ML regularization approaches, meanwhile, help models generalize better and prevent overfitting.
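As a concrete illustration of regularization, the sketch below compares a weakly and a strongly regularized logistic regression in scikit-learn, where a smaller `C` value means a stronger L2 penalty. The data is synthetic and the parameter values are assumptions, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic classification data standing in for a real business dataset.
X, y = make_classification(n_samples=1_000, n_features=20, n_informative=5,
                           random_state=0)

# Compare a weakly and a strongly regularized model (L2 penalty).
for C in (10.0, 0.1):  # smaller C = stronger regularization
    model = LogisticRegression(penalty="l2", C=C, max_iter=1_000)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"C={C}: mean cross-validated accuracy {scores.mean():.3f}")
```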

In this context, machine learning operations (MLOps) practices can help organizations manage the entire ML lifecycle -- including monitoring models and retraining them to boost performance. Capital One, a financial services company, is among the enterprises that have adopted MLOps to handle the design, deployment and ongoing management of ML models.
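One lightweight monitoring pattern along these lines is to track a rolling quality metric on live predictions and flag the model for retraining once it dips below an agreed floor. The sketch below is only a sketch; the window size and accuracy floor are assumptions each team would set for its own context, and it does not describe any particular company's MLOps setup.

```python
from collections import deque

WINDOW_SIZE = 500       # assumed monitoring window
ACCURACY_FLOOR = 0.80   # assumed minimum acceptable rolling accuracy

# Rolling record of recent prediction outcomes (1 = correct, 0 = incorrect).
recent_outcomes = deque(maxlen=WINDOW_SIZE)

def record_outcome(was_correct: bool) -> None:
    """Log whether a prediction matched the eventual ground truth."""
    recent_outcomes.append(1 if was_correct else 0)

def needs_retraining() -> bool:
    """Flag the model once rolling accuracy drops below the floor."""
    if len(recent_outcomes) < WINDOW_SIZE:
        return False  # not enough evidence yet
    return sum(recent_outcomes) / len(recent_outcomes) < ACCURACY_FLOOR
```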

7. Accepting uncertainty

Businesses must be willing to accept the risk of investing in an ML project that never bears fruit.

Traditionally, software development projects have been deterministic, Mead said. Organizations often need to overcome delays and cost overruns, but most of the time they end up with software that implements their requirements. "For the most part, I think people get to where they need to go," he said.

The same degree of certainty, however, doesn't exist with ML.

"You don't know for sure that you're going to be able to do what you set out to do," Mead said. "And that's just the nature of data science."

A business could expend a third of a project's budget before it realizes its envisioned use case isn't suitable or that a predictive model won't achieve at least 75% accuracy, he said.

"I think a lot of people don't understand the risks," Mead said. "You've got to do some initial preliminary model development to see where things are at and whether or not you can get the intended business value."

John Moore is a writer for TechTarget Editorial covering the CIO role, economic trends and the IT services industry.
