
Machine learning on edge devices solves lack of data scientists

Enterprises are having a hard time hiring data scientists, who are scarce and high-priced. But as edge devices get smarter, they may not need them as badly.

The current approach to AI and machine learning is great for big companies that can afford to hire data scientists. But questions remain as to how smaller companies, which often lack the hiring budgets to bring in high-priced data scientists, can tap into the potential of AI. One potential solution may lie in doing machine learning on edge devices.

Gadi Singer, vice president of the Artificial Intelligence Products Group and general manager of architecture at Intel, said in an interview at the O'Reilly AI Conference in New York that even one or two data scientists are enough to manage AI integration at most enterprises. 

But will the labor force supply adequate amounts of trained data scientists to cover all enterprises' AI ambitions? Big tech companies like Netflix and Microsoft have enough money to source talent. But experts aren't sure that smaller companies with tighter budgets can find the talent they need.

Simon Crosby, CTO of Swim, an intelligent device company based in San Jose, Calif., said it's unrealistic to expect there will be enough trained workers to oversee and manage AI implementations at every enterprise. One way to combat that shortage is to build edge devices with embedded intelligence that allows them to learn and adjust automatically.

"Things are going to get smart because they process their own data and formulate theories about the world," Crosby said.

"The rate at which we are generating computational capability combines very well with this notion that objects in the real world -- traffic lights and so on -- can digitally form theories about how they will behave in the future, then prove them or disprove them and adjust."

How edge-based learning works

Experts at the conference said edge-based learning could play an important role in making AI tools even more useful to enterprises. Machine learning on edge devices -- which removes the need to upload data to the cloud for analysis by a centrally trained algorithm -- can, in theory, reduce errors. Most problems in AI development stem from poor training data and unrepresentative models, Crosby said. Edge learning is different.

"One of the interesting things about edge-based learning is that the notions of overfitting and underfitting just don't apply," he said.

"You end up with [edge devices] that have a good way of understanding and predicting their own behavior, and they may be biased because of their particular situation but they're just individual things. There isn't systematic, or system-wide bias."

Thinking about how bias plays out in AI will only grow more important as intelligent edge devices proliferate. This is what Martial Hebert, director of the Robotics Institute at Carnegie Mellon University, called the second type of AI: models that train themselves as they collect data, rather than being trained on previously collected data.
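The "train while collecting data" idea can be illustrated with a minimal online-learning sketch. This is a hypothetical example, not Swim's or any vendor's actual implementation: a simulated device (here, a traffic light tracking a made-up queue-length signal) makes a prediction from each new sensor reading, measures its error, and takes one gradient step -- so the model improves continuously without any dataset ever leaving the edge.

```python
import random


class OnlineLinearModel:
    """A minimal edge learner: predict, then update, one sample at a time.

    Hypothetical illustration of edge-based online learning; the
    device never stores raw samples or uploads them to the cloud.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # One stochastic-gradient step on squared error for this
        # single observation, then the observation is discarded.
        error = self.predict(x) - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * error * xi
        self.b -= self.lr * error
        return abs(error)


# Simulate a traffic light learning its own local pattern.
random.seed(0)
model = OnlineLinearModel(n_features=1, lr=0.1)
errors = []
for t in range(2000):
    x = [random.uniform(0, 1)]   # e.g., a local sensor reading
    y = 3.0 * x[0] + 0.5         # the true (unknown) relationship
    errors.append(model.update(x, y))

early = sum(errors[:100]) / 100
late = sum(errors[-100:]) / 100
print(late < early)  # prediction error shrinks as the device learns
```

Because each device fits only its own data stream, any bias it picks up is local to its situation, which matches Crosby's point that there is no single system-wide model to skew.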

Unlike low-risk, first-type machine learning and deep learning -- think Netflix's recommendation engine -- second-type AI involves algorithms whose failure can cause physical damage or loss of life.

In second-type AI use cases, the self-learning and local capabilities of edge-trained algorithms allow for more immediate response times, as the technology assesses situations independently and responds accordingly. While that speed and independence can be beneficial, they also create risk if an algorithm malfunctions.

The need for an algorithmic code of conduct

If you panic thinking about autonomous devices, you're not alone in that fear. Because these systems can operate without human interaction, Crosby suggested that the models themselves may need to adhere to a code of conduct in order to universally address bias, risk and successful implementation.

Such a code of conduct would require a measure of transparency from both AI devices and their learning methodologies, and would ensure that bots and algorithms performing machine learning on edge devices operate according to an industry-wide set of guidelines.

"There ought to be a set of principles whereby we as a society have certain principles which everybody agrees to abide by, which minimize risk," Crosby said.
