SAN FRANCISCO -- A warped piece of glass separated a video camera from the computer monitor it pointed at. Successive images appeared on the monitor, and the camera captured each one, relaying them back to a laptop.
Yet the images the camera captured weren't distorted the way they appeared to humans viewing the monitor through the warped glass.
That’s because the laptop ran the images through a neural network explicitly trained to eliminate distortion, highlighting one of the potential real-world uses of AI. The system is called SharpWave.
“SharpWave uses AI to remove distortion from data and allow machines to operate basically in the real world,” explained Tim Ensor, director of artificial intelligence at Cambridge Consultants.
Cambridge Consultants demos
The U.K.-headquartered product development and technology consulting firm showcased SharpWave, along with three other AI-driven technologies, at a limited event days before the AI Summit conference in San Francisco on Sept. 25-26.
Revealed in 2018, SharpWave, like the other technologies showcased, is not a commercial product in its demoed form; it was built primarily to show the capabilities and uses of AI.
Still, the technology is functional and works in real time.
“We’ve trained an AI, in this case, to understand what this glass does,” how it distorts the image, Ensor said.
As it understands what the glass does to an image, the system “can create this view of what actually should be behind the glass,” he said.
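The idea Ensor describes, learning what a known distortion does from example pairs and then inverting it, can be illustrated with a minimal sketch. This is not SharpWave's actual architecture (Cambridge Consultants has not published it); here the "glass" is a hypothetical fixed linear distortion on tiny flattened images, and the "network" is a single linear layer fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the warped glass: a fixed linear distortion
# applied to flattened 8x8 "images". The model never sees this matrix
# directly, only its effect on training examples.
distort = np.eye(64) + 0.1 * rng.normal(size=(64, 64))

# Training pairs: (distorted view, clean image) -- analogous to training
# the system on examples of what the glass does to an image.
clean = rng.random(size=(500, 64))
warped = clean @ distort.T

# Learn the inverse mapping with least squares (a one-layer "network").
inverse, *_ = np.linalg.lstsq(warped, clean, rcond=None)

# Apply the learned correction to a new distorted image.
test_clean = rng.random(64)
test_warped = distort @ test_clean
restored = test_warped @ inverse

# The restored image closely matches the view "behind the glass".
print(np.max(np.abs(restored - test_clean)))
```

A real system would use a deep convolutional network and nonlinear, spatially varying distortions, but the training setup is the same shape: pairs of distorted and undistorted views of the same scene.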
This type of technology isn’t new. Adobe, for example, pioneered commercial, AI-driven products that can help eliminate basic blur and distortion in videos and images.
Ensor said Cambridge Consultants has been in talks with companies in the medical field – SharpWave technology could help correct an anomaly on a medical scope, for example – and with developers of autonomous vehicles.
During the demo event, which Cambridge Consultants held in conjunction with NetApp and Nvidia on Sept. 23 at The Box SF meeting space here, Ensor ran through three other AI-driven technologies:
- An AI system trained on thousands of pieces of artwork that can take things drawn on a tablet and apply different art styles to them
- A system that can, in real-time, identify the genre of music played on a piano
- An AI tool that can help identify tuberculosis in patients
AI for TB
Cambridge Consultants’ BacillAi is a proof-of-concept system that can process large numbers of microscope slides of sputum samples and scan them for TB.
The system relies on a standard medical-grade microscope and a smartphone to capture images of the slides, and can then process dozens of images at a time via a connected laptop.
BacillAi scans for the presence of TB cells, which show up as purple after the sample has been stained.
Typically, a physician would have to manually count the number of TB cells visible across a hundred or so slides, a lengthy process, Ensor noted.
However, Luke Smith, senior AI engineer at Cambridge Consultants, said that using BacillAi, a doctor could instead move the microscope around a sample, take a photo at each new view and send those photos from a phone to a laptop. Results come back almost immediately, he said.
“We get the count much more quickly,” Smith said.
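The counting step BacillAi automates can be sketched in miniature. BacillAi itself uses a trained model; this toy sketch instead uses simple color thresholding plus connected-component counting on a synthetic image, where hypothetical "TB cells" are purple blobs (strong red and blue channels, weak green) as in a stained sample.

```python
import numpy as np

# Toy stand-in for a stained sputum slide image: a small RGB array
# containing three purple blobs at hypothetical positions.
img = np.zeros((40, 40, 3))
yy, xx = np.ogrid[:40, :40]
for cy, cx in [(8, 8), (20, 25), (32, 12)]:
    blob = (yy - cy) ** 2 + (xx - cx) ** 2 <= 9
    img[blob] = (0.6, 0.1, 0.7)  # purple

# Threshold: purple pixels have strong red and blue, weak green.
mask = (img[..., 0] > 0.4) & (img[..., 2] > 0.4) & (img[..., 1] < 0.3)

def count_blobs(mask):
    """Count connected purple regions with a simple flood fill."""
    seen = np.zeros(mask.shape, dtype=bool)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]):
                continue
            if seen[cy, cx] or not mask[cy, cx]:
                continue
            seen[cy, cx] = True
            stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count

print(count_blobs(mask))  # → 3
```

Real slides are far noisier, with overlapping cells and staining artifacts, which is why a trained model rather than a fixed color threshold is needed in practice.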
Healthcare providers could use the AI system to help identify other diseases if it were trained on them, Ensor said.
While the AI technologies are not available in their current demo forms, the demos illustrated potential real-world uses of AI.