Meta AI's new model focuses on making computer vision tools and applications more accessible to enterprises.
On April 5, the AI research lab, a part of Meta Platforms, introduced its Segment Anything Model (SAM). Part of the research lab's Segment Anything project, SAM can isolate and cut out any object in an image with a single click. The project also includes a new segmentation data set.
SAM can not only identify objects in images it hasn't been trained on, but can also segment images automatically within seconds, according to Meta. In addition, the model can generalize to object categories it has never encountered during training.
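A segmentation model such as SAM produces a binary mask for the prompted object; "cutting out" the object then amounts to keeping only the pixels the mask covers. The sketch below is illustrative only and does not use Meta's actual API: the image and mask are fabricated with NumPy to show what applying a segmentation mask looks like.

```python
import numpy as np

# Toy 4x4 RGB image; in practice this would be a real photo.
image = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)

# Pretend a segmentation model returned this mask for a clicked object:
# a 2x2 region in the middle of the image.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# "Cut out" the object: keep masked pixels, zero out the background.
# mask[..., None] broadcasts the 2D mask across the 3 color channels.
cutout = np.where(mask[..., None], image, 0)

print(cutout[1, 1])  # object pixel: original values survive
print(cutout[0, 0])  # background pixel: zeroed out
```

In a real pipeline, the mask would come from the model given a point or box prompt, and the same masking step would extract the object for editing or compositing.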
A challenge in computer vision
Many in the computer vision market have found it challenging to access a well-labeled set of images that they can use to train a model.
"This basically gives you a way to get after that problem," said Rowan Curran, an analyst at Forrester Research. He said it is a much better solution than manually tagging things in images or having an algorithm identify the images with a limited number of classifications.
"That ability to segment arbitrary pieces of any image and do it in a way that makes semantic sense -- that is quite powerful," Curran added. "This further democratizes a lot of computer vision applications by making it easier to create label data in order to train traditional models."
SAM aligns with Meta AI's purpose of releasing research and moving the market forward.
"They want to improve AI technology for better social graph recommendations," said Kashyap Kompella, an analyst at RPA2AI Research. In addition, Meta needs AI for content moderation, translation, flagging fake news, targeting ads, image processing and its metaverse strategy, he said.
"Segment Anything Model is a building block that can be used in several [augmented reality] applications and fits well with their stated metaverse strategy," Kompella said.
Though SAM has numerous uses -- such as image and video editing, medical imaging, and scientific research -- narrower models might be better, Kompella said. For example, AI trained specifically on many medical images might outperform a general-purpose model such as SAM, which was trained on a broad data set.
"For high-stakes applications, we will still need domain-specific segmentation models," he continued.
According to Meta, SAM was tested for performance across different genders, skin tones and ages, and showed no sign of bias.
The model, data set and paper are now available under the open source Apache License for users who want to experiment and build on top of them quickly.
Esther Ajao is a news writer covering artificial intelligence software and systems.