In AI and machine learning, supervised learning systems are trained on data sets that pair inputs with desired outputs, which are labeled for classification. The labels provide a learning basis for classifying future data. A support vector machine (SVM) is a supervised algorithm that sorts data into two groups. It does this by drawing a line, or more generally a hyperplane, that separates the groups according to patterns in the training data.
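The idea of a separating hyperplane can be sketched in a few lines of code. The weights and bias below are illustrative values for a hypothetical, already-chosen hyperplane, not a trained model:

```python
def classify(point, weights, bias):
    """Return +1 or -1 depending on which side of the
    hyperplane w . x + b = 0 the point falls."""
    score = sum(w * x for w, x in zip(weights, point)) + bias
    return 1 if score >= 0 else -1

# Hypothetical hyperplane x + y - 3 = 0 separating two groups.
w, b = [1.0, 1.0], -3.0
print(classify([0.5, 0.5], w, b))  # below the line -> -1
print(classify([3.0, 2.0], w, b))  # above the line -> 1
```

Every point on one side of the hyperplane gets one label, and every point on the other side gets the other; training an SVM amounts to choosing the weights and bias.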
An SVM builds a learning model that assigns new examples to one group or the other. Because of this, an SVM is called a non-probabilistic binary linear classifier. When probability estimates are needed, an SVM's outputs can be calibrated with methods such as Platt scaling.
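Platt scaling maps the SVM's raw decision score through a fitted sigmoid to get a probability. A minimal sketch, with illustrative (not fitted) sigmoid parameters A and B, assuming the score comes from an already-trained SVM:

```python
import math

def platt_probability(score, A=-1.0, B=0.0):
    """Map a raw SVM decision score to P(y = +1 | x) with the
    sigmoid 1 / (1 + exp(A * score + B)). In real Platt scaling,
    A and B are fitted to held-out data; these are placeholders."""
    return 1.0 / (1.0 + math.exp(A * score + B))

print(platt_probability(4.0))   # score far on the positive side -> near 1
print(platt_probability(-4.0))  # score far on the negative side -> near 0
```

A score of exactly zero, a point on the hyperplane itself, maps to a probability of 0.5 with these parameters.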
Like other supervised learning systems, an SVM must be trained on labeled data. The training examples are represented as points in space and labeled by group, with the groups clearly separated. The algorithm then seeks the hyperplane that best separates the groups: the one that maximizes the margin, that is, the gap between the boundary and the nearest training points on either side. After processing numerous training examples, the SVM can classify new, unlabeled data.
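Margin maximization can be sketched with a simple subgradient-descent loop on the SVM's hinge loss (a Pegasos-style update). The toy data set and hyperparameters below are illustrative; real applications would use a library such as scikit-learn:

```python
import random

X = [(1.0, 2.0), (2.0, 3.0), (-1.0, -1.5), (-2.0, -1.0)]  # toy 2-D points
y = [1, 1, -1, -1]                                         # their group labels

lam = 0.01      # regularization strength (trades margin width vs. violations)
w = [0.0, 0.0]  # weight vector defining the hyperplane w . x = 0

random.seed(0)
for t in range(1, 2001):
    i = random.randrange(len(X))              # pick one training example
    eta = 1.0 / (lam * t)                     # decaying step size
    margin = y[i] * (w[0] * X[i][0] + w[1] * X[i][1])
    # Shrink w (the regularization step), then, if the example violates
    # the margin requirement y * (w . x) >= 1, nudge w toward it.
    w = [(1 - eta * lam) * wj for wj in w]
    if margin < 1:
        w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]

predictions = [1 if w[0] * a + w[1] * b >= 0 else -1 for a, b in X]
print(predictions)  # matches y on this linearly separable toy set
```

The regularization term keeps the weight vector small, which for the hinge loss is equivalent to pushing the separating hyperplane's margin as wide as possible.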
SVMs were invented by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963. Since then, they have been used in text, hypertext and image classification, as well as in handwritten character recognition, and biology labs have applied them to tasks such as classifying proteins. More broadly, supervised and unsupervised learning systems are used in chatbots, self-driving cars, facial recognition programs, expert systems and robots, among other things.