What is stemming?
Stemming is the process of reducing an inflected word to its stem by removing affixes such as prefixes and suffixes, or to its dictionary root form, known as the lemma. Stemming is important in natural language understanding (NLU) and natural language processing (NLP).
Stemming is part of linguistic studies in morphology, as well as of artificial intelligence (AI), information retrieval and information extraction. Stemming helps AI systems extract meaningful information from vast sources such as big data or the internet, since additional forms of a word related to a subject may need to be searched to get the best results. Stemming is also a part of queries and internet search engines.
Recognizing, searching and retrieving more forms of a word returns more results. When a form of a word is recognized, it's possible to return search results that otherwise might have been missed. That additional retrieved information is why stemming is integral to search queries and information retrieval.
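The recall benefit described above can be illustrated with a minimal sketch. The toy stemmer and documents below are invented for illustration; real search engines use far more sophisticated stemmers and indexing.

```python
# Toy illustration of how stemming query and document terms improves recall.
# The stemmer here is deliberately simplistic, not a production algorithm.

def toy_stem(word):
    # Strip a few common English suffixes; real stemmers are far more careful.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            word = word[: -len(suffix)]
            # Collapse a doubled final consonant left by suffixes like -ing.
            if len(word) >= 2 and word[-1] == word[-2] and word[-1] not in "aeiou":
                word = word[:-1]
            break
    return word

docs = {
    1: "he runs every morning",
    2: "running shoes on sale",
    3: "a guide to cooking pasta",
}

def search(query, stem=False):
    terms = query.lower().split()
    if stem:
        terms = [toy_stem(t) for t in terms]
    hits = []
    for doc_id, text in docs.items():
        words = text.split()
        if stem:
            words = [toy_stem(w) for w in words]
        if any(t in words for t in terms):
            hits.append(doc_id)
    return hits

print(search("run"))             # exact matching misses both documents: []
print(search("run", stem=True))  # stemmed matching finds them: [1, 2]
```

Without stemming, the query "run" matches nothing, because the documents contain only the inflected forms "runs" and "running"; with stemming, both relevant documents are retrieved.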
When a new word is found, it can present new research opportunities. Often, the best results can be attained by using the basic morphological form of the word: the lemma. To find the lemma, stemming is performed by an individual or by an algorithm within an AI system. Stemming uses a number of approaches to reduce a word from whatever inflected form is encountered to its base form.
It can be simple to develop a stemming algorithm. Some simple algorithms merely strip recognized prefixes and suffixes. However, these simple algorithms are prone to error. For example, an error can reduce words like laziness to lazi instead of the lemma lazy. Such algorithms also have difficulty with words whose inflected forms don't mirror the lemma, such as saw and see.
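Both failure modes described above can be reproduced in a few lines. This is a deliberately naive sketch, not a real stemming algorithm; the suffix list is illustrative.

```python
# A deliberately naive suffix stripper, to show the failure modes described
# above. Real stemmers add rewrite rules and exception lists for these cases.

SUFFIXES = ("ness", "ing", "ly", "ed", "s")

def naive_stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            return word[: -len(suffix)]
    return word

print(naive_stem("laziness"))  # "lazi" -- not the lemma "lazy"
print(naive_stem("saw"))       # "saw" -- the irregular lemma "see" is unreachable
```

Pure suffix stripping cannot restore the spelling change in laziness, and no amount of stripping can connect the irregular form saw to its lemma see.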
Examples of stemming algorithms include:
- Lookups of inflected word forms. This approach requires that all inflected forms of a word be listed in advance.
- Suffix stripping. Algorithms recognize known suffixes on inflected words and remove them.
- Lemmatization. This algorithm collects all inflected forms of a word and reduces them to their root dictionary form, or lemma. Words are assigned a part of speech by applying the rules of grammar.
- Stochastic models. This algorithm learns from tables of inflected word forms. By learning which suffixes occur and the rules by which they are applied, the algorithm can stem new words it has not seen before.
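Two of the approaches above, lookup of listed inflected forms and suffix stripping, can be combined in a short sketch. The exception table and suffix rules below are invented examples, not a complete lemmatizer.

```python
# Sketch combining two approaches from the list above: an exception table
# maps irregular inflected forms directly to their lemma (lookup), and
# regular forms fall back to rule-based suffix stripping. Entries are
# illustrative only; a real system lists thousands of forms.

IRREGULAR = {
    "saw": "see",
    "went": "go",
    "mice": "mouse",
}

def lemmatize(word):
    word = word.lower()
    if word in IRREGULAR:                            # lookup of inflected forms
        return IRREGULAR[word]
    for suffix in ("ies", "es", "ing", "ed", "s"):   # suffix stripping fallback
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            stem = word[: -len(suffix)]
            if suffix == "ies":
                stem += "y"                          # e.g. "parties" -> "party"
            return stem
    return word

print(lemmatize("saw"))      # "see" (exception lookup)
print(lemmatize("parties"))  # "party" (suffix rule)
```

The lookup handles irregular forms that defeat suffix stripping, while the rules keep the table small by covering the regular cases.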