Overfitting is the over-optimization of an artificial intelligence (AI) model, in which the pursuit of accuracy on training data goes too far and may result in false positives.
Overfitting contrasts with underfitting, which can also produce inaccuracies. Overfitting is often referred to as overtraining, and underfitting as undertraining. Both undermine the accuracy of a model by producing trend observations and predictions that do not follow the reality of the data.
False positives from overfitting can distort the predictions and assertions made by AI. Underfitting, on the other hand, can miss patterns that should be captured because the model is too simple and general. On unseen data, an overfit model makes errors that reflect the quirks of its training data: the model has begun to memorize training results instead of learning to accurately predict data it has not seen before.
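The memorization effect described above can be sketched with a toy experiment. In this hypothetical illustration (all names and data are invented for demonstration), a high-degree polynomial is fit to a handful of noisy samples of a simple linear trend. The complex model interpolates the training points almost perfectly, yet performs worse than a simple linear fit on held-out points, because it has memorized the noise rather than learned the trend.

```python
import numpy as np

# Invented toy data: a linear trend y = x observed with noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)   # trend plus noise
x_test = np.linspace(0.05, 0.95, 10)              # unseen points
y_test = x_test                                   # noise-free ground truth

def fit_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = fit_errors(1)  # simple model: matches the true trend
train_hi, test_hi = fit_errors(9)  # degree 9 interpolates all 10 points

print(f"simple:  train MSE={train_lo:.4f}, test MSE={test_lo:.4f}")
print(f"complex: train MSE={train_hi:.2e}, test MSE={test_hi:.4f}")
```

The complex model's near-zero training error alongside its larger test error is the signature of overfitting: it reproduces its training data, including the noise, at the expense of generalization.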
For example, consider an AI model searching for the number 1 in handwritten data. Depending on the clarity of the handwriting, a false positive might be classifying some 7s as 1s, which would indicate overfitting. Conversely, failing to recognize 1s written in some styles of handwriting would be an omission indicating underfitting.
Overfitting can be the result of overtraining, a lack of validation, improper validation, or adjusting weights and attempting further optimization after final testing. It can also result from training data that is too noisy or contains irrelevant information.
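The lack-of-validation cause mentioned above has a standard remedy: hold out a validation set that the model never trains on, and use it to choose how complex the model is allowed to become. The sketch below (a minimal illustration with invented data, again using polynomial degree as a stand-in for model complexity) fits candidate models only on the training split and picks the degree with the lowest validation error.

```python
import numpy as np

# Invented toy data: a sine curve observed with noise.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.15, size=30)

# Hold out every third point for validation; never fit on it.
val_idx = np.arange(0, 30, 3)
train_idx = np.setdiff1d(np.arange(30), val_idx)

def val_mse(degree):
    """Fit on the training split only; score on the validation split."""
    coeffs = np.polyfit(x[train_idx], y[train_idx], degree)
    pred = np.polyval(coeffs, x[val_idx])
    return np.mean((pred - y[val_idx]) ** 2)

errors = {d: val_mse(d) for d in range(1, 15)}
best = min(errors, key=errors.get)
print(f"chosen degree: {best}, validation MSE: {errors[best]:.4f}")
```

Because the validation points were never used for fitting, a model that merely memorizes its training data scores poorly on them, so this selection step penalizes overtraining. The same idea, monitoring validation error during training and stopping when it rises, is known as early stopping.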