
Dimensionality reduction
What is dimensionality reduction?
Dimensionality reduction is a technique for reducing the number of dimensions, or features, in a data set. The goal is to decrease the data set's complexity by reducing the number of features while keeping the most important properties of the original data.
Dimensionality reduction is useful to AI developers and data professionals who work with massive data sets, visualize data or analyze complex data. It also aids data compression, letting the data take up less storage space and reducing computation times. The technique is commonly used in machine learning (ML).
Dimensionality reduction is carried out with techniques such as feature selection and feature extraction. Each technique, in turn, uses several methods that simplify the modeling of complex problems, eliminate redundancy and reduce the possibility of the model overfitting.
Why is dimensionality reduction important for machine learning?
Machine learning requires large data sets to train properly, and dimensionality reduction is a particularly useful way to prevent overfitting when solving classification and regression problems.
The process preserves the most relevant information while reducing the number of features in a data set. In particular, it removes irrelevant features, which can otherwise decrease the accuracy of machine learning algorithms.
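As a rough illustration of that effect, the following minimal sketch (in scikit-learn, with an arbitrary choice of 50 noise features and a k-nearest neighbors classifier) pads a small data set with irrelevant features and then compresses them away:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Load a small, clean data set and pad it with 50 pure-noise
# features (the count is an arbitrary illustrative choice).
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 50))])

knn = KNeighborsClassifier()
# Accuracy on the padded data: the noise dimensions dominate the
# distance computations and drag the score down.
print(cross_val_score(knn, X_noisy, y, cv=5).mean())

# Project down to 4 components; the high-variance signal survives
# the compression, so the score typically recovers.
X_reduced = PCA(n_components=4).fit_transform(X_noisy)
print(cross_val_score(knn, X_reduced, y, cv=5).mean())
```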
What are different techniques for dimensionality reduction?
There are two common dimensionality reduction techniques: feature selection and feature extraction.
- In feature selection, a small subset of the most relevant features is chosen from the larger set of original features, typically through filtering, wrapping or embedding. The goal is to reduce the data set's dimensionality while keeping its most important features.
- Feature extraction combines and transforms the data set's original features to create new features. The goal is to create a lower-dimensional data set that still preserves the original data's key properties.
Feature selection uses different methods, such as the following (a short code sketch of all three appears after the list):
- The filter method. Ranks features with a statistical measure, independent of any model, and keeps only the most relevant features of the original data set.
- The wrapper method. Trains an ML model on candidate feature subsets and uses the model's performance to decide whether a feature should be added or removed.
- The embedded method. Performs selection during the model's own training, for example through regularization that shrinks the weights of unhelpful features toward zero.
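Here is a minimal scikit-learn sketch of the three approaches; the breast cancer data set, the target of 10 features and the logistic regression estimator are illustrative assumptions, not prescriptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE, SelectFromModel, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)  # 30 original features

# Filter: score each feature independently (ANOVA F-test) and
# keep the 10 highest-scoring ones; no model is involved.
X_filter = SelectKBest(f_classif, k=10).fit_transform(X, y)

# Wrapper: recursive feature elimination repeatedly fits the model
# and drops the weakest feature until 10 remain.
X_wrapper = RFE(LogisticRegression(max_iter=5000),
                n_features_to_select=10).fit_transform(X, y)

# Embedded: L1 regularization zeroes out coefficients during
# training, so selection happens inside the model's own fitting.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
X_embedded = SelectFromModel(l1_model).fit_transform(X, y)

print(X_filter.shape, X_wrapper.shape, X_embedded.shape)
```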
Feature extraction uses methods such as the following (again, a sketch of all three follows the list):
- Principal component analysis (PCA). A statistical method that transforms possibly correlated features into a smaller set of uncorrelated variables, called principal components, which capture as much of the data's variance as possible.
- Linear discriminant analysis (LDA). A supervised method that finds the linear combinations of features that best separate the different classes in the data.
- T-distributed stochastic neighbor embedding (t-SNE). An unsupervised, nonlinear method that builds a probability distribution over pairs of points in the high-dimensional space and then finds a low-dimensional map whose pairwise distribution matches it as closely as possible, preserving local neighborhoods.
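A minimal sketch of these three methods in scikit-learn; the iris data set and the two-component targets are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)  # 4 features, 3 classes

# PCA: unsupervised; project onto the 2 directions of maximum variance.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised; find the axes that best separate the classes
# (at most n_classes - 1 = 2 components here).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

# t-SNE: unsupervised and nonlinear; preserves local neighborhoods
# rather than global distances.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(X_pca.shape, X_lda.shape, X_tsne.shape)  # each (150, 2)
```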
Other methods used in dimensionality reduction include the following:
- Factor analysis.
- High correlation filter (sketched below).
- Uniform manifold approximation and projection (UMAP).
- Random forest.
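Of these, the high correlation filter is simple enough to sketch directly. The following minimal pandas example assumes a made-up four-column data set and an arbitrary 0.9 threshold, dropping one feature from each pair whose absolute correlation exceeds it:

```python
import numpy as np
import pandas as pd

# Hypothetical data set in which column "b" nearly duplicates "a".
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
df = pd.DataFrame({
    "a": base[:, 0],
    "b": base[:, 0] + rng.normal(scale=0.05, size=200),
    "c": base[:, 1],
    "d": base[:, 2],
})

corr = df.corr().abs()
# Keep only the upper triangle so each feature pair is checked once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
df_reduced = df.drop(columns=to_drop)

print(to_drop)                       # ['b']
print(df_reduced.columns.tolist())   # ['a', 'c', 'd']
```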

Benefits and challenges of dimensionality reduction
Dimensionality reduction has benefits, such as the following:
- Improved performance. Reducing the number of features removes irrelevant data and noise, which lowers computational cost and can improve model performance.
- Easier visualization. High-dimensional data is difficult to visualize; reducing it to two or three dimensions makes plotting and exploring the data practical.
- Prevents overfitting. High-dimensional data can lead to overfitting in ML models, which dimensionality reduction helps prevent.
- Reduced storage space. Reduces required storage space, as the process eliminates irrelevant and redundant data.
The process does come with downsides, however, such as the following:
- Data loss. Ideally, the reduced representation retains enough information to reconstruct the original data, but in practice the process usually discards some information, which can affect how training algorithms perform.
- Interpretability. It might be difficult to understand the relationships between original features and the reduced dimensions.
- Computational complexity. Some reduction methods might be more computationally intensive than others.
- Outliers. If not detected and handled beforehand, outliers can skew the results of the dimensionality reduction process.
To improve the performance of an ML model, dimensionality reduction can also be used as a data preparation step.
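As a closing illustration, here is a minimal sketch of that idea using a scikit-learn Pipeline; the digits data set, the 20-component PCA and the logistic regression classifier are illustrative choices, not a recommended recipe:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)  # 64 pixel features per image

# Scale, compress 64 features to 20 principal components, then classify.
pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=20),
                     LogisticRegression(max_iter=1000))

print(cross_val_score(pipe, X, y, cv=5).mean())
```

Because the PCA step sits inside the pipeline, it is refit on each training fold, so no information from the held-out fold leaks into the cross-validation score.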