
transfer learning

What is transfer learning?

Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a related task. It is a popular approach in deep learning because it enables deep neural networks to be trained with far less data than building a model from scratch would require.

Typically, training a model takes a large amount of compute resources and time. Using a pre-trained model as a starting point helps cut down on both.

Machine learning algorithms are typically designed to address isolated tasks. Through transfer learning, methods are developed to transfer knowledge from one or more source tasks to improve learning in a related target task. For the knowledge to be transferable, the already trained model's task must be similar to the new one. For example, the knowledge a supervised system gains from recognizing images of dogs -- general visual features such as edges, textures and body shapes -- could be transferred to a new system built to recognize images of cats, sparing it from relearning those features from scratch.

[Figure: Transfer learning takes the knowledge from an already pre-trained model to help with a new model's task.]

Transfer learning theory

During transfer learning, knowledge from a source task is used to improve learning in a new task. If the transfer instead decreases performance on the new task, it's called negative transfer. A major challenge in developing transfer methods is ensuring positive transfer between related tasks while avoiding negative transfer between less related ones.

When applying knowledge from one task to another, the original task's characteristics are usually mapped onto those of the other task to specify correspondence. A human typically provides this mapping, but there are evolving methods that perform the mapping automatically.

The effectiveness of a transfer learning technique is commonly measured with three indicators:

  • The first indicator measures whether the target task can be performed at all using only the transferred knowledge.
  • The second measures how long it takes to learn the target task with the transferred knowledge versus how long it would take without it.
  • The third measures whether the final performance on the target task learned via transfer is comparable to the performance achieved by learning the task without transferred knowledge.
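The second indicator -- learning speed -- can be made concrete by comparing how many training epochs a model needs to reach a target accuracy from a transferred starting point versus from scratch. The following is a minimal numpy sketch under toy assumptions: the "transferred" weights are simply a slightly perturbed copy of the source task's solution, standing in for knowledge gained on a related task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: labels come from a known linear decision boundary.
w_true = rng.normal(size=3)
X = rng.normal(size=(200, 3))
y = (X @ w_true > 0).astype(float)

def epochs_to_threshold(w, threshold=0.95, lr=0.1, max_epochs=500):
    """Train logistic regression by gradient descent starting from w;
    return how many epochs it takes to reach the accuracy threshold."""
    for epoch in range(1, max_epochs + 1):
        p = 1 / (1 + np.exp(-X @ w))            # predictions with current w
        if ((p > 0.5) == (y == 1)).mean() >= threshold:
            return epoch
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return max_epochs

# "Transferred" weights: a slightly perturbed copy of the source solution.
w_transferred = w_true + 0.01 * rng.normal(size=3)

from_scratch = epochs_to_threshold(np.zeros(3))
with_transfer = epochs_to_threshold(w_transferred.copy())
print(f"epochs from scratch: {from_scratch}, with transfer: {with_transfer}")
```

Because the transferred starting point is already close to a good solution, the model crosses the accuracy threshold in far fewer epochs than the randomly initialized one.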

How to use transfer learning

Transfer learning can be accomplished in several ways. One way is to find a related task -- call it Task B -- that has plenty of transferable labeled data. A model is trained on Task B, and that trained model then serves as the starting point for solving the original task -- Task A.
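This Task B-to-Task A workflow can be sketched with a toy numpy logistic regression. Everything here is a simplifying assumption for illustration: the two tasks share one decision boundary, which is what makes Task B's knowledge transferable; Task B has abundant labels while Task A has very few.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, w=None, lr=0.5, epochs=300):
    """Gradient-descent logistic regression; w is the starting point."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == (y == 1)).mean()

# Both tasks share one decision boundary -- the toy stand-in for
# "knowledge from Task B transfers to Task A."
w_shared = rng.normal(size=5)

# Task B: a related task with plenty of labeled data.
X_b = rng.normal(size=(1000, 5))
y_b = (X_b @ w_shared > 0).astype(float)
w_b = train_logreg(X_b, y_b)

# Task A: the task we actually care about, with very little data.
X_a = rng.normal(size=(20, 5))
y_a = (X_a @ w_shared > 0).astype(float)

# Fine-tune the Task B weights briefly on the small Task A set.
w_a = train_logreg(X_a, y_a, w=w_b.copy(), epochs=20)

X_test = rng.normal(size=(500, 5))
y_test = (X_test @ w_shared > 0).astype(float)
print(f"Task A test accuracy: {accuracy(w_a, X_test, y_test):.3f}")
```

Twenty labeled examples would normally be far too few to learn Task A well, but starting from the Task B weights, a brief fine-tune is enough.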

Another way to accomplish transfer learning is to use a pre-trained model. This process is easier because it starts from a model that has already been trained, typically on a large data set, to solve a task similar to Task A. Such models can be imported from other developers who have published them online.
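In practice, importing a published pre-trained model comes down to loading parameters that someone else saved after training. A minimal sketch of that round trip, with invented weights and file name standing in for a real published checkpoint:

```python
import tempfile
from pathlib import Path

import numpy as np

# Hypothetical published weights, standing in for a model downloaded online.
pretrained = {"w": np.array([0.5, -1.2, 0.8]), "b": np.array(0.1)}

# The publisher saves the trained parameters to a file...
path = Path(tempfile.gettempdir()) / "pretrained_demo.npz"
np.savez(path, **pretrained)

# ...and a new project loads them as its starting point instead of
# initializing randomly and training from scratch.
checkpoint = np.load(path)
w, b = checkpoint["w"], checkpoint["b"]
print(w, b)
```

Deep learning frameworks wrap this same idea in higher-level utilities for downloading and restoring full model checkpoints.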

A third approach, called feature extraction or representation learning, uses deep learning to identify the features most important for Task A; these features then serve as a learned representation of the task. Whereas features are traditionally engineered by hand, deep learning extracts them automatically, and data scientists then choose which of the learned features to include in the model. Because the representation is general, it can be reused for other tasks as well.
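Feature extraction can be sketched as freezing a representation and training only a small head on top of it. In this toy numpy example, a fixed random projection stands in (hypothetically) for the lower layers of an already trained network, and the new task's labels are defined in feature space so the frozen representation suffices to learn it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frozen feature extractor: a fixed random projection plus a nonlinearity,
# standing in for the lower layers of a trained network.
W_frozen = rng.normal(size=(10, 6))

def extract_features(X):
    """The learned representation; its weights are never updated."""
    return np.tanh(X @ W_frozen)

# Labels for the new task are defined in feature space, so the task is
# learnable from the frozen representation alone (a toy assumption).
X = rng.normal(size=(300, 10))
F = extract_features(X)
w_true_head = rng.normal(size=6)
y = (F @ w_true_head > 0).astype(float)

# Train only the small head on top of the extracted features.
w_head = np.zeros(6)
for _ in range(500):
    p = 1 / (1 + np.exp(-F @ w_head))
    w_head -= 0.5 * F.T @ (p - y) / len(y)

acc = (((F @ w_head) > 0) == (y == 1)).mean()
print(f"training accuracy with frozen features: {acc:.3f}")
```

Only the six head weights are trained here; the extractor itself never changes, which is what makes the approach cheap to apply to new tasks.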

Transfer learning examples

Transfer learning can be used in areas such as neural networks, natural language processing (NLP) and computer vision.

In machine learning, knowledge or data gained while solving one problem is stored, labeled and then applied to a different but related problem. For example, the knowledge gained by a machine learning algorithm to recognize cars could later be transferred for use in a separate machine learning model being developed to recognize other types of vehicles.

Transfer learning is also useful during the deployment of upgraded technology, such as a chatbot. If the new domain is similar enough to previous deployments, transfer learning can assess which knowledge should be transplanted. Using transfer learning, developers can decide what knowledge and data is reusable from the previous deployments and transfer that information for use when developing the upgraded version.

In NLP, for example, a model trained to understand the vocabulary used in one region can serve as the starting point for a new model whose goal is to understand dialects across multiple regions. An organization could then apply the resulting model to sentiment analysis.

A neural network might be used to search through medical images with the goal of recognizing potential illnesses or ailments. In this case, transfer learning could be used to help identify these ailments using pre-trained models in cases where there's insufficient data to train the network on.


This was last updated in September 2023
