Transfer Learning: Leveraging Pre-Trained Models for New Tasks in Python (+Keras)
Transfer Learning is a technique in Deep Learning that enables a pre-trained model to be reused on a new task that is similar to the original task. Transfer Learning can save time and computational resources by leveraging the knowledge gained from the original task. The pre-trained model can be fine-tuned or used as a feature extractor for the new task.
Using Pre-Trained Models in Keras
Keras is a popular Deep Learning library that supports several pre-trained models that can be used for Transfer Learning. These pre-trained models are trained on large datasets and can recognize patterns that are useful for many different tasks.
We will start by importing the necessary libraries, including Keras for loading the pre-trained model and NumPy for numerical computations.
import numpy as np
from keras.applications import VGG16
from keras.layers import Flatten, Dense
from keras.models import Model
Load Pre-Trained Model
Next, we will load a pre-trained model, VGG16, using Keras.
# Load pre-trained model
model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
In this example, we load the VGG16 model pre-trained on the ImageNet dataset, exclude its fully connected classifier layers at the top (include_top=False), and specify the input shape.
Next, we will freeze the layers in the pre-trained model to prevent them from being updated during training.
# Freeze layers
for layer in model.layers:
    layer.trainable = False
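Freezing every layer gives pure feature extraction; for fine-tuning, you can instead leave the last few layers trainable. A minimal sketch of that pattern, using a small stand-in model (not VGG16, so it runs instantly without downloading weights — the same loop applies to any pre-trained model's layers):

```python
# Sketch: partial unfreezing for fine-tuning, shown on a small
# stand-in model rather than VGG16 so it runs without any downloads.
from keras import Input
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Input(shape=(8,)),
    Dense(16, activation='relu'),
    Dense(16, activation='relu'),
    Dense(4, activation='softmax'),
])

# Freeze every layer except the last one, which stays trainable
for layer in model.layers[:-1]:
    layer.trainable = False

print([layer.trainable for layer in model.layers])  # [False, False, True]
```

Fine-tuning the last block of a large network often improves accuracy when the new dataset is reasonably large, at the cost of longer training.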
Add New Layers
Next, we will add new layers on top of the pre-trained model for the new task.
# Add new layers (num_classes is the number of classes in your new task)
x = Flatten()(model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
In this example, we add a Flatten layer to convert the output of the pre-trained model into a 1-dimensional array, a Dense layer with 256 neurons, and a final Dense layer with the number of output classes.
Next, we will compile the new model and specify the loss function, optimizer, and evaluation metric.
# Compile model
model = Model(inputs=model.input, outputs=predictions)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
In this example, we use categorical cross-entropy loss, Adam optimizer, and accuracy as the evaluation metric.
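Note that categorical cross-entropy expects one-hot encoded labels, so integer class labels need converting before training. A quick sketch using keras.utils.to_categorical (the label values here are made up):

```python
import numpy as np
from keras.utils import to_categorical

# Integer class labels (made-up example values)
labels = np.array([0, 2, 1, 2])

# One-hot encode for use with categorical_crossentropy
y = to_categorical(labels, num_classes=3)
print(y.shape)  # (4, 3)
```

Alternatively, you can keep integer labels and use sparse_categorical_crossentropy as the loss instead.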
Next, we will train the new model on the new task.
# Train model (X_train and y_train are the images and one-hot labels for the new task)
model.fit(X_train, y_train, epochs=10, batch_size=32)
In this example, we train the model for 10 epochs with a batch size of 32.
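One caveat: VGG16 expects its inputs to be preprocessed the same way as during its original ImageNet training. A minimal sketch using keras.applications.vgg16.preprocess_input on a random stand-in image (in practice you would load a real image, e.g. with keras.utils.load_img):

```python
import numpy as np
from keras.applications.vgg16 import preprocess_input

# Random stand-in for a 224x224 RGB image
img = np.random.randint(0, 256, size=(224, 224, 3)).astype('float32')

# Add a batch dimension and apply VGG16's channel-wise preprocessing
batch = preprocess_input(np.expand_dims(img, axis=0))
print(batch.shape)  # (1, 224, 224, 3)
```

Applying the same preprocessing to X_train before calling fit helps the frozen layers see inputs in the distribution they were trained on.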
In this tutorial, we covered the basics of Transfer Learning and how to use pre-trained models in Keras. We also showed how to freeze the pre-trained layers, add new layers on top, compile the combined model, and train it on a new task. Transfer Learning is a powerful technique that can save time and computational resources and is useful for many different applications.
I hope you found this tutorial useful in understanding Transfer Learning in Python. Please check out my book: A.I. & Machine Learning — When you don’t know sh#t: A Beginner’s Guide to Understanding Artificial Intelligence and Machine Learning (https://a.co/d/98chOwB)