# -*- coding: utf-8 -*-
"""Building a Transformer with PyTorch.ipynb
Automatically generated by Colab.
Original file is located at
https://colab.research.google.com/drive/1zBHzozBzSAfDmO00CA_HHGk7gLQC7u-y
https://www.datacamp.com/tutorial/building-a-transformer-with-py-torch
Building a Transformer with PyTorch
Learn how to build a Transformer model using PyTorch, a powerful tool in modern machine learning.
Aug 3, 2023 · 26 min read
The aim of this tutorial is to provide a comprehensive understanding of how to construct a Transformer model using PyTorch. The Transformer is one of the most powerful architectures in modern machine learning. It has revolutionized the field, particularly Natural Language Processing (NLP) tasks such as language translation and text summarization, where Transformers have largely replaced Long Short-Term Memory (LSTM) networks thanks to their ability to handle long-range dependencies and to compute in parallel.
The tool utilized in this guide to build the Transformer is PyTorch, a popular open-source machine learning library known for its simplicity, versatility, and efficiency. With a dynamic computation graph and extensive libraries, PyTorch has become a go-to for researchers and developers in the realm of machine learning and artificial intelligence.
For those unfamiliar with PyTorch, DataCamp's course Introduction to Deep Learning with PyTorch is recommended for a solid grounding.
Background and Theory
First introduced in the paper Attention Is All You Need by Vaswani et al., Transformers have since become a cornerstone of many NLP tasks due to their unique design and effectiveness.
At the heart of the Transformer is the attention mechanism, specifically 'self-attention', which allows the model to weigh and prioritize different parts of the input data. Fundamentally, it is a weighting scheme: when producing a given output, the model assigns each word or feature in the input sequence a 'weight' that signifies its importance to that output. This is what enables Transformers to manage long-range dependencies in data.
For instance, in a sentence translation task, while translating a particular word, the model might assign higher attention weights to words that are grammatically or semantically related to the target word. This process allows the Transformer to capture dependencies between words or features, regardless of their distance from each other in the sequence.
Transformers' impact on the field of NLP cannot be overstated. They have outperformed traditional models in many tasks, demonstrating a superior, more nuanced capacity to comprehend and generate human language.
For a deeper understanding of NLP, DataCamp's Introduction to Natural Language Processing in Python course is a recommended resource.
Setting up PyTorch
Before diving into building a Transformer, it is essential to set up the working environment correctly. First and foremost, PyTorch needs to be installed. PyTorch (current stable version - 2.0.1) can be easily installed through pip or conda package managers.
For pip, use the command:
"""
!pip3 install torch torchvision torchaudio
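"""For conda users, a typical install command (verify the exact invocation with the selector on the PyTorch website for your platform) is shown below; run it in a terminal rather than in this notebook:
"""
# conda install pytorch torchvision torchaudio -c pytorch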
"""For using pytorch with a cpu kindly visit the pytorch documentation.
Additionally, it is beneficial to have a basic understanding of deep learning concepts, as these will be fundamental to understanding the operation of Transformers. For those who need a refresher, the DataCamp course Deep Learning in Python is a valuable resource that covers key concepts in deep learning.
Building the Transformer Model with PyTorch
To build the Transformer model, the following steps are necessary:
1. Importing the libraries and modules
2. Defining the basic building blocks: Multi-Head Attention, Position-wise Feed-Forward Networks, Positional Encoding
3. Building the Encoder block
4. Building the Decoder block
5. Combining the Encoder and Decoder layers to create the complete Transformer network
1. Importing the necessary libraries and modules
We’ll start by importing the PyTorch library for core functionality, the neural network module for creating neural networks, the optimization module for training networks, and the data utility functions for handling data. Additionally, we’ll import the standard Python math module for mathematical operations and the copy module for creating copies of complex objects.
These tools set the foundation for defining the model's architecture, managing data, and establishing the training process.
"""
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import math
import copy
"""2. Defining the basic building blocks: Multi-Head Attention, Position-wise Feed-Forward Networks, Positional Encoding
Multi-head Attention
The Multi-Head Attention mechanism computes the attention between each pair of positions in a sequence. It consists of multiple “attention heads” that capture different aspects of the input sequence.
To know more about Multi-Head Attention, check out the Attention Mechanisms section of the Large Language Models (LLMs) Concepts course.

"""
class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        # Ensure that the model dimension (d_model) is divisible by the number of heads
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"

        # Initialize dimensions
        self.d_model = d_model  # Model's dimension
        self.num_heads = num_heads  # Number of attention heads
        self.d_k = d_model // num_heads  # Dimension of each head's key, query, and value

        # Linear layers for transforming inputs
        self.W_q = nn.Linear(d_model, d_model)  # Query transformation
        self.W_k = nn.Linear(d_model, d_model)  # Key transformation
        self.W_v = nn.Linear(d_model, d_model)  # Value transformation
        self.W_o = nn.Linear(d_model, d_model)  # Output transformation

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        # Calculate attention scores
        attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)

        # Apply mask if provided (useful for preventing attention to certain parts like padding)
        if mask is not None:
            attn_scores = attn_scores.masked_fill(mask == 0, -1e9)

        # Softmax is applied to obtain attention probabilities
        attn_probs = torch.softmax(attn_scores, dim=-1)

        # Multiply by values to obtain the final output
        output = torch.matmul(attn_probs, V)
        return output

    def split_heads(self, x):
        # Reshape the input to have num_heads for multi-head attention
        batch_size, seq_length, d_model = x.size()
        return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)

    def combine_heads(self, x):
        # Combine the multiple heads back to original shape
        batch_size, _, seq_length, d_k = x.size()
        return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)

    def forward(self, Q, K, V, mask=None):
        # Apply linear transformations and split heads
        Q = self.split_heads(self.W_q(Q))
        K = self.split_heads(self.W_k(K))
        V = self.split_heads(self.W_v(V))

        # Perform scaled dot-product attention
        attn_output = self.scaled_dot_product_attention(Q, K, V, mask)

        # Combine heads and apply output transformation
        output = self.W_o(self.combine_heads(attn_output))
        return output
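"""A quick, hedged sanity check (not part of the original tutorial): with random inputs, the layer should return a tensor of the same shape as its query input. The tensor names below are illustrative only.
"""
_mha = MultiHeadAttention(d_model=512, num_heads=8)
_x = torch.randn(2, 10, 512)          # (batch_size, seq_length, d_model)
print(_mha(_x, _x, _x).shape)         # expected: torch.Size([2, 10, 512])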
"""In summary, the MultiHeadAttention class encapsulates the multi-head attention mechanism commonly used in transformer models. It takes care of splitting the input into multiple attention heads, applying attention to each head, and then combining the results. By doing so, the model can capture various relationships in the input data at different scales, improving the expressive ability of the model.
Position-wise Feed-Forward Networks
"""
class PositionWiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super(PositionWiseFeedForward, self).__init__()
        self.fc1 = nn.Linear(d_model, d_ff)  # Expand from d_model to the inner dimension d_ff
        self.fc2 = nn.Linear(d_ff, d_model)  # Project back down to d_model
        self.relu = nn.ReLU()

    def forward(self, x):
        # Applied to every position independently and identically
        return self.fc2(self.relu(self.fc1(x)))
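"""A brief, hedged illustration (added here, not in the original tutorial): because the feed-forward block acts on each position independently, a (batch_size, seq_length, d_model) tensor keeps its shape. Variable names are illustrative only.
"""
_ffn = PositionWiseFeedForward(d_model=512, d_ff=2048)
print(_ffn(torch.randn(2, 10, 512)).shape)   # expected: torch.Size([2, 10, 512])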
"""In summary, the PositionWiseFeedForward class defines a position-wise feed-forward neural network that consists of two linear layers with a ReLU activation function in between. In the context of transformer models, this feed-forward network is applied to each position separately and identically. It helps in transforming the features learned by the attention mechanisms within the transformer, acting as an additional processing step for the attention outputs.
Positional Encoding
Positional Encoding is used to inject the position information of each token in the input sequence. It uses sine and cosine functions of different frequencies to generate the positional encoding.
"""
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length):
        super(PositionalEncoding, self).__init__()

        # Precompute the sinusoidal encodings for every position up to max_seq_length
        pe = torch.zeros(max_seq_length, d_model)
        position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))

        pe[:, 0::2] = torch.sin(position * div_term)  # Sine on even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # Cosine on odd dimensions

        # Register as a buffer so it is saved with the model but not trained
        self.register_buffer('pe', pe.unsqueeze(0))

    def forward(self, x):
        # Add the encoding for the first seq_length positions to the embeddings
        return x + self.pe[:, :x.size(1)]
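"""A small, hedged check (illustrative, not from the original tutorial): the encoding is stored as a non-trainable buffer of shape (1, max_seq_length, d_model) and is simply added to the embeddings, so the input shape is preserved.
"""
_pe = PositionalEncoding(d_model=512, max_seq_length=100)
print(_pe.pe.shape)                         # torch.Size([1, 100, 512])
print(_pe(torch.randn(2, 10, 512)).shape)   # torch.Size([2, 10, 512])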
"""Summary
The PositionalEncoding class adds information about the position of tokens within the sequence. Since the transformer model lacks inherent knowledge of the order of tokens (due to its self-attention mechanism), this class helps the model to consider the position of tokens in the sequence. The sinusoidal functions used are chosen to allow the model to easily learn to attend to relative positions, as they produce a unique and smooth encoding for each position in the sequence.
3. Building the Encoder Blocks

"""
class EncoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        # Self-attention sub-layer with residual connection and layer normalization
        attn_output = self.self_attn(x, x, x, mask)
        x = self.norm1(x + self.dropout(attn_output))

        # Feed-forward sub-layer with residual connection and layer normalization
        ff_output = self.feed_forward(x)
        x = self.norm2(x + self.dropout(ff_output))
        return x
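"""A hedged usage sketch (not part of the original tutorial): one encoder layer run on random embeddings with a padding mask that hides the last two positions. The mask shape (batch_size, 1, 1, seq_length) broadcasts over heads and query positions, matching how generate_mask builds src_mask later in this tutorial. Variable names are illustrative only.
"""
_enc = EncoderLayer(d_model=512, num_heads=8, d_ff=2048, dropout=0.1)
_x = torch.randn(2, 10, 512)
_pad_mask = torch.ones(2, 1, 1, 10, dtype=torch.bool)
_pad_mask[:, :, :, -2:] = False        # pretend the last two tokens are padding
print(_enc(_x, _pad_mask).shape)       # expected: torch.Size([2, 10, 512])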
"""Summary:
The EncoderLayer class defines a single layer of the transformer's encoder. It encapsulates a multi-head self-attention mechanism followed by a position-wise feed-forward network, with residual connections, layer normalization, and dropout applied as appropriate. These components together allow the encoder to capture complex relationships in the input data and transform them into a useful representation for downstream tasks. Typically, multiple such encoder layers are stacked to form the complete encoder part of a transformer model.
4. Building the Decoder Blocks

"""
class DecoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super(DecoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.cross_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_output, src_mask, tgt_mask):
        # Masked self-attention over the target sequence
        attn_output = self.self_attn(x, x, x, tgt_mask)
        x = self.norm1(x + self.dropout(attn_output))

        # Cross-attention over the encoder output
        attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
        x = self.norm2(x + self.dropout(attn_output))

        # Position-wise feed-forward sub-layer
        ff_output = self.feed_forward(x)
        x = self.norm3(x + self.dropout(ff_output))
        return x
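"""A hedged usage sketch (illustrative, not from the original tutorial): one decoder layer run with a dummy encoder output, a permissive source mask, and a 'no-peek' causal target mask so each position can only attend to itself and earlier positions.
"""
_dec = DecoderLayer(d_model=512, num_heads=8, d_ff=2048, dropout=0.1)
_tgt = torch.randn(2, 10, 512)
_enc_out = torch.randn(2, 10, 512)
_src_mask = torch.ones(2, 1, 1, 10, dtype=torch.bool)            # attend everywhere in the source
_causal_mask = torch.tril(torch.ones(1, 1, 10, 10)).bool()       # lower-triangular no-peek mask
print(_dec(_tgt, _enc_out, _src_mask, _causal_mask).shape)       # expected: torch.Size([2, 10, 512])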
"""Summary:
The DecoderLayer class defines a single layer of the transformer's decoder. It consists of a multi-head self-attention mechanism, a multi-head cross-attention mechanism (that attends to the encoder's output), a position-wise feed-forward neural network, and the corresponding residual connections, layer normalization, and dropout layers. This combination enables the decoder to generate meaningful outputs based on the encoder's representations, taking into account both the target sequence and the source sequence. As with the encoder, multiple decoder layers are typically stacked to form the complete decoder part of a transformer model.
Next, the Encoder and Decoder blocks are brought together to construct the comprehensive Transformer model.
5. Combining the Encoder and Decoder layers to create the complete Transformer network

"""
class Transformer(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout):
        super(Transformer, self).__init__()
        self.encoder_embedding = nn.Embedding(src_vocab_size, d_model)
        self.decoder_embedding = nn.Embedding(tgt_vocab_size, d_model)
        self.positional_encoding = PositionalEncoding(d_model, max_seq_length)

        self.encoder_layers = nn.ModuleList([EncoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
        self.decoder_layers = nn.ModuleList([DecoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])

        self.fc = nn.Linear(d_model, tgt_vocab_size)
        self.dropout = nn.Dropout(dropout)

    def generate_mask(self, src, tgt):
        # Padding masks: hide positions where the token id is 0
        src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
        tgt_mask = (tgt != 0).unsqueeze(1).unsqueeze(3)

        # "No-peek" mask: prevent each target position from attending to future positions
        seq_length = tgt.size(1)
        nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
        tgt_mask = tgt_mask & nopeak_mask
        return src_mask, tgt_mask

    def forward(self, src, tgt):
        src_mask, tgt_mask = self.generate_mask(src, tgt)

        # Embed the tokens, add positional information, and apply dropout
        src_embedded = self.dropout(self.positional_encoding(self.encoder_embedding(src)))
        tgt_embedded = self.dropout(self.positional_encoding(self.decoder_embedding(tgt)))

        # Pass the source through the encoder stack
        enc_output = src_embedded
        for enc_layer in self.encoder_layers:
            enc_output = enc_layer(enc_output, src_mask)

        # Pass the target through the decoder stack, attending to the encoder output
        dec_output = tgt_embedded
        for dec_layer in self.decoder_layers:
            dec_output = dec_layer(dec_output, enc_output, src_mask, tgt_mask)

        # Project decoder outputs to target-vocabulary logits
        output = self.fc(dec_output)
        return output
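"""Before moving on, a quick hedged sanity check (not part of the original tutorial): a small Transformer built with toy hyperparameters should map integer token sequences to per-position logits over the target vocabulary. All names and values below are illustrative only.
"""
_tiny = Transformer(src_vocab_size=100, tgt_vocab_size=100, d_model=32, num_heads=4,
                    num_layers=2, d_ff=64, max_seq_length=20, dropout=0.1)
_src = torch.randint(1, 100, (2, 12))   # (batch_size, seq_length)
_tgt = torch.randint(1, 100, (2, 12))
print(_tiny(_src, _tgt).shape)          # expected: torch.Size([2, 12, 100])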
"""Summary:
The Transformer class brings together the various components of a Transformer model, including the embeddings, positional encoding, encoder layers, and decoder layers. It provides a convenient interface for training and inference, encapsulating the complexities of multi-head attention, feed-forward networks, and layer normalization.
This implementation follows the standard Transformer architecture, making it suitable for sequence-to-sequence tasks like machine translation, text summarization, etc. The inclusion of masking ensures that the model adheres to the causal dependencies within sequences, ignoring padding tokens and preventing information leakage from future tokens.
These sequential steps empower the Transformer model to efficiently process input sequences and produce corresponding output sequences.
Training the PyTorch Transformer Model
Sample data preparation
For illustrative purposes, a dummy dataset will be crafted in this example. However, in a practical scenario, a more substantial dataset would be employed, and the process would involve text preprocessing along with the creation of vocabulary mappings for both the source and target languages.
"""
src_vocab_size = 5000
tgt_vocab_size = 5000
d_model = 512
num_heads = 8
num_layers = 6
d_ff = 2048
max_seq_length = 100
dropout = 0.1
transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout)
# Generate random sample data
src_data = torch.randint(1, src_vocab_size, (64, max_seq_length)) # (batch_size, seq_length)
tgt_data = torch.randint(1, tgt_vocab_size, (64, max_seq_length)) # (batch_size, seq_length)
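"""An optional, hedged aside (not part of the original tutorial): with these hyperparameters the model is fairly large, which can be confirmed by counting its trainable parameters.
"""
print(f"Trainable parameters: {sum(p.numel() for p in transformer.parameters() if p.requires_grad):,}")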
"""Hyperparameters:
These values define the architecture and behavior of the transformer model:
src_vocab_size, tgt_vocab_size: Vocabulary sizes for source and target sequences, both set to 5000.
d_model: Dimensionality of the model's embeddings, set to 512.
num_heads: Number of attention heads in the multi-head attention mechanism, set to 8.
num_layers: Number of layers for both the encoder and the decoder, set to 6.
d_ff: Dimensionality of the inner layer in the feed-forward network, set to 2048.
max_seq_length: Maximum sequence length for positional encoding, set to 100.
dropout: Dropout rate for regularization, set to 0.1.
Creating a Transformer Instance:
"""
transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout)
"""This line creates an instance of the Transformer class, initializing it with the given hyperparameters. The instance will have the architecture and behavior defined by these hyperparameters.
Generating Random Sample Data:
The following lines generate random source and target sequences:
src_data: Random token ids between 1 and src_vocab_size - 1 (0 is excluded because it is reserved as the padding index), representing a batch of source sequences with shape (64, max_seq_length).
tgt_data: Random token ids between 1 and tgt_vocab_size - 1, representing a batch of target sequences with shape (64, max_seq_length).
These random sequences can be used as inputs to the transformer model, simulating a batch of data with 64 examples and sequences of length 100.
Summary:
The code snippet demonstrates how to initialize a transformer model and generate random source and target sequences that can be fed into the model. The chosen hyperparameters determine the specific structure and properties of the transformer. This setup could be part of a larger script where the model is trained and evaluated on actual sequence-to-sequence tasks, such as machine translation or text summarization.
Training the Model
Next, the model will be trained utilizing the aforementioned sample data. However, in a real-world scenario, a significantly larger dataset would be employed, which would typically be partitioned into distinct sets for training and validation purposes.
"""
criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = optim.Adam(transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)
transformer.train()
for epoch in range(100):
    optimizer.zero_grad()
    output = transformer(src_data, tgt_data[:, :-1])
    loss = criterion(output.contiguous().view(-1, tgt_vocab_size), tgt_data[:, 1:].contiguous().view(-1))
    loss.backward()
    optimizer.step()
    print(f"Epoch: {epoch+1}, Loss: {loss.item()}")
"""
Summary:
This code snippet trains the transformer model on randomly generated source and target sequences for 100 epochs. It uses the Adam optimizer and the cross-entropy loss function. The loss is printed for each epoch, allowing you to monitor the training progress. In a real-world scenario, you would replace the random source and target sequences with actual data from your task, such as machine translation.
Transformer Model Performance Evaluation
After training the model, its performance can be evaluated on a validation dataset or test dataset. The following is an example of how this could be done:
"""
transformer.eval()
# Generate random sample validation data
val_src_data = torch.randint(1, src_vocab_size, (64, max_seq_length)) # (batch_size, seq_length)
val_tgt_data = torch.randint(1, tgt_vocab_size, (64, max_seq_length)) # (batch_size, seq_length)
with torch.no_grad():
    val_output = transformer(val_src_data, val_tgt_data[:, :-1])
    val_loss = criterion(val_output.contiguous().view(-1, tgt_vocab_size), val_tgt_data[:, 1:].contiguous().view(-1))
    print(f"Validation Loss: {val_loss.item()}")
"""Evaluation Mode:
transformer.eval(): Puts the transformer model in evaluation mode. This is important because it turns off certain behaviors like dropout that are only used during training.
Generate Random Validation Data:
val_src_data: Random token ids between 1 and src_vocab_size - 1, representing a batch of validation source sequences with shape (64, max_seq_length).
val_tgt_data: Random token ids between 1 and tgt_vocab_size - 1, representing a batch of validation target sequences with shape (64, max_seq_length).
Validation Loop:
with torch.no_grad(): Disables gradient computation, as we don't need to compute gradients during validation. This can reduce memory consumption and speed up computations.
val_output = transformer(val_src_data, val_tgt_data[:, :-1]): Passes the validation source data and the validation target data (excluding the last token in each sequence) through the transformer.
val_loss = criterion(...): Computes the loss between the model's predictions and the validation target data (excluding the first token in each sequence). The loss is calculated by reshaping the data into one-dimensional tensors and using the previously defined cross-entropy loss function.
print(f"Validation Loss: {val_loss.item()}"): Prints the validation loss value.
Summary:
This code snippet evaluates the transformer model on a randomly generated validation dataset, computes the validation loss, and prints it. In a real-world scenario, the random validation data should be replaced with actual validation data from the task you are working on. The validation loss can give you an indication of how well your model is performing on unseen data, which is a critical measure of the model's generalization ability.
For further details about Transformers and Hugging Face, our tutorial, An Introduction to Using Transformers and Hugging Face, is useful.
Conclusion and Further Resources
In conclusion, this tutorial demonstrated how to construct a Transformer model using PyTorch, one of the most versatile tools for deep learning. With their capacity for parallelization and the ability to capture long-term dependencies in data, Transformers have immense potential in various fields, especially NLP tasks like translation, summarization, and sentiment analysis.
For those eager to deepen their understanding of advanced deep learning concepts and techniques, consider exploring the course Advanced Deep Learning with Keras on DataCamp. You can also read about building a simple neural network with PyTorch in a separate tutorial.
"""