KarpathyLLMChallenge

The content below is generated by LLMs based on the Tokenization video by Andrej Karpathy


Understanding the Complexities of Tokenization in Language Models

Tokenization is a foundational process in natural language processing (NLP), serving as the bridge between raw text and the numerical representations that language models (LMs) understand. The approach to tokenization can significantly impact the performance and capabilities of LMs. This article navigates through the complexities of tokenization, examining its importance, methods, and the challenges it poses in the development and functioning of state-of-the-art language models.


The GPT Development Journey

In our journey to understand and build a GPT model, we always begin with a dataset for training. For this purpose, we’ve chosen the Tiny Shakespeare dataset, which is a collection of selected works by Shakespeare and serves as an excellent starting point due to its rich vocabulary and complex sentence structures.

# Downloading the Tiny Shakespeare dataset  
!wget https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt

# Output:  
--2023-01-17 01:39:27--  https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt  
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, ...  
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.  
HTTP request sent, awaiting response... 200 OK  
Length: 1115394 (1.1M) [text/plain]  
Saving to: ‘input.txt’

input.txt           100%[===================>]   1.06M  --.-KB/s    in 0.04s   

2023-01-17 01:39:28 (29.0 MB/s) - ‘input.txt’ saved [1115394/1115394]  

Once we have our dataset, it’s time to read it and perform a preliminary inspection to understand its structure. The dataset’s content is a large string, and here’s how to read it:

# Read the dataset to inspect it  
with open('input.txt', 'r', encoding='utf-8') as f:  
    text = f.read()

# Printing the length of dataset in characters  
print(f"Length of dataset in characters: {len(text)}")

# Let's look at the first 1000 characters  
print(text[:1000])  

The output will display the first 1000 characters from the Shakespeare dataset, giving us a glimpse into the text we will be tokenizing and feeding into our large language model.

Understanding the Character-Level Tokenization Process

The character-level tokenization process starts by identifying all unique characters in our dataset. We then create two lookup tables: one for converting characters to integers (stoi) and another for converting integers back to characters (itos). The encode function maps a string to a list of integers, while the decode function reverses the process.

# Unique characters in the dataset  
chars = sorted(list(set(text)))  
vocab_size = len(chars)  
print(''.join(chars))  
print(vocab_size)

# Create a mapping from characters to integers  
stoi = {ch: i for i, ch in enumerate(chars)}  
itos = {i: ch for i, ch in enumerate(chars)}

# Encoder: take a string, output a list of integers  
encode = lambda s: [stoi[c] for c in s if c in stoi]  
# Decoder: take a list of integers, output a string  
decode = lambda l: ''.join([itos[i] for i in l])

# Example usage of encode and decode functions  
encoded_string = encode("Hello World")  
print(encoded_string)  
decoded_string = decode(encoded_string)  
print(decoded_string)

# Output (with the Tiny Shakespeare character vocabulary):  
# [20, 43, 50, 50, 53, 1, 35, 53, 56, 50, 42]  
# Hello World  

When we encode text this way, each character is mapped to a unique integer, allowing us to work with numerical representations of the textual data.

Preparing the Data for the Language Model

The next step involves converting the entire text dataset into a tensor representation using PyTorch, which is particularly useful for feeding the data into neural network models.

import torch

# Encode the entire text dataset  
data = torch.tensor(encode(text), dtype=torch.long)  
print(data.shape, data.dtype)

# The first 1000 characters encoded will look like this to GPT  
print(data[:1000])  
# Output will show the torch.Size and the first 1000 tokens as integers  

With the data ready, we move on to the language model itself. In this case, we are discussing a Bigram Language Model, a simplified version of more complex language models like GPT.

The Bigram Language Model

The Bigram Language Model is a basic neural network structure that we can use to understand how language models work at a fundamental level.

import torch.nn as nn  
from torch.nn import functional as F

torch.manual_seed(1337)

class BigramLanguageModel(nn.Module):  
    def __init__(self, vocab_size):  
        super().__init__()  
        # Token embedding table where each token corresponds to a unique vector  
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):  
        # idx and targets are both (B, T) tensors of integers  
        logits = self.token_embedding_table(idx)  # (B, T, C)

        if targets is None:  
            loss = None  
        else:  
            B, T, C = logits.shape  
            logits = logits.view(B * T, C)  
            targets = targets.view(-1)  
            loss = F.cross_entropy(logits, targets)

        return logits, loss

    def generate(self, idx, max_new_tokens):  
        # idx is (B, T) array of indices in the current context  
        for _ in range(max_new_tokens):  
            # Get the predictions  
            logits, loss = self(idx)  
            # Focus only on the last time step  
            logits = logits[:, -1, :]  # becomes (B, C)  
            # Apply softmax to get probabilities  
            probs = F.softmax(logits, dim=1)  # (B, C)  
            # Sample from the distribution  
            idx_next = torch.multinomial(probs, num_samples=1)  # (B, 1)  
            # Append sampled index to the running sequence  
            idx = torch.cat((idx, idx_next), dim=1)  # (B, T+1)

        return idx  

The BigramLanguageModel class demonstrates how an embedding table is used to map tokens to vectors of trainable parameters. In this bigram model, the embedding row for a token is read off directly as the logits for the next token; in a full GPT, these token embeddings would instead be processed by a stack of Transformer blocks before predictions are made or new text is generated.
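To make this concrete, here is a minimal usage sketch (not from the original text) that instantiates the bigram model and samples a few characters, reusing the torch import, vocab_size, and decode helper defined earlier; since the model is untrained, the output is essentially random characters.

# Minimal sketch: instantiate the model and sample from it (assumes the definitions above)  
model = BigramLanguageModel(vocab_size)  
# Start generation from a single token; index 0 corresponds to the newline character in this vocabulary  
context = torch.zeros((1, 1), dtype=torch.long)  
generated_ids = model.generate(context, max_new_tokens=20)[0].tolist()  
# Decode the sampled token IDs back into text (gibberish until the model is trained)  
print(decode(generated_ids))  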

Input to the Transformer

Finally, we look at how the input to the Transformer model is structured. Given a batch of token sequences, the Transformer will process each token sequence and produce corresponding predictions for the next tokens.

# Example of input to the transformer: xb is a batch of token sequences sampled from the data  
print(xb)

# Output:  
tensor([[24, 43, 58,  5, 57,  1, 46, 43],  
        [44, 53, 56,  1, 58, 46, 39, 58],  
        [52, 58,  1, 58, 46, 39, 58,  1],  
        [25, 17, 27, 10,  0, 21,  1, 54]])  

This tensor holds the raw token IDs that are fed into the Transformer; the token embedding is applied to these IDs inside the model. Each row corresponds to a sequence of tokens that the Transformer will use to predict the next set of tokens.

Understanding these foundational concepts is crucial for grasping how more advanced language models like GPT function and how they manage to generate coherent and contextually relevant text based on enormous amounts of training data.

Delving into Tokenization Schemes Beyond Character-Level

Despite the simplicity and utility of the character-level tokenization, state-of-the-art language models leverage far more complex tokenization schemes. Rather than operating at the character level, these schemes work at the “chunk” level, where chunks of characters are used to construct the token vocabulary.

Byte Pair Encoding: Striking a Balance

The Byte Pair Encoding (BPE) algorithm stands out as a critical method for creating these chunks. Initially designed for data compression, BPE has been adapted for tokenization in language models. The GPT-2 paper, titled “Language Models are Unsupervised Multitask Learners,” brought significant attention to byte-level BPE as a tokenization strategy for large language models.

Core Principles of Language Modeling

To appreciate the motivation behind the adoption of BPE, let’s consider the core principles outlined in the GPT-2 paper:

BPE serves as an elegant middle ground between character-level and word-level tokenization, effectively managing frequent symbol sequences as well as rare or novel ones.

# Implementation of Byte Pair Encoding  
from collections import defaultdict

def get_stats(vocab):  
    pairs = defaultdict(int)  
    for word, freq in vocab.items():  
        symbols = word.split()  
        for i in range(len(symbols) - 1):  
            pairs[symbols[i], symbols[i+1]] += freq  
    return pairs

def merge_vocab(pair, v_in):  
    v_out = {}  
    bigram = ' '.join(pair)  
    replacement = ''.join(pair)  
    for word in v_in:  
        w_out = word.replace(bigram, replacement)  
        v_out[w_out] = v_in[word]  
    return v_out  

In the snippet above, we define functions to compute token frequencies and merge the most frequent pairs, which are the core operations of BPE. This process iteratively merges the most frequent pairs of characters or character sequences in the vocabulary until a desired vocabulary size is reached.
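To see these two helpers working end to end, here is a small illustrative loop (a sketch, not from the original text) that runs a few merges over the toy word-frequency vocabulary used in the original BPE paper ('low', 'lower', 'newest', 'widest'):

# Toy training loop: repeatedly merge the most frequent symbol pair  
toy_vocab = {'l o w': 5, 'l o w e r': 2, 'n e w e s t': 6, 'w i d e s t': 3}  
num_merges = 4  
for _ in range(num_merges):  
    pairs = get_stats(toy_vocab)  
    if not pairs:  
        break  
    best = max(pairs, key=pairs.get)  
    toy_vocab = merge_vocab(best, toy_vocab)  
    print(best, '->', ''.join(best))  

Early merges produce subword units such as 'es' and 'est', which is exactly the behavior that lets BPE represent frequent sequences compactly while still being able to spell out rare words from smaller pieces.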

GPT-2’s Input Representation and Tokenization

The GPT-2 model expanded its vocabulary to 50,257 tokens, a substantial increase over its predecessor, and increased the context size from 512 to 1,024 tokens, adjusting its architecture to accommodate the larger context and vocabulary.

These decisions allowed GPT-2 to avoid the pitfalls of character-level tokenization while still being able to model a wide variety of strings effectively.

# Example of how BPE tokenizes a given string  
example_string = "This is an example of how BPE works."  
example_vocab = {  
    "Th is": 5, "is an": 6, "an ex": 5, "ex am": 4,   
    "am pl": 4, "pl e": 5, "e of": 9, "of ho": 5,  
    "ho w": 4, "w BP": 3, "BP E": 3, "E wo": 3,  
    "wo rk": 3, "rk s": 3, ".":10  
}

# Let's find out the most frequent pair of tokens  
pairs = get_stats(example_vocab)  
most_frequent_pair = max(pairs, key=pairs.get)

# Now, we will merge this most frequent pair in our vocabulary  
example_vocab = merge_vocab(most_frequent_pair, example_vocab)

print(f"Most frequent pair: {most_frequent_pair}")  
print(f"Updated vocabulary: {example_vocab}")  

In the code above, we simulate how BPE would tokenize an example string by first establishing a mock vocabulary with frequencies, finding the most frequent pair, and then merging this pair across the vocabulary.

Architectural Hyperparameters of GPT-2

The GPT-2 model was benchmarked in several configurations with different numbers of parameters and layers, ranging from 117M parameters with 12 layers up to 1.5B parameters with 48 layers.

This scaling of the model was accompanied by a manual tuning of the learning rate to optimize performance on the WebText dataset.

Tokenization: The Properties You Want

When considering a tokenization method, certain properties are desirable: it should be able to encode and decode arbitrary strings losslessly, compress common character sequences into single tokens, and avoid hand-crafted, language-specific preprocessing.

BPE checks these boxes, providing a robust and flexible method that aligns with the requirements of large language models like GPT-2.

Concluding Remarks on GPT-2 Tokenization

As we have seen, the tokenization process is not a mere preprocessing step but a foundational element that significantly impacts the performance and capabilities of language models. The GPT-2 model’s adoption of a byte-level BPE tokenization was a strategic move that contributed to its state-of-the-art results, demonstrating the importance of a well-thought-out tokenization strategy in the development of LLMs.

Deep Dive into LLAMA 2 Pretraining and Tokenization Details

Building on the foundations of prior models, the development of LLAMA 2 models incorporates significant advances in pretraining methodologies, data curation, and architectural improvements.

Enhanced Pretraining Techniques

For the LLAMA 2 family of models, several key changes were introduced to optimize performance, including more robust data cleaning, a roughly 40% larger pretraining corpus of about 2 trillion tokens, a doubled context length of 4,096 tokens, and grouped-query attention for the largest models.

LLAMA 2 Architecture Overview

The LLAMA 2 paper's architecture overview summarizes the high-level training details of the models, including their hyperparameters and the improvements made over their predecessors.

Pretraining Data Considerations

The training dataset for LLAMA 2 was carefully curated: it draws on roughly 2 trillion tokens of publicly available data, excludes data from Meta's own products and services, and removes content from sites known to contain large volumes of personal information.

Training Details and Hyperparameters

The training of LLAMA 2 was guided by the following hyperparameters:

# Hyperparameters for LLAMA 2 Model Training  
optimizer = AdamW  
beta1 = 0.9  
beta2 = 0.95  
epsilon = 1e-5  
lr_schedule = CosineLearningRateSchedule(warmup=2000)  
weight_decay = 0.1  
gradient_clipping = 1.0  

These choices reflect a fine-tuning of the training process to optimize the learning curve and performance of the model, as depicted in Figure 5(a) from the original paper.

LLAMA 2 and LLAMA 2+CHAT Release

The release of LLAMA 2 and its chat-tuned variants marks a significant milestone in the field of LLMs: the models are available in sizes from 7B to 70B parameters and are licensed for both research and commercial use.

For more information and resources on LLAMA 2, visit the official Meta AI resource page.

Responsible Use and Safety Considerations

Despite the excitement surrounding the release of these models, safety remains paramount: the chat models went through safety-specific fine-tuning and red-teaming, and a responsible use guide was published alongside the weights.

The comprehensive approach to pretraining, fine-tuning, and safety underscores the responsible development and deployment of LLMs.

Addressing Tokenization Challenges in LLMs

Tokenization, the process of converting text into tokens, is at the heart of many challenges encountered in LLMs. It influences how well the model can perform various tasks and understand different languages.

Common Tokenization Issues

Here are some issues that can be traced back to tokenization:

- Why can't LLMs reliably spell words or reverse strings? Tokenization.
- Why are LLMs worse at non-English languages? Tokenization.
- Why are LLMs bad at simple arithmetic? Tokenization.
- Why did GPT-2 have more trouble than necessary coding in Python? Tokenization.
- Why does an LLM behave strangely when it sees a string like "<|endoftext|>"? Tokenization.

Tokenization’s Pervasive Impact

The pervasive impact of tokenization is not to be underestimated. It is a fundamental aspect that needs to be addressed to improve LLMs’ overall capabilities and reliability. Understanding and optimizing tokenization is critical for advancing the field of language modeling.

Exploring Tokenization with Web Apps

To get a better grasp of how tokenization works in practice, let’s explore a web application that allows live experimentation with tokenization:

Visit the tokenization webapp: https://tiktokenizer.vercel.app  

This tool provides an interactive environment to input text and observe how different tokenization algorithms, such as the one used by GPT-2, break down and encode strings into tokens.


By dissecting pretraining methods, understanding the release of new LLM variants, and confronting tokenization challenges, we gain a deeper insight into the complexities and nuances of developing state-of-the-art language models. The continuous evolution of these models showcases the dynamic nature of the field and the ongoing quest to refine and enhance the capabilities of LLMs.

The Intricacies of Tokenization Visualized

Tokenization, as we’ve seen, can have a significant impact on the performance of Large Language Models (LLMs). Through a practical demonstration using a web application, we can visualize how tokenization operates in real-time.

Tokenization in Action

Using the GPT-2 tokenizer on a simple string, we can observe the process in a user-friendly format:

Tokenization Web App

The web application showcases how an English sentence is tokenized into different chunks, each represented by a unique token ID, and emphasizes the importance of whitespace in the process: spaces are not discarded but are absorbed into the beginning of the following token chunk.

This visualization underscores the inclusion of spaces as part of the token chunks. It’s also apparent that the tokenization of numbers can be inconsistent, as seen in arithmetic examples where the number 127 is a single token, but 677 is split into two separate tokens: “ 6” and “77”.
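To experiment with this locally rather than in the web app, the tiktoken library (OpenAI's tokenizer package) can be used; a small sketch, assuming tiktoken is installed:

import tiktoken

enc = tiktoken.get_encoding("gpt2")  # the GPT-2 byte-level BPE tokenizer  
for s in ["127", "677", "127 + 677 = 804"]:  
    ids = enc.encode(s)  
    # show both the token IDs and the string each token maps back to  
    print(s, "->", ids, "->", [enc.decode([i]) for i in ids])  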

Tokenization and Non-English Languages

The tokenization of non-English languages, such as Korean, reveals a different set of challenges:

Korean Tokenization Example

As the tokenizer has been trained predominantly with English data, non-English text often results in more tokens for the same content, leading to “bloated” sequences. This can affect the model’s ability to maintain context due to the finite context length of the transformer architecture.

Python Code Tokenization

When tokenizing Python code, the allocation of tokens to individual spaces can be incredibly wasteful:

Python Code Tokenization

The tokenizer assigns a separate token for each space, resulting in an inefficient tokenization that could impede the model’s coding capabilities.

# Example of Python code tokenization  
for i in range(1, 101):  
    if i % 3 == 0 and i % 5 == 0:  
        print("FizzBuzz")  

In the above snippet, the spaces before the if statement are each given their own token (token 220 in the GPT-2 vocabulary). This inefficiency is one reason behind GPT-2's poorer performance in handling Python code.

Improvements in GPT-4 Tokenization

By switching to the GPT-4 tokenizer, we can see a significant reduction in token count for the same string:

GPT-4 Tokenization Efficiency

The GPT-4 tokenizer’s capacity to handle whitespace more effectively, especially in programming languages like Python, enhances the model’s ability to process code. This deliberate design choice by OpenAI results in a denser input to the transformer, allowing it to consider more context when predicting the next token.
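Using tiktoken again, one way to observe this difference is to count tokens for the same snippet under the GPT-2 vocabulary and under cl100k_base, the vocabulary used by GPT-4; a rough sketch:

import tiktoken

code = 'for i in range(1, 101):\n    if i % 3 == 0 and i % 5 == 0:\n        print("FizzBuzz")'  
for name in ["gpt2", "cl100k_base"]:  
    enc = tiktoken.get_encoding(name)  
    # the GPT-4 tokenizer groups runs of spaces, so it needs noticeably fewer tokens here  
    print(name, len(enc.encode(code)), "tokens")  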

The Underlying Complexity of Tokenization

Tokenization is not a mere preliminary step in preparing data for LLMs; it is a complex process that requires careful consideration, especially when dealing with a variety of languages and special characters. The goal is to convert strings into a fixed set of integers, which then correspond to vectors that serve as inputs to the transformer.

Unicode and Python Strings

In Python, strings are immutable sequences of Unicode code points. Understanding this is crucial when designing tokenizers capable of handling diverse languages and symbols.

This behavior is specific to Python 3: in Python 2, str was a sequence of raw bytes, whereas Python 3 made str a sequence of Unicode code points, with the separate bytes type used for raw byte strings.

Tokenizing a Variety of Content

A tokenizer must be flexible enough to accommodate not just the English alphabet, but also other languages and a myriad of special characters found on the internet, such as emojis. This diversity is what makes the task of tokenization especially challenging in the context of LLMs.

In practice, tokenization must support:

- Multiple languages, including non-English scripts like Korean.
- Special characters and emojis.
- Case sensitivity and positional variations in words.

As we delve deeper into the inner workings of tokenization, it becomes increasingly clear that this aspect of language modeling is a linchpin for the success of LLMs. The design of tokenizers has a profound effect on a model’s ability to understand and generate text, emphasizing the need for ongoing research and optimization in this area.

Understanding Unicode Code Points

Unicode code points are the backbone of modern text encoding and are crucial for processing text in Large Language Models (LLMs). They provide a unique number for every character, no matter the platform, program, or language.

The Unicode Standard Explained

The Unicode Standard is maintained by the Unicode Consortium and supports text in all of the world's major writing systems. Its key aspects include a repertoire of roughly 150,000 characters covering 161 scripts, a unique code point for every character, and defined encoding forms (UTF-8, UTF-16, and UTF-32).

The standard also offers guidance for implementation, including topics like character normalization, composition and decomposition, and directionality.

Unicode’s Role in Technology

Unicode has replaced many incompatible character sets that were previously used in different locales and computer architectures. Its support has become essential in software development, allowing consistent representation and data exchange across different platforms and locales.

Unicode Standard

The latest version, 15.1, was released in September 2023, indicating that the standard is actively maintained and updated to accommodate new characters and scripts.

Ligatures and Script-Specific Rules

Many scripts, including Arabic and Devanagari, use ligatures (combinations of letterforms into specialized shapes) according to complex orthographic rules. Rendering them correctly requires special technologies such as complex text shaping engines and advanced font formats like OpenType.

These technologies ensure that characters and ligatures are displayed correctly across different systems.

Standardized Subsets of Unicode

Unicode also defines standardized subsets, such as WGL-4 and the Multilingual European Subsets (MES-1 and MES-2), to support specific language groups.

Accessing Unicode Code Points in Python

In Python, the ord() function can be used to find a character’s Unicode code point. For example:

# Example of accessing Unicode code point of a character  
print(ord('h'))  # Output: 104  

This can also be applied to emojis and other characters:

# Example of accessing Unicode code point of an emoji  
print(ord('😀'))  # Output: 128512 (decimal representation of the emoji's code point)  

Keep in mind that ord() can only take single Unicode characters, not entire strings. Here’s how we could encode a string into Unicode code points:

# Encoding a string into Unicode code points  
string = "Hello, World!"  
code_points = [ord(character) for character in string]  
print(code_points)  
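# Output: [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33]  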

The output will be a list of integers representing the Unicode code points for each character in the string.

The Challenge of Using Unicode Code Points for Tokenization

The direct use of Unicode code points for tokenization in LLMs is impractical for a couple of reasons: the vocabulary would be very large (roughly 150,000 tokens), and the Unicode standard is a living standard that keeps changing, so it would not be a stable representation.

Unicode in Wikipedia

Thus, while Unicode provides a way to represent characters from a wide array of languages and symbols, its direct use in tokenization is not feasible for LLMs. Tokenization strategies must strike a balance between the granularity of representation and the practical constraints of model architecture.

Exploring the Limits of Unicode for LLMs

The Unicode standard, while comprehensive and continuously evolving, presents challenges when applied directly to tokenization for Large Language Models (LLMs) due to the sheer volume of roughly 150,000 code points. In addition to this size, the fluid nature of the standard makes it an unstable foundation for a fixed token vocabulary. Therefore, a more sophisticated approach to encoding is necessary.

Delving into Python’s Encoding Capabilities

When working with Unicode in Python, we often utilize list comprehensions to convert a string into its corresponding code points:

# Converting a string to Unicode code points  
string_to_encode = 'Hello, World!'  
unicode_points = [ord(x) for x in string_to_encode]  
print(unicode_points)  

Understanding UTF Encodings

The Unicode Consortium defines three primary types of encodings: UTF-8, UTF-16, and UTF-32. These encodings allow the conversion of Unicode text into binary data or byte streams, with UTF-8 being the most prevalent.

UTF-8 Wikipedia Page

UTF-8 is a variable-length character encoding standard used for electronic communication. It encodes each Unicode code point in one to four bytes, is fully backward compatible with ASCII, and is by far the most widely used encoding on the web.

UTF-8 Encoding Explanation

To illustrate how UTF-8 encodes different Unicode code points, consider the euro sign (€), which is U+20AC:

  1. The code point lies between U+0800 and U+FFFF, requiring three bytes to encode.
  2. The three bytes can be concisely written in hexadecimal as E2 82 AC.

Here is a summary of the UTF-8 conversion rules by code point range:

- U+0000 to U+007F: one byte, 0xxxxxxx.
- U+0080 to U+07FF: two bytes, 110xxxxx 10xxxxxx.
- U+0800 to U+FFFF: three bytes, 1110xxxx 10xxxxxx 10xxxxxx.
- U+10000 to U+10FFFF: four bytes, 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx.
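We can check this byte sequence directly in Python (a quick verification, not part of the original text):

# The euro sign U+20AC encodes to three bytes in UTF-8  
euro = '\u20ac'  
print(euro.encode('utf-8'))        # b'\xe2\x82\xac'  
print(list(euro.encode('utf-8')))  # [226, 130, 172]  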

UTF-16 and UTF-32: Alternatives to UTF-8

While UTF-8 is widely adopted, it is not without alternatives. UTF-16 and UTF-32 are other encoding formats, each with its own set of advantages and disadvantages.

UTF-16 is a variable-length character encoding capable of encoding all valid Unicode code points using one or two 16-bit code units. It is primarily used by systems like the Microsoft Windows API, Java, and JavaScript/ECMAScript. However, it has not gained popularity on the web, where it is declared as the page encoding by under 0.004% of web pages.

UTF-16 Encoding

On the other hand, UTF-32 is a fixed-length encoding that uses exactly 32 bits per code point. Its primary advantage is direct indexing of Unicode code points, enabling constant-time operations for finding the Nth code point in a sequence. Despite this, UTF-32 is space-inefficient, often resulting in larger data sizes compared to UTF-8 or UTF-16.

Additional Resources for Unicode and Encoding

For those seeking to deepen their understanding of Unicode and encodings, several resources are available, including Nathan Reed's blog post "A Programmer's Introduction to Unicode" and the "UTF-8 Everywhere" manifesto, both of which are discussed below.

Programmer’s Perspective on Unicode

From a programmer’s perspective, Unicode represents a significant leap in complexity. It’s not just about using wchar_t for strings; it involves understanding the character set, working with strings and files of Unicode text, and handling diverse encodings like UTF-8 and UTF-16.

Programmer's Introduction to Unicode

Here's a summary of key points a programmer must consider: the structure of the Unicode code space, the major encodings (UTF-8 and UTF-16), combining marks and normalization, and the fact that what a user perceives as a single character may be composed of several code points.

For a comprehensive introduction to Unicode from a coder’s viewpoint, Nathan Reed’s blog “A Programmer’s Introduction to Unicode” is an invaluable resource.

In the next section, we’ll delve into the specific algorithms and techniques used for tokenization in LLMs, which address the challenges posed by Unicode’s complexity and the limitations of direct encoding methods.

UTF-8 Everywhere Manifesto

The discussion of Unicode would be incomplete without mentioning the “UTF-8 Everywhere Manifesto”. The manifesto advocates strongly for the adoption of UTF-8 as the universal character encoding, emphasizing that it should be the default choice for encoding text strings in memory, on disk, for communication, and for all other uses. The manifesto outlines several reasons for this, including UTF-8's backward compatibility with ASCII, its compactness for most real-world text, and its freedom from byte-order (endianness) issues.

The manifesto also contains recommendations for string handling in Windows applications and argues against the use of ‘ANSI codepages’. It states that developers should not have to be concerned with encoding complexities when creating applications that are not specialized in text processing, and it cautions that iterating over Unicode code points is rarely the right primitive for text processing, even though many developers mistakenly treat code points as the successors of ASCII characters.

UTF-8 Everywhere Manifesto

UTF-8: The Preferred Encoding

Among the three Unicode encodings, UTF-8 is the only one that is backward compatible with ASCII, which makes it significantly preferred and widely used on the internet. This backward compatibility is one of the major advantages of UTF-8, as it allows for a seamless transition from older technologies that use ASCII to the more expansive Unicode standard.

Encoding Strings in Python

Python’s string class provides a straightforward way to encode strings into various formats, including UTF-8, UTF-16, and UTF-32. Here’s an example of encoding a string into UTF-8:

# Encoding a string into UTF-8  
unicode_string = '안녕하세요 👋 (hello in Korean!)'  
utf8_encoded = unicode_string.encode('utf-8')  
utf8_encoded_list = list(utf8_encoded)  
print(utf8_encoded_list)  

When encoded into UTF-8, we see a bytes object that represents the string according to the UTF-8 encoding standards. However, if we were to look at the UTF-16 encoding of the same string, we would notice a pattern of zero bytes interspersed with the encoded characters, hinting at the inefficiency of UTF-16 for certain types of text:

# Encoding a string into UTF-16  
utf16_encoded = unicode_string.encode('utf-16')  
utf16_encoded_list = list(utf16_encoded)  
print(utf16_encoded_list)  

Similarly, encoding into UTF-32 would reveal even more wastefulness in terms of space:

# Encoding a string into UTF-32  
utf32_encoded = unicode_string.encode('utf-32')  
utf32_encoded_list = list(utf32_encoded)  
print(utf32_encoded_list)  

The inefficiency of UTF-16 and UTF-32, particularly for ASCII or English characters, reinforces the argument for the widespread adoption of UTF-8 for most applications.
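To make the wasted space visible for plain ASCII text, here is a quick check (a small sketch, not from the original text; note that Python prepends a byte-order mark for 'utf-16' and 'utf-32'):

ascii_text = 'hello'  
for name in ('utf-8', 'utf-16', 'utf-32'):  
    # UTF-16 and UTF-32 pad ASCII characters with zero bytes  
    print(name, list(ascii_text.encode(name)))  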

The Challenge of Encoding for LLMs

Despite the efficiency of UTF-8, using it directly in language models is not without its problems. If we were to use UTF-8 byte streams as tokens, we would be limited to a vocabulary size of only 256 possible tokens, which is very small. This would result in our text being represented as very long sequences of bytes, which is not ideal. Long sequences would make the attention mechanism in Transformers computationally expensive and inefficient, as it would restrict the context length available for token prediction tasks.

Byte Pair Encoding (BPE) Algorithm

To address the challenge of encoding for LLMs, the Byte Pair Encoding (BPE) algorithm is employed. BPE allows us to compress byte sequences significantly. The algorithm works by iteratively finding the most frequent pair of tokens in the text and replacing that pair with a single new token that is added to the vocabulary.

Let’s take a closer look at how the BPE algorithm works with an example:

# Example of Byte Pair Encoding  
original_sequence = ['A', 'A', 'B', 'C', 'A', 'A', 'D', 'A', 'A', 'B', 'C']  
# Step 1: Identify the most frequent pair ('A', 'A') and replace with 'Z'  
step_1_sequence = ['Z', 'B', 'C', 'Z', 'D', 'Z', 'B', 'C']  
# Step 2: Identify the next most frequent pair ('B', 'C') and replace with 'Y'  
final_sequence = ['Z', 'Y', 'Z', 'D', 'Z', 'Y']  

The process is repeated until no further compression is possible or until a desired vocabulary size is reached. The BPE algorithm is an essential component in the tokenization process for LLMs, as it allows for a tunable vocabulary size while retaining the efficiency of UTF-8 encoding.

Hierarchical Transformers

The idea of feeding raw byte streams directly into language models is fascinating and has been explored in research. The MegaByte paper from Meta AI, “MegaByte: Predicting Million-byte Sequences with Multiscale Transformers,” proposes a hierarchical structuring of the Transformer architecture that allows the processing of raw bytes without tokenization, potentially enabling aggressive sequence modeling at scale. However, this concept is still in the experimental phase and has yet to be widely adopted or proven efficient at scale.

Hierarchical Transformer Architecture

Conclusion on BPE and LLM Encoding

In conclusion, while the prospect of tokenization-free sequence modeling is intriguing, current language models rely on compression techniques like the BPE algorithm to manage the balance between vocabulary size and sequence length. These methods allow for efficient encoding while enabling the model to handle a wide variety of text data.

BPE Tokenizer

Continuation of Byte Pair Encoding (BPE) Algorithm

As we delve further into the Byte Pair Encoding algorithm, it becomes evident that the vocabulary size continues to grow with each iteration. We begin with a fixed initial vocabulary size based on our byte sequences but, as the algorithm progresses, we mint new tokens and append them to our vocabulary. This process iteratively compresses the sequence while expanding the vocabulary.

Let's continue with our example. We find the most common pair ('Z', 'Y') in our sequence and replace it with a new token 'X'. After this replacement, we get the following sequence:

# Continuing BPE example with a further replacement  
step_3_sequence = ['X', 'Z', 'D', 'X']  

The original sequence of 11 tokens has now become a sequence of 4 tokens, while our vocabulary has grown from the 4 original symbols (A, B, C, D) to 7 (adding Z, Y, and X). The BPE algorithm has compressed the sequence to roughly a third of its original length at the cost of three new tokens in the vocabulary.

BPE Continued Compression

Example of BPE in Practice

Consider the following string to be encoded using BPE:

aaabdaaabac  

The byte pair ‘aa’ is the most frequent, so we replace it with a new token ‘Z’:

ZabdZabac  

Continuing this process, we end up with a compressed sequence and an expanded vocabulary. This iterative method is how we prepare a training dataset for a language model, and create an algorithm for encoding and decoding arbitrary sequences.

Tokenization in Python

Using Python, we can encode our text into UTF-8, thereby converting it to a stream of bytes. For convenience in manipulation, we can then convert these bytes into integers and create a list. Below is an example of converting a paragraph into UTF-8 encoded bytes and then to a list of integers:

# Convert paragraph text into UTF-8 encoded bytes  
text = "Example paragraph text..."  
utf8_encoded = text.encode('utf-8')

# Convert UTF-8 encoded bytes to a list of integers  
tokens = list(map(int, utf8_encoded))  
print(tokens)  

The length of the paragraph in code points might be 533, but after encoding to UTF-8, we get a length of 616 bytes, or 616 tokens. More complex Unicode characters can expand into multiple bytes, which increases the token count.
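As a quick illustration of this effect (using a hypothetical example string rather than the paragraph above), the code-point length and the UTF-8 byte length can be compared directly:

sample = 'Unicode 👋 안녕'  
print(len(sample))                   # number of Unicode code points  
print(len(sample.encode('utf-8')))   # number of UTF-8 bytes, which is larger  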

Finding the Most Common Byte Pair

To effectively use BPE, we need to identify the most common pair of bytes in our data. Here is a snippet of how we might implement a function to find the most common byte pair in Python:

# Function to get statistics of byte pair frequencies  
def get_stats(ids):  
    counts = {}  
    for pair in zip(ids, ids[1:]): # Iterate over consecutive elements  
        counts[pair] = counts.get(pair, 0) + 1  
    return counts

# Example usage of get_stats function  
tokens = [239, 188, 181, ...]  # This is a shortened example list of tokens  
stats = get_stats(tokens)  
print(stats)  

This function, get_stats, calculates the frequency of each byte pair in the token list. We can then sort these to find the most common pair and proceed with the BPE algorithm.

Byte Pair Frequencies

Implementing BPE Tokenization

Now let's implement the BPE tokenization process. The get_stats function defined above already collects the statistics we need on byte pairs; by iterating over the byte pairs and updating the frequency in counts, we can identify which pairs to merge into new tokens. This is just the first step in the BPE algorithm, which continues until a specified vocabulary size is reached or no further significant compression can be achieved.

The process of BPE not only aids in compressing the dataset but also provides a systematic approach for encoding and decoding sequences that the language model can learn from. It's this iterative refinement that enables language models to efficiently process and understand the vast array of human languages and symbols encoded in text.

Improving Byte Pair Encoding Understanding

To further grasp the inner workings of the Byte Pair Encoding (BPE) algorithm, let's take a closer look at the function that tracks the frequency of byte pairs in a sequence of tokens. This function is key to identifying which byte pairs are the most common and therefore should be merged together in the BPE process.

Here’s the Python code that achieves this:

def get_stats(ids):  
    counts = {}  
    for pair in zip(ids, ids[1:]): # Pythonic way to iterate consecutive elements  
        counts[pair] = counts.get(pair, 0) + 1  
    return counts

tokens = [239, 188, 181, ...]  # This is a shortened example list of tokens  
stats = get_stats(tokens)  
print(stats)  

In this code, get_stats goes through a list of tokens, ids, and counts the occurrences of each consecutive byte pair. The zip function is used to iterate over pairs of consecutive elements in the list.

To display the counts in a more readable format, we can sort the dictionary and print the results as follows:

print(sorted(((v, k) for k, v in stats.items()), reverse=True))  

This snippet sorts the byte pairs according to their frequency in descending order, allowing us to easily identify the most common pairs.

Identifying Frequent Byte Pairs

Let’s examine the actual output of the get_stats function for a token list. By sorting the dictionary of counts, we can see which byte pairs occur most frequently:

# Output from the sorted stats  
[  
    (20, (101, 32)),   
    (15, (240, 159)),   
    (12, (226, 228)),   
    (12, (105, 110)),   
    # ... truncated for brevity  
]  

In this example, the pair (101, 32) occurred 20 times, making it the most frequent byte pair in the tokens list. To understand what characters these numbers represent, we can use Python’s chr function, which returns a string representing a character whose Unicode code point is the integer passed.

print(chr(101), chr(32))  
# Output: e    

Here, chr(101) corresponds to the letter ‘e’ and chr(32) corresponds to a space. This suggests that the combination of ‘e’ followed by a space is particularly common in the text being analyzed, which is a pattern we might expect in English text.

Byte Pairs in Context

Understanding the context in which byte pairs appear is crucial. For instance, the frequent occurrence of the pair (101, 32) tells us that many words in the analyzed text end with the letter ‘e’ before a space, indicating the end of a word. This is a common pattern in English, where many words have an ‘e’ at the end, such as ‘the’, ‘be’, ‘are’, and so on.

The BPE algorithm takes advantage of these patterns to efficiently encode text by merging frequently occurring byte pairs into single tokens. This not only helps in compressing the data but also ensures that the language model can learn from these common patterns.

Byte Pair Encoding in Action

In practice, once we have identified the most common byte pairs, we would replace instances of these pairs in our data with a new token. This process is repeated iteratively, each time identifying and replacing the most common byte pair, until the vocabulary reaches a desired size or no significant compression can be achieved.

This method of tokenization is not just a theoretical exercise; it is used in real-world applications to prepare training datasets for language models. It is an essential step in creating an encoding and decoding mechanism that models can learn and leverage to understand and generate text accurately.

Code Example: Tokenization Process

Here is a comprehensive example of a tokenization process, including the use of the get_stats function and character conversion with chr:

# Define the get_stats function for byte pair frequencies  
def get_stats(ids):  
    counts = {}  
    for pair in zip(ids, ids[1:]): # Iterate consecutive elements  
        counts[pair] = counts.get(pair, 0) + 1  
    return counts

# Obtain token statistics  
tokens = [239, 188, 181, 239, 189, 142, ...] # Example token list  
stats = get_stats(tokens)

# Print sorted byte pair frequencies  
print(sorted(((v, k) for k, v in stats.items()), reverse=True))

# Convert byte pairs to their corresponding characters  
print(chr(101), chr(32))  # Output: e followed by a space  

By executing this code, we would see a list of the most common byte pairs alongside their frequencies, helping us understand which pairs to merge during BPE tokenization. The character conversion using chr gives us insight into what the byte pairs actually represent in human-readable text.

Implementing Token Merging

Having identified (101, 32) as the most common pair, we now want to iterate over the sequence and replace every consecutive occurrence of it with a newly minted token. Because the raw byte tokens occupy IDs 0 through 255, the first new token we create will get the ID 256. In Python, the highest-ranking pair can be read straight out of the stats dictionary:

stats = get_stats(tokens)  
top_pair = max(stats, key=stats.get)  
print(top_pair)  
# (101, 32)  

Calling max on the stats dictionary returns the maximum key, and the key argument tells it how to rank the keys: stats.get returns each pair's count, so we are ranking pairs by their frequency and taking the most frequent one, which is (101, 32) as we saw. Now we need a function that actually merges this pair wherever it occurs.


def merge(ids, pair, idx):  
    # in the list of ints (ids), replace all consecutive occurrences of pair with the new token idx  
    newids = []  
    i = 0  
    while i < len(ids):  
        # if we are not at the very last position AND the pair matches, replace it  
        if i < len(ids) - 1 and ids[i] == pair[0] and ids[i+1] == pair[1]:  
            newids.append(idx)  
            i += 2  
        else:  
            newids.append(ids[i])  
            i += 1  
    return newids

print(merge([5, 6, 6, 7, 9, 1], (6, 7), 99))  
# [5, 6, 99, 9, 1]  

This is one possible implementation of the merge function; there are many ways to write it. It takes a list of token IDs (ids) and the pair we want to replace, and that pair will be replaced with the new index idx. We create a new list and walk through ids sequentially from left to right, checking at each position whether the pair matches the current element and the next one. One slightly tricky condition is the bounds check: we must make sure we are not at the very last position, because otherwise comparing ids[i+1] on the right-most element would run out of bounds. When we find a match, we append the replacement index idx to the new list and increment the position by two, skipping over the entire pair; otherwise we simply copy over the element at the current position and increment by one. The small toy example above shows this in action: in the list [5, 6, 6, 7, 9, 1], the pair (6, 7) is replaced with 99, giving [5, 6, 99, 9, 1].

We can now apply this to our actual use case: take our tokens, find the top pair, and replace it with the new token 256 to get tokens2.

tokens2 = merge(tokens, top_pair, 256)  
print(tokens2)  
print('length:', len(tokens2))  

Running this produces the following (truncated) output:

[239, 188, 181, 239, 189, 142, 239, 189, 137, 239, 189, 131, ..., 84, 104, 256, 118, 101, 114, 121, ...]  
length: 596  

Through the use of the merge function, we can see that a new token, 256, has been introduced to replace the frequent pair (101, 32). The output of the merge function shows the new list of tokens, tokens2, where this replacement has occurred. This process of token merging and vocabulary expansion is fundamental to the BPE algorithm and paves the way for more efficient tokenization in language models.

The critical aspect of this process is to iteratively merge the most frequent pairs, thereby condensing the text into fewer tokens that contain more information per token. This iterative process is the cornerstone of modern tokenization techniques used in large language models, enabling them to manage vast amounts of text while maintaining a workable vocabulary size.

Continued Token Merging and Vocabulary Expansion

Recall that the list previously had a length of 616, and now it has a length of 596, a decrease of exactly 20, which makes sense because there were 20 occurrences of the pair and each merge replaces two elements with one. We can also find plenty of occurrences of the new token 256 in the result. As a final double check, there should be no remaining consecutive occurrence of (101, 32): the original array contains plenty of them, while in the second array there are:

print('length:', len(tokens2))  
print(tokens2)  

no occurrences of (101, 32). So we've successfully merged this single pair, and now we simply iterate: go over the sequence again, find the most common pair, and replace it. We can write a loop that uses these functions to do this repeatedly; the only question is how many merges we want to perform.

Let’s look at some code:

new_token_idx = 256   # the first available ID after the 256 raw byte tokens  
THRESHOLD = 2         # stop merging once the best pair occurs fewer than this many times  

while True:  
    stats = get_stats(tokens2)  
    if not stats:  
        break  
    top_pair = max(stats, key=stats.get)  
    if stats[top_pair] < THRESHOLD:  
        break  
    tokens2 = merge(tokens2, top_pair, new_token_idx)  
    new_token_idx += 1  

In this loop, THRESHOLD is a hyperparameter that determines when to stop merging based on pair frequency; in practice, a more common stopping criterion is simply a target vocabulary size, as we will use below. This choice is crucial since it directly controls the size of the final vocabulary: too small a vocabulary does not compress the sequences much, whereas too large a vocabulary spreads the training data across many rare tokens and inflates the embedding and output layers.

Reflecting on the Tokenization Method

Tokenization is not just a mere preliminary step in text analysis; it’s a determining factor in the performance of LLMs. Different tokenization methods can produce vastly different results, and the choice of method should be aligned with the intended downstream tasks. As you can see from the iterative process we’ve been through, the methodology behind tokenization is anything but trivial. It requires careful consideration and understanding of the language and the model’s needs.

To illustrate further, consider a few points from our exploration: the tokenizer's vocabulary is learned from data rather than hand-designed, merges are applied iteratively and can build on earlier merges, and the stopping criterion directly trades off vocabulary size against sequence length.

Tokenization process

As we dive deeper into the nuances of tokenization, it’s clear that this step is more than just a formality. It affects every aspect of the LLM, from the way it understands text to its performance on various tasks. The transformation of raw text into tokens, and the subsequent processing of these tokens, is a cornerstone of modern natural language processing.

Conclusion

In summary, tokenization is a critical step in the functioning of LLMs. It impacts the model’s ability to efficiently process text and understand its nuances. The methods used for tokenization, such as the BPE algorithm and its iterations, are complex but necessary for the creation of powerful language models. As we have seen, the choices made during tokenization, such as when to stop merging tokens, can have significant effects on the final outcomes. It’s a fascinating and intricate process that underpins much of the success in the field of natural language processing.

Deciding on Vocabulary Size

As we venture deeper into the intricacies of tokenization, we come across a critical hyperparameter: the final vocabulary size. This parameter is pivotal as it determines the balance between the granularity of the tokens and the length of the sequences we will eventually process. It's a delicate balance: a larger vocabulary yields shorter sequences, but it also means individual tokens appear more rarely in the training data and the embedding and output layers grow, which can hinder the model's performance.

When deciding on the final vocabulary size, one must take into account the nature of the language being tokenized and the computational resources at hand. For instance, GPT-4 uses around 100,000 tokens, which has been empirically determined to yield robust performance across a variety of tasks.

Let’s examine a practical example to illustrate the concept of vocabulary size and the process of merging tokens:

# Desired final vocabulary size  
vocab_size = 276  
# Number of merges to reach the desired vocabulary size  
num_merges = vocab_size - 256  
# Copy the original list to preserve the original tokens  
ids = list(tokens)  
# Dictionary to record merges  
merges = {} # (int, int) -> int

# Perform the merges  
for i in range(num_merges):  
    stats = get_stats(ids)  
    pair = max(stats, key=stats.get)  
    idx = 256 + i  
    print(f"Merging {pair} into new token {idx}")  
    ids = merge(ids, pair, idx)  
    merges[pair] = idx  

In this code, we set a final vocabulary size of 276, which means we need to perform 20 merges starting from the 256 tokens representing raw bytes. We iterate through the most common byte pairs, create a new token for each merge, and replace all occurrences of that pair with the new token. The merges dictionary keeps track of these changes, which will be useful for encoding and decoding sequences later on.

Iterative Token Merging

The merging process is not a one-time event but an iterative procedure that builds upon itself. As we merge tokens, the newly created tokens become candidates for subsequent merges. This leads to a hierarchical structure of tokens, akin to a forest rather than a single tree, as each merge connects two “leaves” to form a new “branch.”

For example, the very first merge in our process combines the pair (101, 32) into token 256, and later merges are free to combine previously minted tokens with raw bytes or with each other.

After executing 20 merges, we can observe the evolution of our token list. It’s essential to note that individual tokens like 101 and 32 may still appear independently in the sequence; they only form the merged token when they appear consecutively.

Analyzing Compression Ratio

One of the benefits of tokenization is the compression of the original text data. By examining the compression ratio, we can gauge the efficiency of our tokenization process. In our example, we started with 24,000 bytes and reduced it to 19,000 tokens after 20 merges, achieving a compression ratio of approximately 1.27.

# Starting bytes  
starting_bytes = 24000  
# Tokens after merges  
final_tokens = 19000  
# Calculate compression ratio  
compression_ratio = starting_bytes / final_tokens  
print(f"Compression Ratio: {compression_ratio:.2f}")  

The resulting compression ratio indicates the degree of compactness we’ve achieved with our tokenization process. It’s a straightforward calculation, but it provides valuable insight into the efficiency of our approach.

Tokenizer as a Separate Entity

It is crucial to recognize that the tokenizer is an entirely separate component from the large language model (LLM) itself. The tokenizer has its dedicated training set, which may differ from the training set of the LLM. The tokenizer’s role is to preprocess text data, a task that is performed once before the LLM’s training begins.

Tokenizer Training

The Byte Pair Encoding (BPE) algorithm is employed to train the tokenizer’s vocabulary, and once trained, the tokenizer can perform both encoding and decoding of text data. It acts as a translation layer between the raw text, which consists of a sequence of Unicode code points, and the token sequence.

The following diagrams help visualize the tokenizer’s function and its place within the LLM ecosystem:

Tokenizer Function

Tokenizer Ecosystem

Encoding and Decoding with the Tokenizer

With the tokenizer trained and the merges dictionary established, we can now focus on the encoding and decoding steps. Encoding involves converting raw text into a sequence of tokens, while decoding is the reverse process, transforming a sequence of tokens back into human-readable text.

Encoding Decoding Process

These processes are foundational for interfacing with the LLM, as they allow us to convert between the model’s input/output format and the text data we naturally understand. The next phase of our exploration will delve into the practical implementation of these encoding and decoding operations, demonstrating how they bridge the gap between raw text and the LLM’s tokenized representation.

Understanding Encoding and Decoding

After discussing the intricacies of tokenization and its importance in the realm of language models, we now turn our attention to the practical aspects of encoding and decoding. Encoding is the process of converting raw text into a sequence of tokens, while decoding translates a sequence of tokens back into text. This translation is crucial as the large language model (LLM) operates exclusively on token sequences and does not directly interact with raw text.

To understand this process better, let’s visualize the flow of data:

Here is an example to illustrate the decoding process, which is the reverse of encoding:

# Define the vocabulary for the initial byte-level tokens  
vocab = {idx: bytes([idx]) for idx in range(256)}  
# Apply the merges learned by the BPE algorithm  
# (iterating in insertion order matters: later merges build on earlier ones)  
for (p0, p1), idx in merges.items():  
    vocab[idx] = vocab[p0] + vocab[p1]

# The decoding function converts a sequence of token IDs back to text  
def decode(ids):  
    # Given ids (a list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    text = tokens.decode('utf-8')  
    return text  

This code snippet provides a fundamental decoding function. We start by defining a vocabulary mapping integers to byte objects for the initial token set. Then we apply the merges learned during the BPE training process to this vocabulary. The decode function takes a list of token IDs (ids) and returns the corresponding text string.

Decoding Pitfalls

While the implementation above seems straightforward, it can encounter issues when dealing with certain sequences of token IDs. Let’s delve into a potential problem:

Imagine we try to decode a token sequence that includes the token 128. Since 128 is outside the ASCII range, trying to decode it as a single byte using the UTF-8 standard will result in an error:

print(decode([128]))  
# This will raise a UnicodeDecodeError  

The UnicodeDecodeError occurs because the UTF-8 encoding expects the byte corresponding to the token ID to be part of a valid UTF-8 sequence. If it’s not, the decoding will fail.

Encoding with UTF-8

To better understand this error, we need to examine how UTF-8 encoding works. UTF-8 encodes code points into a sequence of one to four bytes, depending on the value of the code point. For example:

  • Code points U+0000 to U+007F (the ASCII range) are encoded in a single byte.
  • Code points U+0080 to U+07FF are encoded in two bytes.
  • Code points U+0800 to U+FFFF are encoded in three bytes.
  • Code points U+10000 to U+10FFFF are encoded in four bytes.

This structure ensures that UTF-8 is backwards compatible with ASCII but also capable of representing every character in the Unicode standard.
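A quick way to see this variable-width behavior in Python (a standalone illustration):

# Each character below encodes to a different number of UTF-8 bytes  
for ch in ["A", "é", "한", "😀"]:  
    encoded = ch.encode("utf-8")  
    print(ch, list(encoded), f"({len(encoded)} byte(s))")  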

Correcting the Decoding Function

To address the decoding issue mentioned earlier, note that the problem is not the token lookup itself: every token ID maps to some byte sequence in vocab. The problem is that the concatenated byte string may not be valid UTF-8, as with the standalone byte 128. Special-casing ASCII token IDs would not help, because non-ASCII bytes can still appear outside a complete multi-byte sequence. The practical fix is to tell Python how to handle invalid byte sequences when decoding:

def decode(ids):  
    # Given ids (list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    # errors='replace' substitutes invalid byte sequences with the Unicode replacement character  
    text = tokens.decode('utf-8', errors='replace')  
    return text  

With this change, invalid byte sequences no longer raise a UnicodeDecodeError; they are replaced with the Unicode replacement character (�), and the rest of the sequence is decoded normally. The following sections look at the UTF-8 byte structure and this errors parameter in more detail.

Final Thoughts on Encoding and Decoding

Encoding and decoding are fundamental aspects of working with language models. They serve as the bridge between the human-readable text and the token sequences that the model processes. Understanding the potential pitfalls and intricacies of these processes is essential for anyone looking to delve deeper into the world of natural language processing and large language models.

As we wrap up this section, remember that the tokenizer is not only a pre-processing tool but also a vital component that significantly impacts the performance and capabilities of the language model. A well-trained tokenizer can handle multiple languages and various types of data, which in turn allows the LLM to perform effectively on a wide range of tasks.

UTF-8 Encoding Schema

When dealing with text in computing, it’s essential to understand that there’s a specific schema that UTF-8 bytes follow. UTF-8 is a variable-width character encoding used for electronic communication. It’s designed to be backward compatible with ASCII and to avoid the complications of byte-order marks (BOM), which can create issues in text processing.

Understanding UTF-8 Byte Structure

UTF-8 encodes characters into a sequence of bytes. This encoding supports one to four bytes for each character, but it’s not as simple as just splitting the bits of the character into the bytes directly. Instead, UTF-8 uses a specific structure:

  • A single-byte character has the form 0xxxxxxx (the ASCII range).
  • A two-byte sequence starts with 110xxxxx, a three-byte sequence with 1110xxxx, and a four-byte sequence with 11110xxx.
  • Every continuation byte in a multi-byte sequence has the form 10xxxxxx.

The binary representation of the byte is crucial in understanding why certain bytes cannot be decoded without context. For instance, a byte such as 10000000 (128 in decimal) is invalid on its own because it doesn’t conform to the UTF-8 byte structure—it lacks the necessary leading bits that indicate it is part of a multi-byte sequence.
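A small standalone demonstration of this point:

# A lone continuation byte such as 0b10000000 (0x80) is invalid on its own...  
try:  
    bytes([0x80]).decode("utf-8")  
except UnicodeDecodeError as e:  
    print(e)  

# ...whereas continuation bytes inside a complete multi-byte sequence decode fine:  
# 0xC3 0xA9 is the two-byte UTF-8 encoding of 'é' (0xA9 = 0b10101001 is a continuation byte)  
print(bytes([0xC3, 0xA9]).decode("utf-8"))  # é  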

Correcting the Decode Function

To fix issues with decoding byte sequences that don’t conform to UTF-8’s rules, we can use the errors parameter in the bytes.decode() function. By default, errors is set to 'strict', which throws an error if the byte sequence isn’t valid UTF-8. However, there are other strategies we can employ:

  • 'strict' (the default): raise a UnicodeDecodeError on invalid bytes.
  • 'ignore': silently drop the invalid bytes.
  • 'replace': substitute the Unicode replacement character (�) for the invalid bytes.
  • 'backslashreplace': insert a backslashed escape sequence in place of the invalid bytes.

Using 'replace' is particularly useful for ensuring that decoding can proceed even when encountering invalid bytes:

def decode(ids):  
    # Given ids (list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    text = tokens.decode('utf-8', errors='replace')  
    return text  

In the above code, if the byte sequence cannot be decoded, the invalid sections are replaced with the Unicode replacement character, allowing the rest of the sequence to be interpreted correctly.
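As a standalone illustration of these error handlers, independent of our tokenizer:

raw = b"hello \x80 world"  # contains an invalid standalone continuation byte  
print(raw.decode("utf-8", errors="replace"))           # hello � world  
print(raw.decode("utf-8", errors="ignore"))            # hello  world  
print(raw.decode("utf-8", errors="backslashreplace"))  # hello \x80 world  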

Dealing with Decoding Errors

When decoding bytes, it’s important to handle errors gracefully. In Python, the bytes.decode() method offers several strategies for dealing with errors. The default behavior is 'strict', meaning any decoding error will raise a UnicodeDecodeError. However, Python’s flexibility allows developers to choose how to handle these errors by specifying different values for the errors parameter.

Options for Error Handling

For example, if we want to ensure that our decoding process never fails due to unexpected byte sequences, we might choose 'ignore' or 'replace'. Here’s how we could modify our decoding function to use 'replace':

def decode(ids):  
    # Given ids (list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    text = tokens.decode('utf-8', errors='replace')  
    return text  

With this modification, any invalid byte sequences will be replaced with �, thus allowing the rest of the byte sequence to be decoded and the program to continue running.

Decoding a Sequence of Integers

Decoding is not just about handling errors; it’s also about correctly interpreting a sequence of integers as text. Let’s consider a decoder that takes a list of integers, which are indices into a vocabulary, and returns the corresponding string:

vocab = {idx: bytes([idx]) for idx in range(256)}  
for (p0, p1), idx in merges.items():  
    vocab[idx] = vocab[p0] + vocab[p1]

def decode(ids):  
    # Given ids (list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    text = tokens.decode('utf-8', errors='replace')  
    return text  

In this code, we build up a vocab dictionary that maps each index to a byte sequence. The decode function then uses this vocabulary to translate a list of indices (ids) back into a string. If the resulting byte sequence is not valid UTF-8, the 'replace' error handling strategy ensures that the decoding process can continue without interruption.

Handling Invalid UTF-8 Byte Sequences

As we’ve seen, not every byte sequence is valid UTF-8, and invalid sequences can arise during the tokenization process. When a large language model (LLM) predicts tokens that do not fall into valid UTF-8, we encounter decoding problems. The standard practice to address this is to use errors='replace' during decoding. This approach is also found in the code released by OpenAI. If you see a replacement character � in your output, it’s an indication that the LLM’s output was not a valid sequence of tokens.

Let’s revisit the decode function with this in mind:

def decode(ids):  
    # Given ids (list of integers), return a Python string  
    tokens = b''.join(vocab[idx] for idx in ids)  
    text = tokens.decode('utf-8', errors='replace')  
    return text

print(decode([128]))  

The output will show the replacement character for any invalid sequences. This method ensures that we can decode the byte stream even if some parts of it are not valid UTF-8.

Encoding Strings into Tokens

Now let’s go the other way: from strings to tokens. The goal is to implement an encode function that takes a string and returns a list of integers representing the tokens. This is a crucial part of the tokenization process, allowing us to convert raw text into a format that an LLM can process.

The Encoding Process

The encoding process involves converting the text into its UTF-8 byte representation, then using a merge list to combine certain byte pairs according to a predefined set of rules. These rules are determined by how frequently certain byte pairs occur together in the training corpus.

Let’s take a look at the merge list, which is a crucial part of the encoding process:

merges = {  
    (101, 32): 256,  
    (105, 110): 257,  
    (115, 32): 258,  
    (116, 104): 259,  
    # ... (truncated for brevity)  
    (97, 108): 274,  
    (259, 256): 275  
}  

The merges dictionary indicates which byte pairs should be combined into single tokens. The keys are tuples representing byte pairs, and the values are the new token indices that result from merging those byte pairs.

Implementing the encode Function

Here’s an example implementation of the encode function:

def encode(text):  
    # Given a string, return a list of integers (the tokens)  
    tokens = list(text.encode('utf-8'))  
    # Process merges based on the dictionary  
    # ... (implementation details to merge tokens)  
    return tokens  

The function starts by encoding the given text into UTF-8, resulting in a list of byte integers. It then applies the merges according to the dictionary. The merging process should follow the order in which pairs were added to the merges dictionary, as some merges depend on earlier ones.

Let’s discuss how to implement the actual merging logic. We expect to perform multiple merges, so we’ll use a loop that repeatedly searches for the next eligible pair to merge. We can use a function like get_stats to help identify potential merge candidates by counting the occurrences of each byte pair in the token list.

Here’s how we might proceed to find and merge the eligible pairs:

while True:  
    # Count occurrences of each adjacent pair in the current token list  
    stats = get_stats(tokens)  
    if not stats:  
        break  # Fewer than two tokens left, nothing to merge

    # Find the pair that was merged earliest during training (lowest index in merges)  
    pair = min(stats, key=lambda p: merges.get(p, float('inf')))  
    if pair not in merges:  
        break  # No eligible pair found, exit loop

    # Replace all occurrences of the pair with its merged token id  
    tokens = merge(tokens, pair, merges[pair])  

In this loop, get_stats returns a dictionary with byte pairs as keys. We use the min function to find the pair with the lowest index in the merges dictionary. If a pair isn’t in the merges list, it can’t be merged, and we denote this with float('inf'). When no more merges are possible, the loop exits, and the final list of tokens is returned.

Encoding Example

Let’s consider a brief example to illustrate how this encoding might work:

# Sample merge list for demonstration purposes  
merges = {  
    (101, 32): 256,  
    (116, 104): 257,  
    # ... (additional merges)  
}

# Sample text to encode  
sample_text = "hello world"

# Encoding the sample text  
encoded_tokens = encode(sample_text)  
print(encoded_tokens)  

In this example, the encode function would take the string “hello world”, convert it to a list of UTF-8 bytes, and then apply the merge rules to produce the final list of token integers. The print statement would output this list, showing us the tokens that represent “hello world” in the LLM’s vocabulary.

Remember, the actual implementation of the encode function would involve more complexity to handle the merging process properly. It’s an exercise that requires careful thought and consideration of the specific tokenization rules used by the LLM.

Exploring the Encoding Function Implementation

In our journey to understand the tokenization process, we’ve seen how to handle invalid UTF-8 sequences and the basics of encoding strings into tokens. Now, let’s delve into the implementation details of the encode function, which is a cornerstone of the encoding process.

The Merging Logic

When we encode text, we often find ourselves in a situation where there is nothing left to merge because no single pair can be merged anymore. At this point, we must break out of the loop, as shown in this implementation snippet:

def encode(text):  
    # Given a string, return a list of integers (the tokens)  
    tokens = list(text.encode('utf-8'))  

    # Keep merging while at least two tokens remain  
    while len(tokens) >= 2:  
        stats = get_stats(tokens)  

        # Pick the pair that was merged earliest during training  
        pair = min(stats, key=lambda p: merges.get(p, float('inf')))  
        if pair not in merges:  
            break  # No more merges possible, exit loop  

        # Replace all occurrences of the pair with its merged token id  
        tokens = merge(tokens, pair, merges[pair])  

    return tokens  

The loop continues until all possible merges are completed. Each iteration finds the merge pair with the lowest index in the merges table and combines it, replacing all occurrences of the pair with its new token id. This merging process is crucial for reducing the size of the token list and making it more manageable for the LLM.

Special Cases in Encoding

However, we must be vigilant about special cases that might arise during the encoding process. For instance, if we attempt to encode a single character or an empty string, the stats would be empty, causing an error in our loop. To handle this, we can introduce a check to ensure that tokens has at least two elements before proceeding with merges:

if len(tokens) < 2:  
    # If there's nothing to merge, return the tokens as-is  
    return tokens  

This additional check ensures that we don’t attempt to merge when it’s not possible, providing a more robust encode function.

Testing the Encoding Process

It’s also important to validate our encoding implementation with test cases. We should verify that encoding and then decoding a string returns the original text. This is generally true for strings that are valid UTF-8 and have been seen by the tokenizer during training. Here’s a simple test case:

training_text = "The text that we trained the tokenizer on."  
assert decode(encode(training_text)) == training_text  

We can also test on unseen text to ensure that the tokenization process generalizes well. For this purpose, we might use text from a different source, like an excerpt from Wikipedia:

# Example validation text from an external source  
validation_text = "Unicode, formally The Unicode Standard, is a text encoding standard maintained by the Unicode Consortium."  
assert decode(encode(validation_text)) == validation_text  

These tests give us confidence that our encoding and decoding functions work as expected, both for familiar and novel text.

Encoding and Decoding with Large Language Models

Having established the basics of the byte pair encoding algorithm, we’ve learned how to train a tokenizer and create the parameters that define it. These parameters essentially form a “binary forest” over the raw bytes. Once we have the merges table, we can seamlessly encode and decode between raw text and token sequences.

The encoding and decoding process might look something like this:

# Example of encoding and decoding  
original_text = "Example text to encode and decode."  
encoded_tokens = encode(original_text)  
decoded_text = decode(encoded_tokens)

print(f"Original text: {original_text}")  
print(f"Encoded tokens: {encoded_tokens}")  
print(f"Decoded text: {decoded_text}")  

Encoding and Decoding Process

In the output, we would expect to see the encoded tokens as a list of integers and the decoded text to match the original string. The simplicity of this tokenizer setting is only the beginning, as we will soon explore the intricacies of tokenizers used by state-of-the-art large language models.

Diving into GPT Tokenization Details

Let’s take a closer look at the GPT series, specifically the GPT-2 model, released in 2019. The paper titled “Language Models are Unsupervised Multitask Learners” describes the approach taken to tokenization and how it affects language modeling tasks.

In the paper, the authors explore a variety of natural language processing tasks such as question answering, machine translation, and summarization. They demonstrate that language models can begin to learn these tasks without explicit supervision when trained on a large dataset like WebText. The GPT-2 model, with its 1.5 billion parameters, achieves state-of-the-art results in a zero-shot setting on several language modeling datasets.

GPT-2 Paper Abstract

The paper emphasizes the importance of the model’s capacity for zero-shot task transfer and how increasing this capacity improves performance across tasks. This is a testament to the power of effective tokenization and the capability of large language models to generalize across a wide range of language tasks.

In the next section, we will continue to dissect the complexities of tokenization as we examine the tokenizers used by these advanced models. The process becomes more intricate, and we will address each aspect of this “complexification” in detail. Stay tuned as we unravel the tokenization techniques employed by the GPT series and other state-of-the-art language models.

GPT-2’s Approach to Tokenization

The GPT-2 paper, published by OpenAI, brings forward several crucial insights into the tokenization process, which are instrumental in the model’s ability to understand and generate text.

Byte Pair Encoding (BPE)

In Section 2.2, titled “Input Representation,” the GPT-2 paper addresses the fundamental goal of a language model: to compute the probability of any given string and to generate strings accordingly. The paper acknowledges the limitations of current large-scale language models (LMs), which include preprocessing steps such as lower-casing, tokenization, and handling out-of-vocabulary tokens. These steps restrict the variety of strings the model can handle.

To overcome these restrictions, the GPT-2 team adopted the Byte Pair Encoding (BPE) algorithm as a middle ground between character and word-level language modeling. BPE finds a balance by using word level inputs for frequent symbol sequences and character level inputs for less common symbol sequences. This approach allows the model to maintain a rich vocabulary while avoiding a prohibitively large base vocabulary that would result if every possible Unicode symbol were included.

Optimizing the Vocabulary

The paper illustrates that while BPE typically operates on Unicode code points, implementing BPE on a byte level necessitates only a base vocabulary size of 256, as opposed to over 130,000 for a full Unicode implementation. However, applying BPE directly to byte sequences can lead to sub-optimal merges due to its greedy nature. To optimize the vocabulary, the GPT-2 team made modifications to prevent BPE from merging across character categories within any byte sequence, with an exception for spaces. This approach efficiently compresses the vocabulary without overly fragmenting words.

Byte-Level BPE Implementation

GPT-2 Model Architecture

A brief overview of the GPT-2 architecture is provided in the paper, highlighting the model’s hyperparameters across four different sizes:

  • 117M parameters: 12 layers, 768-dimensional embeddings
  • 345M parameters: 24 layers, 1024-dimensional embeddings
  • 762M parameters: 36 layers, 1280-dimensional embeddings
  • 1542M parameters: 48 layers, 1600-dimensional embeddings

These configurations illustrate the scalability of GPT-2’s architecture, with the largest model, GPT-2, having more than an order of magnitude more parameters than its predecessor, GPT.

Language Modeling and Zero-Shot Task Transfer

The GPT-2’s language modeling capabilities are examined through the lens of zero-shot task transfer. This refers to the model’s performance on tasks that it was not explicitly trained to perform. Because the model operates on a byte level and does not require lossy preprocessing or tokenization, it can be evaluated on any dataset, regardless of the language or benchmark used. The paper discusses the use of perplexity as a measure of the model’s performance, which is a reflection of the average negative log probability or entropy.

Tokenization’s Impact on Language Models

The tokenization process has a profound impact on the performance of language models. GPT-2’s sophisticated use of BPE and its commitment to a byte-level approach allow it to generate and understand a wide array of text strings, setting a new standard for flexibility in language modeling.

In the following code block, we can simulate a highly simplified version of the tokenization idea described in the GPT-2 paper. We define a small base vocabulary and a toy function that splits text into words and punctuation and looks each piece up in the vocabulary (a real BPE tokenizer would additionally merge frequent adjacent pairs):

import re

# Define a base vocabulary  
base_vocab = {'dog': 1, 'cat': 2, 'fish': 3, ' ': 4, '.': 5, '!': 6, '?': 7}

# Define a function to simulate tokenization with this vocabulary  
def bpe_merge(text, vocab):  
    # Split the text into words and punctuation  
    pieces = re.findall(r'\w+|[^\w\s]', text)  
    # Map each piece to its vocabulary id (unknown pieces are left as-is)  
    tokens = [vocab.get(piece, piece) for piece in pieces]  
    # A real BPE step would now repeatedly merge the most frequent adjacent pair;  
    # this toy vocabulary contains no pair entries, so no merges occur  
    return tokens

# Simulate tokenization on a string  
test_string = "dog . dog! dog?"  
tokens = bpe_merge(test_string, base_vocab)  
print("Tokens after BPE merge:", tokens)  

Output:

Tokens after BPE merge: [1, 5, 1, 6, 1, 7]  

In this example, we’ve tokenized the string “dog . dog! dog?” using our base vocabulary and a BPE merge function. The output tokens correspond to the vocabulary indices for ‘dog’, ‘.’, ‘!’, and ‘?’. This simplified process gives us an insight into how GPT-2 might tokenize a string, albeit at a much more complex scale.

As we delve deeper into the nuances of tokenization, it becomes evident that the design choices made in this process can have far-reaching implications on a language model’s capability to comprehend and produce language. The GPT-2’s tokenization methodology is a key component of its success and a fascinating subject for those interested in the inner workings of large language models.

Understanding GPT-2’s Byte Pair Encoding

The GPT-2 model introduces an optimized version of the Byte Pair Encoding (BPE) algorithm, which is an essential component in creating the model’s tokenizer. BPE serves as a bridge between word-level and character-level language modeling, allowing for efficient handling of frequent and infrequent symbol sequences.

The Principle of BPE

BPE works by starting with a base vocabulary of individual characters and iteratively combining the most frequent pairs of symbols to form new tokens. This results in a hierarchical vocabulary that can efficiently represent common words and phrases with fewer tokens while still being capable of representing rare words through a sequence of subword units.

Challenges in Traditional BPE Implementations

While traditional BPE implementations often operate on Unicode code points, this method would require a base vocabulary of over 130,000 symbols to cover all Unicode strings. Such a large base vocabulary is impractical for language models that typically use 32,000 to 64,000 token vocabularies. To address this, GPT-2 utilizes a byte-level BPE, reducing the base vocabulary size to just 256.

GPT-2’s Byte-Level BPE Optimization

GPT-2’s implementation of BPE avoids suboptimal merges that could occur due to the greedy nature of the algorithm. For example, the word “dog” might appear in various contexts like “dog,”, “dog!” or “dog?”. Naively applying BPE could lead to many tokens representing “dog” with different punctuation, resulting in inefficient use of the vocabulary.

To circumvent this issue, GPT-2 performs a pre-merge step, enforcing that certain character categories should never be merged together. This rule-based approach ensures that semantics and punctuation do not get conflated, leading to a more optimal allocation of the limited vocabulary slots and model capacity.
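To make this concrete, here is a small sketch using the third-party regex module with a simplified stand-in for GPT-2’s chunking pattern (the actual pattern is shown later in this article). BPE merges are then applied only within each chunk, so “dog” can never merge with the punctuation that follows it:

import regex as re  

# Simplified stand-in for GPT-2's chunking pattern: an optional leading space, then  
# a run of letters, a run of digits, or a run of other (punctuation) characters  
pat = re.compile(r" ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+")  

print(pat.findall("dog. dog! dog?"))  
# ['dog', '.', ' dog', '!', ' dog', '?']  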

Practical Example of Byte Pair Encoding

Consider the following Python code snippet that demonstrates a simplified version of byte pair encoding:

# Define a simple base vocabulary  
base_vocab = {'h': 1, 'e': 2, 'l': 3, 'o': 4, ' ': 5, '.': 6}

# Define a function to simulate BPE merges  
def bpe_merge(text, vocab):  
    # Map each character to its token id  
    tokens = [vocab[char] for char in text]  
    # Perform a single BPE merge pass over adjacent token pairs  
    for i in range(len(tokens) - 1):  
        pair = (tokens[i], tokens[i + 1])  
        # Merge the pair if it has its own entry in the vocabulary  
        if pair in vocab:  
            tokens[i:i + 2] = [vocab[pair]]  
            break  
    return tokens

# Simulate BPE tokenization on a string  
test_string = "hello."  
tokens = bpe_merge(test_string, base_vocab)  
print("Tokens after BPE merge:", tokens)  

Output:

Tokens after BPE merge: [1, 2, 3, 3, 4, 6]  

In this example, “hello.” is tokenized character by character using the base vocabulary. The merge pass is a no-op here because this tiny vocabulary contains no pair entries, but it shows where a BPE merge would slot in; the GPT-2 tokenizer applies the same idea at a much larger scale.

GPT-2’s Tokenizer Implementation Details

GPT-2’s tokenizer is implemented in a file named encoder.py, which, despite its name, handles both encoding and decoding processes.

The Role of Regular Expressions

Regular expressions (regex) play a crucial role in tokenization. GPT-2’s tokenizer uses a complex regex pattern to identify parts of the text that should not be merged during the BPE process. This pattern ensures that the tokenizer adheres to the pre-merge rules set by the developers.

Exploring GPT-2’s Regex Pattern

The regex pattern used in GPT-2’s tokenizer is designed to match sequences of characters that should be tokenized together. It is built with a series of OR conditions, allowing it to categorize different types of characters and prevent undesired merges.

Code Walkthrough of GPT-2’s Regex Pattern

Let’s take a closer look at the regex pattern used in GPT-2’s tokenizer and how it operates. Here is a simplified example to illustrate the functionality:

import regex as re  # the third-party 'regex' module is needed for \p{...} Unicode categories

# Define the regex pattern used in GPT-2's tokenizer  
pattern = re.compile(r'''  
    \p{L}+|           # Match one or more Unicode letters  
    \p{N}+|           # Match one or more Unicode numerals  
    \p{P}+|           # Match one or more Unicode punctuations  
    \p{S}+|           # Match one or more Unicode symbols  
    [^\s\p{L}\p{N}\p{P}\p{S}]+|   # Match any other characters not matched by previous patterns  
    \s+               # Match one or more whitespace characters  
''', re.VERBOSE)

# Test string to tokenize  
test_string = "hello world"

# Find all matches of the pattern in the test string  
matches = pattern.findall(test_string)  
print("Tokenized string:", matches)  

Output:

Tokenized string: ['hello', ' ', 'world']  

In this example, the regex pattern tokenizes the string “hello world” into separate tokens for “hello”, a space, and “world”. This demonstrates the pattern’s ability to distinguish between different categories of characters and tokenize them accordingly.

Tokenization in Practice with GPT-2

The tokenization process in GPT-2 is more than just splitting words and punctuation. The tokenizer must handle a wide range of character categories while respecting the rules that prevent certain merges. This delicate balance is what allows GPT-2 to maintain a rich and versatile vocabulary, capable of handling the nuances of human language.

The tokenization step is critical for the model’s subsequent learning and generation capabilities. By understanding and implementing these principles, GPT-2 achieves a level of flexibility and power unseen in previous language models.

In the next section, we will delve deeper into how GPT-2’s tokenizer is used in practice, including how it encodes and decodes text, and how it interfaces with the model’s architecture for language understanding and generation.

Step-by-Step Tokenization Process

The tokenization process is a critical aspect of preparing data for a language model. It involves converting raw text into a sequence of tokens that the model can understand. Let’s delve into this process with an illustrative example and code snippets.

From Text to Token Sequences

When tokenizing a string like “hello world how are you,” the tokenizer processes each element of the string independently. Each element is converted into a token sequence, and these sequences are then concatenated to form the final tokenized output. Here’s a simplified example of how this works:

def tokenize(text):  
    # Split the text into separate chunks. (This simple split on spaces discards  
    # the spaces themselves; the real GPT-2 tokenizer splits with a regex pattern  
    # that keeps a leading space attached to each chunk.)  
    elements = text.split(' ')  
      
    # Encode each chunk independently  
    token_sequences = [encode(element) for element in elements]  
      
    # Concatenate all token sequences into the final result  
    return sum(token_sequences, [])  

Encoding and Decoding Functions

The encoding function converts a string of text into a list of integer tokens, while the decoding function reverses this process. Below is a Python code example that demonstrates encoding and decoding operations:

def encode(text):  
    # Given a string, return a list of integers (the tokens)  
    tokens = list(text.encode())  
    return tokens

def decode(tokens):  
    # Given a list of tokens, return the string  
    text = bytes(tokens).decode()  
    return text

# Example Usage  
original_text = "hello world"  
encoded_tokens = encode(original_text)  
decoded_text = decode(encoded_tokens)

print("Original Text:", original_text)  
print("Encoded Tokens:", encoded_tokens)  
print("Decoded Text:", decoded_text)

# Output:  
# Original Text: hello world  
# Encoded Tokens: [104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]  
# Decoded Text: hello world  

Tokenizer Illustration

Tokenizer as an Independent Module

It’s crucial to understand that the tokenizer is a completely separate, independent module from the large language model (LLM). It has its own training dataset of text, which can be different from the LLM’s dataset. The tokenizer trains the vocabulary using the Byte Pair Encoding (BPE) algorithm and translates back and forth between raw text and sequences of tokens. The LLM subsequently only sees the tokens and doesn’t directly deal with text.

Token Sequence Concatenation

After tokenization, the resulting token sequences are joined to form a continuous sequence, which is then fed to the LLM. The tokenizer ensures that certain character combinations, such as letters and punctuation, are never merged, respecting the boundaries defined by the regex patterns used during tokenization.

Understanding Unicode Categories

To tokenize text effectively, it’s important to understand the Unicode categories used in regex patterns. These categories help the tokenizer identify and separate different types of characters, such as letters, numbers, punctuation, and more.

Here are some significant Unicode categories and their meanings:

  • \p{L} — Letter: any kind of letter from any language.
  • \p{N} — Number: any kind of numeric character.
  • \p{P} — Punctuation: characters such as periods, commas, and quotation marks.
  • \p{S} — Symbol: math symbols, currency signs, and similar characters.
  • \s — Whitespace: spaces, tabs, and line breaks.

By using these categories, the tokenizer can effectively split text while preserving the integrity of words, numbers, and punctuation.
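For example, using the regex module (a standalone illustration):

import regex as re  

sample = "Hello, world 123!"  
print(re.findall(r"\p{L}+", sample))  # ['Hello', 'world']   letters  
print(re.findall(r"\p{N}+", sample))  # ['123']              numbers  
print(re.findall(r"\p{P}+", sample))  # [',', '!']           punctuation  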

Tokenization of Apostrophes

The handling of apostrophes in tokenization can be tricky. The standard ASCII apostrophe (') is covered by the tokenizer’s hard-coded patterns, but the Unicode apostrophe (’) is not, which leads to inconsistencies. For example, "house's" (with an ASCII apostrophe) might be tokenized differently than "house’s" (with a Unicode apostrophe).

The Importance of Case Sensitivity

In some cases, the tokenizer’s behavior might vary depending on the case of the text. For example, the tokenizer might separate out apostrophes when they’re followed by lowercase letters but not when they’re followed by uppercase letters. This is an important consideration when designing regex patterns for tokenization.

Encoder Class and BPE Merges

The Encoder class is responsible for handling the encoding of text into tokens and decoding tokens back into text. It also manages the BPE ranks, which are used to determine the order of merges during the tokenization process.

Here’s a snippet of the Encoder class and the get_pairs function, which finds pairs of symbols for potential merges:

def get_pairs(word):  
    # Word is represented as a tuple of symbols (symbols being variable-length strings)  
    pairs = set()  
    prev_char = word[0]  
    for char in word[1:]:  
        pairs.add((prev_char, char))  
        prev_char = char  
    return pairs

class Encoder:  
    def __init__(self, encoder, bpe_merges, errors='replace'):  
        self.encoder = encoder  
        self.decoder = {v: k for k, v in self.encoder.items()}  
        self.errors = errors  # How to handle errors in decoding  
        # ... other initializations ...  

By understanding these various aspects of the tokenization process, we gain insight into how language models like GPT-2 prepare and process text for natural language understanding and generation. The tokenization step is not merely a technicality but a foundational aspect of how these models operate and understand language.

Apostrophe Tokenization and Regex Patterns

Handling apostrophes in tokenization can lead to inconsistencies, especially when dealing with uppercase versus lowercase text. The tokenizer’s behavior may vary, separating out apostrophes in a way that feels “extremely gnarly and slightly gross,” but this is a part of how the tokenizer operates.

Regex Patterns for Tokenization

Regex patterns are essential for the tokenizer to identify and separate characters, numbers, and punctuation correctly. Consider the following example using Python’s regex library to illustrate forced splits in token sequences:

import regex as re

gpt2pat = re.compile(r'''...''')  # The actual pattern is omitted for brevity  

The regex patterns enforce the tokenizer to split the text into chunks whenever the category of the character changes, making sure that merges within elements do not occur. This is particularly evident when observing the treatment of spaces in the tokenization process.
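For reference, the chunking pattern hard-coded in GPT-2’s released encoder.py reads as follows; each alternative matches a handful of English contractions, a run of letters, a run of numbers, a run of other characters, or whitespace, most with an optional leading space:

import regex as re  

gpt2pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")  

print(gpt2pat.findall("Hello world, how's it going?"))  
# ['Hello', ' world', ',', ' how', "'s", ' it', ' going', '?']  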

Spaces in Tokenization

In the GPT series of tokenizers, spaces play a significant role and are often preserved as separate elements. For example, the GPT-2 tokenizer keeps each space as an independent token with its own token ID (a single space is token 220 in GPT-2). This deliberate choice implies that additional rules, beyond chunking and applying the BPE algorithm, are enforced to handle spaces during tokenization.

Tokenizer Space Handling

The tokenizer prefers tokens that start with a space followed by a letter or number, which is a consistent pattern. This behavior ensures that common tokens beginning with a single space keep their form even when additional spaces are introduced.

The Encoder Class in GPT Tokenization

A critical component of the tokenization process is the Encoder class, which is responsible for the encoding and decoding of text into tokens. Additionally, the Encoder class manages the BPE merges, determining the order in which token pairs are merged.

The following Python code snippet provides insight into the Encoder class structure and its bpe method:

class Encoder:  
    def __init__(self, encoder, bpe_merges, errors='replace'):  
        # Initializations  
        self.encoder = encoder  
        self.decoder = {v: k for k, v in self.encoder.items()}  
        self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))  
        self.cache = {}  
        # Other attributes and methods

    def bpe(self, token):  
        # BPE merge algorithm implementation  
        if token in self.cache:  
            return self.cache[token]  
        word = tuple(token)  
        pairs = get_pairs(word)

        if not pairs:  
            return token

        # Apply merges based on BPE ranks  
        while True:  
            # Find the pair with the lowest rank  
            bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))  
            if bigram not in self.bpe_ranks:  
                break  
            # Merge the pair and update word and pairs  
            # ...  

It’s important to note that the code provided by OpenAI for the tokenizer is mainly for inference, not training. This means that while we can use the code to tokenize new text using existing BPE merges, the process of training the tokenizer with new text is not covered.

TikToken Library by OpenAI

OpenAI’s official library for tokenization is called TikToken. To use it, one would typically install the package and perform tokenization inference. Below is an example of how to utilize TikToken for tokenization:

import tiktoken

# Initialize the tokenizer (example for GPT-2, which does not merge spaces)  
enc = tiktoken.get_encoding("gpt2")  
print(enc.encode("hello world!!!"))  

TikToken Library Usage

The TikToken library serves for inference purposes, allowing users to obtain tokens for text according to the GPT-2 or GPT-4 tokenizer specifications.

Tokenization Differences between GPT-2 and GPT-4

A key difference in tokenization between GPT-2 and GPT-4 is how spaces are treated. While GPT-2 keeps white spaces unmerged, GPT-4 introduces merges for spaces. This change is evident when observing the tokenization output for both models:

# Example in Python demonstrating the tokenization difference  
for i in range(1, 101):  
    if i % 3 == 0 and i % 5 == 0:  
        print(...)  

In GPT-2, spaces within the example code remain separate tokens, but in GPT-4, these spaces may be merged, affecting the tokenization output.
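A quick way to see this difference, assuming the tiktoken package is installed, is to encode the same whitespace-heavy string with both encodings and compare the token counts:

import tiktoken  

text = "    if i % 3 == 0:"  # leading spaces, as in indented Python code  

gpt2_enc = tiktoken.get_encoding("gpt2")  
gpt4_enc = tiktoken.get_encoding("cl100k_base")  

print(len(gpt2_enc.encode(text)))  # GPT-2 keeps each leading space as its own token  
print(len(gpt4_enc.encode(text)))  # GPT-4 merges the run of spaces, so the count is smaller  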

GPT-4 Tokenization of Spaces

To investigate these differences further, one can delve into the TikToken library’s source code and examine the regular expressions used for chunking text in GPT-4. The modifications in regex patterns reflect the evolution of tokenization strategies employed by OpenAI in their language models.

Exploring the TikToken Library

OpenAI provides a comprehensive suite of tools for tokenization within their TikToken library. The tiktoken directory structure, as shown in the Jupyter Notebook screenshot, includes several Python modules that contribute to the tokenization process. These modules, such as core.py, load.py, and model.py, are critical for understanding and utilizing the tokenization functionality provided by OpenAI.

TikToken Directory Structure

The tiktoken_ext directory contains openai_public.py, which is essential for interfacing with the public tokenization definitions that OpenAI maintains. This module facilitates the inference process, necessary for the implementation of OpenAI’s tokenization algorithms.

GPT-2 Tokenization Functions

The gpt2 function within the openai_public.py module is of particular interest. It lays out the structure for GPT-2’s tokenization, loading the BPE merges (the mergeable ranks), defining the chunking pattern, and registering the <|endoftext|> special token:

def gpt2():  
    # Tokenization constants and function definitions for GPT-2  
    # ...  

This function provides the foundation for GPT-2’s tokenization mechanism and is a critical starting point for those seeking to understand or utilize GPT-2’s tokenizer.

GPT-4 Tokenization Updates

When examining the GPT-4 tokenizer, notable differences become apparent. The cl100k_base function, representative of GPT-4’s tokenization strategy, includes changes in the pattern matching and a different approach to handling white space and numbers:

def cl100k_base():  
    # Tokenization constants and function definitions for GPT-4  
    # ...  

Some of the major changes in GPT-4’s tokenizer include:

  • Case-insensitive matching of apostrophe contractions ('s, 'd, 'm, and so on), so uppercase text is handled consistently.
  • Numbers are only merged in groups of up to three digits, preventing very long numeric tokens.
  • Whitespace handling is changed so that runs of spaces can be merged into single tokens.

These adjustments reflect OpenAI’s commitment to evolving and refining their tokenization strategies. The increase in vocabulary size from approximately 50,000 to 100,000 tokens is also a significant change in GPT-4.
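The chunking pattern registered for cl100k_base in tiktoken_ext/openai_public.py reads roughly as follows; the exact string may vary between versions, so consult the installed package for the authoritative definition:

# Approximate cl100k_base chunking pattern (see tiktoken_ext/openai_public.py)  
pat_str = (  
    r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)"""   # case-insensitive contractions  
    r"""|[^\r\n\p{L}\p{N}]?\p{L}+"""      # letters, optionally preceded by one non-letter/number  
    r"""|\p{N}{1,3}"""                    # numbers, at most three digits at a time  
    r"""| ?[^\s\p{L}\p{N}]+[\r\n]*"""     # other characters, with an optional leading space  
    r"""|\s*[\r\n]+|\s+(?!\S)|\s+"""      # newlines and runs of whitespace  
)  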

Special Tokens and Patterns

OpenAI’s tokenization process includes a variety of special tokens, which are integral to the operation of language models. The pattern definition is crucial for efficient execution, and while the details are complex, they play a pivotal role in the tokenizer’s functionality.

For instance, the definition of special tokens and the corresponding regex patterns for GPT-2 can be observed in the openai_public.py module:

def gpt2():  
    # Special token and pattern definitions for GPT-2  
    # ...  

These definitions ensure that the tokenizer can accurately process and encode various forms of text, including those with special characters or formatting.

Encoder Class Details

Diving deeper into the tokenization process, the Encoder class within the encoder.py module provides the methods necessary for both encoding and decoding text:

class Encoder:  
    def bpe(self, token):  
        # Byte Pair Encoding (BPE) algorithm for a single token  
        # ...

    def encode(self, text):  
        # Convert text string to a list of token integers  
        # ...

    def decode(self, tokens):  
        # Convert a list of token integers back to a text string  
        # ...

# Module-level helper (defined outside the class in encoder.py)  
def get_encoder(model_name, models_dir):  
    # Load the encoder and BPE merges from files  
    # ...  

These methods are critical for converting text to and from the tokenized format used by OpenAI’s language models. The bpe method applies the Byte Pair Encoding algorithm to merge token pairs efficiently, while the encode and decode methods handle the conversion between text and token representations.

Loading Tokenizer Files

To instantiate an Encoder object with the appropriate tokenization rules, two files are loaded: encoder.json and vocab.bpe. The encoder.json file maps tokens to their encoded representations, while vocab.bpe contains the BPE merges:

def get_encoder(model_name, models_dir):  
    with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:  
        encoder = json.load(f)  
    with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding='utf-8') as f:  
        bpe_data = f.read()  
    # ...  
    return Encoder(encoder=encoder, bpe_merges=bpe_merges)  

These two files are analogous to the vocab and merges variables within the tokenization system, enabling the encoding and decoding of text once the tokenizer has been trained.

Understanding Encoding and Decoding

The encoding process involves converting text into a sequence of tokens, while decoding refers to the reverse process. The encode and decode functions within the Encoder class exemplify this process:

class Encoder:  
    # ...  
      
    def encode(self, text):  
        # Convert text to tokens  
        # ...  
      
    def decode(self, tokens):  
        # Convert tokens to text  
        # ...  

These methods are instrumental for interacting with the tokenized representation of text, allowing for both the analysis and generation of language model outputs.

Byte Pair Encoding Mechanisms

OpenAI’s approach to tokenization involves several layers of encoding and decoding, which may seem complex at first glance. In the GPT-2 source code, we find a method named get_pairs used within the Byte Pair Encoding process. This function is instrumental for identifying the pairs of symbols that should be merged during the encoding phase.

def get_pairs(word):  
    # ...  

The byte_encoder and byte_decoder play a crucial role in OpenAI’s tokenizer. Despite appearing to be a minor detail, these components are responsible for additional encoding and decoding layers that work in conjunction with the main tokenizer.

# Byte encoder and decoder mappings (from encoder.py)  
byte_encoder = bytes_to_unicode()  # maps raw byte values (0-255) to printable Unicode characters  
byte_decoder = {v: k for k, v in byte_encoder.items()}  # the reverse mapping  

These mappings are used to convert between bytes and their encoded representations, ensuring that text is appropriately processed before and after the main tokenization steps.
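For example, using the bytes_to_unicode helper defined in encoder.py:

# bytes_to_unicode() gives every possible byte value a printable character stand-in.  
# The space byte (32) maps to 'Ġ', which is why GPT-2 tokens that begin with a space  
# show up as 'Ġword' when you inspect encoder.json.  
byte_encoder = bytes_to_unicode()  
print(byte_encoder[32])                                                  # Ġ  
print(''.join(byte_encoder[b] for b in "hello world".encode("utf-8")))   # helloĠworld  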

Byte Encoding and Decoding Process

The encoding and decoding process is a two-step sequence involving byte operations before the primary encoding or after the decoding:

  1. Byte Encoding: The text is first processed by the byte_encoder, which converts raw text into a byte-encoded format.
  2. Token Encoding: The byte-encoded text is then passed to the tokenizer’s encode function, converting it into a sequence of token integers.

The decoding process happens in reverse:

  1. Token Decoding: A sequence of token integers is converted back into byte-encoded text using the tokenizer’s decode function.
  2. Byte Decoding: The byte-encoded text is finally processed by the byte_decoder to retrieve the original text.

This can be visualized in the following example, where the decode function is used after encoding a text to verify that it matches the original:

text2 = decode(encode(text))  
print(text2 == text)  
# True  

Detailed Look at the Encoder Class

Diving deeper, the Encoder class exposes the inner workings of the tokenization process. The class contains methods for Byte Pair Encoding (bpe), encoding (encode), and decoding (decode). Each method has its specific role in transforming text to and from tokenized forms.

class Encoder:  
    def bpe(self, token):  
        # Implementation of the Byte Pair Encoding algorithm for a single token

    def encode(self, text):  
        # Convert text string to a list of token integers

    def decode(self, tokens):  
        # Convert a list of token integers back to a text string

# Module-level helper in encoder.py  
def get_encoder(model_name, models_dir):  
    # Load the encoder and BPE merges from files  

The bpe method is particularly noteworthy as it merges pairs of tokens based on the BPE algorithm, which OpenAI uses to compress and decompress the text efficiently.

The Role of Special Tokens

Special tokens are a significant aspect of OpenAI’s tokenization system. These tokens serve various functions, such as demarcating sections of data or introducing structure to the stream of tokens. The Encoder class and the associated encoder.json and vocab.bpe files are central to this system.

To obtain the special tokens and the BPE merges, one would download the encoder.json and vocab.bpe files from OpenAI’s public repository:

# To download these two files:  
wget https://openaipublic.blob.core.windows.net/gpt-2/models/1558M/vocab.bpe  
wget https://openaipublic.blob.core.windows.net/gpt-2/models/1558M/encoder.json  

With the files downloaded, the code to load them into the tokenization system would look like this:

import os, json

with open('encoder.json', 'r') as f:  
    encoder = json.load(f)  # <---- ~equivalent to our 'vocab'

with open('vocab.bpe', 'r', encoding='utf-8') as f:  
    bpe_data = f.read()  
    bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]  
# ^---- ~equivalent to our 'merges'  

The encoder.json file maps tokens to their encoded representations, much like a vocabulary, while the vocab.bpe contains the BPE merges necessary for the bpe method to function correctly.

Understanding the Special End-of-Text Token

The GPT-2 tokenizer includes one particular special token, known as the end-of-text token. This token is used to signal the end of a document in the training set. When preparing training data, this special token is inserted between documents, helping the language model to understand that a new, unrelated document follows.

The end-of-text token is represented as the very last token in the vocab mapping and plays a crucial role in the tokenizer’s ability to delineate separate text entries.

# Example: the end-of-text special token in GPT-2's vocabulary  
special_tokens = {  
    '<|endoftext|>': 50256,  # the very last token in the 50,257-token vocabulary  
}  

Special Tokens and Encoding Specifics

Let's explore the handling of special tokens within the GPT tokenizer. Special tokens are not processed through the standard BPE merges. Instead, the tokenizer contains special case instructions to recognize and handle these tokens appropriately, as sketched below:

class Encoder:  
    # ... (previous methods)

    def bpe(self, token):  
        # ... (existing BPE implementation)  
          
        # Special case handling for special tokens  
        if token in self.special_tokens:  
            return token

        # ... (rest of the BPE method)  

In the absence of special handling in the Encoder class, we can look at libraries like tiktoken, implemented in Rust, to see how they manage special tokens. These libraries often contain additional logic for recognizing and encoding special tokens that you can register and add to the vocabulary.

Special Tokens in Tokenization Libraries

The tiktoken library showcases the special handling for registered tokens. Here’s an example from the library’s source code:

// src/lib.rs excerpt from tiktoken library  
fn _encode_native(&self, text: &str, allowed_special: HashSet<&str>) -> (Vec<Rank>, usize) {  
    let special_regex = self._get_tl_special_regex();  
    let regex = self._get_tl_regex();  
    let mut ret = vec![];

    // ... (rest of the encode_native function)  
      
    // Handling special tokens  
    for mat in regex.find_iter(&text[start..end]) {  
        let piece = mat.as_str().to_string();  
        if let Some(token) = self.encoder.get(piece) {  
            // ... (encoding logic)  
        }  
    }  
}  

This code fragment demonstrates how the tiktoken library processes text, looking for special tokens to encode them differently from the usual BPE merges.

Integrating Special Tokens with Language Modeling

Special tokens are instrumental not only in basic language modeling but also in more advanced applications, such as fine-tuning models for specific tasks. For instance, in a chatbot interaction, special tokens can delineate separate messages within a conversation, allowing the model to understand the flow of dialogue.

Conversation Tokens

In the example above, tokens like <|im_start|> and <|im_end|> (where “im” is short for “imaginary monologue”) mark the beginning and end of messages. These structured interactions are crucial for models like GPT-3.5 Turbo, which are fine-tuned for chat applications.
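For example, the chat format used around such models wraps each message in these markers, roughly like this:

<|im_start|>system  
You are a helpful assistant.<|im_end|>  
<|im_start|>user  
Hello there!<|im_end|>  
<|im_start|>assistant  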

Extending Tokenization with Custom Special Tokens

The flexibility to extend tokenization with custom special tokens is a powerful feature. For instance, in the tiktoken library, you can fork the base token set and add new special tokens tailored to your use case.

# Example of extending the tokenizer with custom special tokens  
from tiktoken.load import data_gym_to_mergeable_bpe_ranks, load_tiktoken_bpe

# Define your custom special tokens  
CUSTOM_SPECIAL_TOKENS = {  
    # ... (your special tokens)  
}

# Add custom tokens to the tokenizer  
# ... (code to extend tokenizer with custom tokens)  

Special Token Customization

As the example suggests, you can introduce any arbitrary tokens and the tiktoken library will correctly handle them during tokenization.
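As a concrete sketch, following the pattern shown in tiktoken’s own documentation for extending an encoding (the token name and id below are illustrative):

import tiktoken  

cl100k_base = tiktoken.get_encoding("cl100k_base")  

# Build a new Encoding that reuses cl100k_base's pattern and merges,  
# and registers one additional special token on top of the existing ones  
enc = tiktoken.Encoding(  
    name="cl100k_custom",  # illustrative name  
    pat_str=cl100k_base._pat_str,  
    mergeable_ranks=cl100k_base._mergeable_ranks,  
    special_tokens={  
        **cl100k_base._special_tokens,  
        "<|my_token|>": 100264,  # illustrative special token and id  
    },  
)  

print(enc.encode("<|my_token|> hello", allowed_special="all"))  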

Model Surgery for Adding Special Tokens

When adding special tokens, modifications to the underlying model, often referred to as “model surgery,” are required. The embedding matrix for vocabulary tokens must be expanded to accommodate the new special tokens.

# Pseudo-code for model surgery to add a special token  
vocab_size = len(encoder)  # Current vocabulary size  
new_vocab_size = vocab_size + 1  # Add one for the new special token

# Extend the embedding matrix with an additional row  
# Initialize the new row with small random values  
model.token_embedding_table = nn.Embedding(new_vocab_size, n_embd)  

You must also extend the final layer of the transformer to ensure that the new token is included in the classifier’s projection.
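A minimal PyTorch sketch of this surgery, assuming a GPT-style model that exposes a token_embedding_table and an lm_head (both attribute names are illustrative):

import torch  
import torch.nn as nn  

def add_one_special_token(model, n_embd, vocab_size):  
    # Grow the input embedding by one row, keeping the trained rows  
    old_emb = model.token_embedding_table          # nn.Embedding(vocab_size, n_embd)  
    new_emb = nn.Embedding(vocab_size + 1, n_embd)  
    with torch.no_grad():  
        new_emb.weight[:vocab_size] = old_emb.weight  
        # the extra row keeps its small random initialization  
    model.token_embedding_table = new_emb  

    # Grow the output projection by one row as well, so the new token can be predicted  
    old_head = model.lm_head                       # nn.Linear(n_embd, vocab_size, bias=False)  
    new_head = nn.Linear(n_embd, vocab_size + 1, bias=False)  
    with torch.no_grad():  
        new_head.weight[:vocab_size] = old_head.weight  
    model.lm_head = new_head  
    return model  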

Building Your Own GPT-4 Tokenizer

Armed with the knowledge of how tokenization works, including the handling of special tokens and the necessary model modifications, you can build your own GPT-4 tokenizer. To assist with this, educational resources and code repositories are available, such as the minbpe repository.

# Using the minbpe library  
from minbpe import BasicTokenizer

tokenizer = BasicTokenizer()  
tokenizer.train(open("tests/taylorswift.txt").read(), vocab_size=512)  # learn the merges first  
tokens = tokenizer.encode("Your text here")  

minbpe Repository

The minbpe project provides a minimal, clean implementation of the byte-level BPE algorithm commonly utilized in LLM tokenization, serving as a starting point for customized tokenizer development.

Exercise for Rewriting minbpe for Learners

For those interested in a hands-on learning experience, an exercise has been created to rewrite the minbpe library. This exercise breaks down the task into manageable parts, guiding learners through the process of understanding and implementing BPE tokenization.

# Exercise instructions from the minbpe repository  
- Add a small amount of code to write out the GPT-4 vocab...  
- Change the handling of special tokens inside encode...  
- Create exercise to rewrite minbpe for learners...  

The exercise, available in the repository’s exercise.md file, is an excellent opportunity to deepen your understanding of the tokenization process and how it integrates with LLMs.

Exercise Instructions

By completing this exercise, you’ll be well-equipped to customize and enhance tokenization for your language models, whether for research or practical applications.

Recovering Raw Merges and Byte Shuffle in GPT-4 Tokenizer

When delving into the intricacies of the GPT-4 tokenizer, you will encounter certain complexities. One such challenge is the recovery of raw merges from the tokenizer. Although it’s straightforward to retrieve what’s termed as vocab, the process of decoding the raw merges, stored under enc._mergeable_ranks, is less intuitive. Fortunately, the minbpe library provides a recover_merges function in minbpe/gpt4.py, which can transform these ranks back into raw merges. The function operates under the principle that storing only the parent nodes and their rank suffices, thus discarding the details of child merges.

Another quirk of the GPT-4 tokenizer is its permutation of raw bytes. This permutation is encapsulated within the first 256 elements of the mergeable ranks, allowing a relatively simple recovery of the byte shuffle:

byte_shuffle = {i: enc._mergeable_ranks[bytes([i])] for i in range(256)}  

Incorporating this shuffle into both your encode and decode functions is crucial. Hints on implementation can be found in the minbpe/gpt4.py file.
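A minimal sketch of how the shuffle slots into encode and decode, following minbpe/gpt4.py (the helper names here are illustrative):

# byte_shuffle maps raw byte -> shuffled byte; invert it for decoding  
inverse_byte_shuffle = {v: k for k, v in byte_shuffle.items()}  

def shuffle_bytes(text_bytes):  
    # applied to the UTF-8 bytes before running BPE in encode  
    return bytes(byte_shuffle[b] for b in text_bytes)  

def unshuffle_bytes(shuffled_bytes):  
    # applied to the recovered bytes after BPE in decode  
    return bytes(inverse_byte_shuffle[b] for b in shuffled_bytes)  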

Adding Special Tokens

An optional yet intricate feature you might want to implement is the ability to handle special tokens. This allows your tokenizer to match the output of libraries like tiktoken even when special tokens are present:

import tiktoken  
enc = tiktoken.get_encoding("cl100k_base")  
# Special tokens must be explicitly allowed when encoding  
ids = enc.encode("<|endoftext|>hello world", allowed_special="all")  

Building a GPT-4 Tokenizer: A Step-by-Step Guide

Constructing a tokenizer akin to GPT-4’s can be broken down into four progressive steps:

  1. Write a BasicTokenizer class:
    • Implement core functions: train, encode, and decode.
    • Train your tokenizer on a text of your choice, such as tests/taylorswift.txt, and evaluate the merged tokens.
  2. Transition to a RegexTokenizer:
    • Utilize a regex pattern that splits text into categories like numbers, letters, and punctuation.
    • Retrain your tokenizer and compare the results, ensuring no tokens span across categories.

Tokenizer Training Visualization

Reference the minbpe repository for guidance, and if you find yourself stuck, consult the clean and understandable code within.

Implementing the BasicTokenizer

Here’s a glimpse into how the BasicTokenizer class might be structured:

class BasicTokenizer(Tokenizer):  
    def train(self, text, vocab_size, verbose=False):  
        assert vocab_size >= 256  
        num_merges = vocab_size - 256

        # start from the raw UTF-8 bytes of the training text  
        ids = list(text.encode("utf-8"))  # integers in 0..255

        # iteratively merge the most frequent pair into a new token  
        merges = {}  # (int, int) -> int  
        vocab = {idx: bytes([idx]) for idx in range(256)}  # int -> bytes  
        for i in range(num_merges):  
            # count up the number of times every consecutive pair appears  
            stats = get_stats(ids)  
            # find the pair with the highest count  
            pair = max(stats, key=stats.get)  
            # mint a new token: assign it the next available id  
            idx = 256 + i  
            # replace all occurrences of pair in ids with idx  
            ids = merge(ids, pair, idx)  
            # save the merge  
            merges[pair] = idx  
            vocab[idx] = vocab[pair[0]] + vocab[pair[1]]  
            # prints  
            if verbose:  
                print(f"Merged {pair} into token {idx}")

        # save class variables used later by encode and decode  
        self.merges = merges  
        self.vocab = vocab  

Visualizing Token Vocabulary

By training your tokenizer, you can visualize the token vocabulary. For instance, training on the Taylor Swift Wikipedia page generates a vocabulary where the first 256 tokens are raw bytes, followed by merges, such as combining two spaces into a single token (token 256 in GPT-4). This visualization aids in comparing and understanding the tokenization process.

# For reference, GPT-2's vocabulary: 256 raw byte tokens + 50,000 merges + 1 special token  
len(encoder)  
50257

# Inspecting your trained vocabulary, merge by merge  
vocab[256]  # ' ' + ' ' -> '  ' (two spaces), the same first merge GPT-4 learned for its token 256  
vocab[257]  # 'e' + 'l' -> 'el'  
# ... and so on  

The listing above illustrates the kind of result you can replicate with your implementation and compare against GPT-4's merges. It's important to note that the training set influences the tokenizer's merges; including Python code, for example, changes how whitespace gets tokenized.

Exploring SentencePiece: An Alternative Tokenization Method

Moving beyond GPT-4's tokenizer, let's examine the sentencepiece library, a popular choice for handling both the training and inference of BPE tokenizers efficiently. A short summary of how it differs from tiktoken follows below; first, the basic usage:

import sentencepiece as spm

# Example of using sentencepiece  
spm.SentencePieceTrainer.train('...')  

SentencePiece Tokenization

The approach of sentencepiece is distinct in that it operates directly on code points, which may offer advantages in handling a variety of languages and scripts. However, some may find the utf-8 byte approach used by tiktoken to be cleaner and more straightforward.

By following the above steps and utilizing the provided resources, you’ll be well on your way to building a tokenizer that closely resembles the one powering GPT-4. Whether for academic curiosity or practical application, understanding the tokenization process is a cornerstone of working with LLMs.

Understanding SentencePiece’s Approach

The distinction between tiktoken and sentencepiece is subtle yet significant. While tiktoken encodes text to UTF-8 bytes before performing BPE, sentencepiece operates directly on Unicode code points. This difference affects how rare code points are handled during tokenization.

The SentencePiece Method

sentencepiece considers the entire spectrum of Unicode code points during its BPE process. If a code point is too rare, as determined by the character_coverage hyperparameter, it is either mapped to the UNK (unknown) token or, when byte_fallback is enabled, encoded into its UTF-8 bytes, which are then tokenized. Byte fallback adds dedicated byte tokens to the vocabulary for handling these rare code points.

In short: tiktoken runs BPE directly on UTF-8 bytes, while sentencepiece runs BPE on code points and falls back to bytes only for rare ones.

Using SentencePiece

Below is an example of how one might import and use the sentencepiece library:

import sentencepiece as spm

# Example code to write a toy text file with random content  
with open('toy.txt', 'w', encoding='utf-8') as f:  
    f.write("Some random text for tokenizer training.")  

sentencepiece is known for its efficiency in both training and inference of BPE tokenizers. Its use in projects like the Llama and Mistral series highlights its viability for large-scale language modeling tasks.

SentencePiece Model Training

Training a sentencepiece model involves a multitude of options and configurations. This level of customization is due to its aim to accommodate a wide variety of use cases over time, which also means it has accumulated historical baggage.

For instance, when setting up training options for a sentencepiece model, one might encounter configurations like:

import sentencepiece as spm

options = {  
    # input specification  
    'input': 'toy.txt',  
    'model_prefix': 'spm',  
    'vocab_size': 8000,  
    'character_coverage': 0.9995,  
    # ... other options  
}

spm.SentencePieceTrainer.train(**options)  

The character_coverage option, for example, determines how the model handles rare code points: code points outside the coverage threshold are either mapped to the UNK token or, when byte_fallback is enabled, encoded into raw bytes.

Diving into SentencePiece Configuration

When you delve into the sentencepiece configurations, you find options that were relevant during its early days of development. These include settings for splitting by number, whitespace, and treating whitespace as a suffix or allowing pieces that only contain whitespace. Many of these options may not be necessary for modern Large Language Models (LLMs), which tend to treat text data as raw as possible.

Some of the configurations you might encounter include split_by_number, split_by_whitespace, treat_whitespace_as_suffix, and allow_whitespace_only_pieces, most of which are disabled or left at their defaults for modern LLM tokenizers.

SentencePiece Training Options

The training options for spm_train can be quite extensive. These options can be listed using spm_train --help. However, not all of these options are relevant to every use case. For LLMs, many of these pre-tokenization and normalization settings are often turned off to preserve the raw data.

SentencePiece Protobuf Configuration

The SentencePiece model’s configuration is encapsulated in a protobuf file, sentencepiece_model.proto. This file outlines all the options available for training, including the aforementioned settings.

Training a SentencePiece Model

Let’s consider an example where we train a sentencepiece model with settings that are believed to be similar to those used for training the Llama 2 tokenizer:

import os  
import sentencepiece as spm

options = {  
    # input specification  
    'input': 'toy.txt',  
    'model_prefix': 'spm',  
    'vocab_size': 8000,  
    'character_coverage': 0.9995,  
    # ... other options, potentially mirroring Llama 2 settings  
}

spm.SentencePieceTrainer.train(**options)  

The input option specifies the text file containing the data for training. The model_prefix option determines the prefix for the output model files, and vocab_size sets the number of tokens in the vocabulary.

SentencePiece and Modern LLMs

Modern LLMs approach tokenization differently from earlier NLP applications. Normalization and pre-processing rules, once prevalent, are now often bypassed in favor of retaining the original text structure. The idea is to let the model learn from data that is as close to its natural form as possible, avoiding any unnecessary transformations.

Tokenization in LLMs

Tokenization is a critical step in preparing text for LLMs. The choice between tiktoken and sentencepiece can influence how a model handles different languages, scripts, and rare code points. For those who prefer a cleaner approach, tiktoken may be the preferred method, but sentencepiece offers a robust and efficient alternative, especially for handling diverse and large datasets.

By understanding the nuances of these tokenization methods and their configurations, practitioners can better customize their tokenizers to suit the specific needs of their models and datasets.

SentencePiece Merge Rules and Special Tokens

When it comes to fine-tuning the tokenization process, SentencePiece provides a set of merge rules that can be applied. These rules affect how the tokenizer splits text into tokens, particularly around digits, whitespace, and Unicode script boundaries (options such as split_digits, split_by_whitespace, split_by_number, and split_by_unicode_script).

In addition to these merge rules, SentencePiece allows the specification of special tokens, which play significant roles during tokenization and model training. These typically include an unknown token (<unk>), beginning-of-sentence (<s>) and end-of-sentence (</s>) markers, and an optional padding token.

Here is an example of how these special tokens and merge rules can be set up in SentencePiece:

import os  
import sentencepiece as spm

options = {  
    # Input specification (consistent with the toy corpus and model used below)  
    'input': 'toy.txt',  
    'model_prefix': 'tok400',  
    'vocab_size': 400,  
      
    # Special tokens  
    'unk_id': 0,  # the UNK token MUST exist  
    'bos_id': 1,  # Beginning Of Sentence token  
    'eos_id': 2,  # End Of Sentence token  
    'pad_id': -1,  # Padding token, set to -1 to turn off  
      
    # Merge rules  
    'split_digits': True,  
    'split_by_unicode_script': True,  
    'split_by_whitespace': True,  
    'split_by_number': True,  
    'max_sentencepiece_length': 16,  
      
    # System settings  
    'num_threads': os.cpu_count(),  # Use all system resources  
}

# Train the SentencePiece model with the specified options  
spm.SentencePieceTrainer.train(**options)  

Note that in the example above, pad_id is set to -1, indicating that no padding token is used in this configuration.

After training, you can expect the SentencePiece model to produce a vocabulary that starts with the special tokens, followed by the byte tokens, then the merged tokens, and finally the individual code point tokens. This ordering reflects how SentencePiece lays out its vocabulary internally.

Examining the Trained Vocabulary

To understand the outcome of the training process, let’s examine the vocabulary generated by SentencePiece. The following code snippet loads the trained model and lists the vocabulary, showing the association between tokens and their IDs:

# Load the trained SentencePiece model  
sp = spm.SentencePieceProcessor()  
sp.load('tok400.model')

# Retrieve the vocabulary  
vocab = [[sp.id_to_piece(idx), idx] for idx in range(sp.get_piece_size())]  

The vocab list will contain tuples of tokens and their corresponding IDs, starting with special tokens like <unk>, <s>, and </s>, followed by byte tokens which may include tokens like <0x00>, <0x01>, etc., representing individual bytes as tokens.

For example, the output might look something like this:

[['<unk>', 0],  
 ['<s>', 1],  
 ['</s>', 2],  
 ['<0x00>', 3],  
 ['<0x01>', 4],  
 ...  
 ['n', 259],  
 ['▁', 260],  
 ['t', 261],  
 ['in', 262],  
 ...  
 ['w', 380],  
 ['y', 381],  
 ...  
]  

Notice that after the special and byte tokens, the vocabulary includes individual tokens for common substrings and characters encountered during training, such as 'n', '▁', and 't'. These tokens are the result of SentencePiece's tokenization process, which aims to efficiently encode the training text.

Configuration for Large Language Models

For large language models like Llama 2, the SentencePiece tokenizer might employ a configuration that includes a set of rules tailored to handle the complexities of diverse linguistic data. Here’s an example of such a configuration:

options = {  
    # Normalization and pre-tokenization handling  
    'normalization_rule_name': 'identity',  # Turn off normalization  
    'remove_extra_whitespaces': False,  
    'input_sentence_size': 2000000000,  # Max number of training sentences  
    'max_sentence_length': 4192,  # Max number of bytes per sentence  
    'seed_sentencepiece_size': 1000000,  
    'shuffle_input_sentence': True,  
      
    # Rare word treatment  
    'character_coverage': 0.99995,  
    'byte_fallback': True,  # Use byte-level fallback for rare words  
      
    # Merge rules and special tokens (defined earlier)  
    ...  
      
    # System settings (defined earlier)  
    ...  
}

spm.SentencePieceTrainer.train(**options)  

By setting normalization_rule_name to 'identity', we avoid any text normalization, thus preserving the original text structure. The byte_fallback option is set to True, allowing the model to use byte-level tokens for rare words that are not covered by the vocabulary. These configurations ensure that the tokenizer is prepared to handle the vast and varied data typically used to train large language models.

Throughout this exploration of SentencePiece and its configurations, it becomes clear that tokenization is not a one-size-fits-all solution. The tokenizer must be carefully configured to meet the specific needs of the language model and the characteristics of the dataset it will be trained on. By tuning the options and understanding the underlying mechanisms, practitioners can significantly impact the performance and capabilities of their language models.

Understanding SentencePiece Tokenization

SentencePiece is a library that supports the tokenization of texts into subword units. One of the key features of SentencePiece is its ability to tokenize directly at the raw text level, which means it can encode text into subword units without depending on language-specific rules or preprocessing. This makes it a versatile tool for tokenizing text in various languages, including those with complex scripts or non-concatenative morphology.

Let’s delve into the specifics of how SentencePiece handles tokenization:

import os  
import sentencepiece as spm

# SentencePiece enables an end-to-end system that does not rely on language-specific pre/post-processing.  
# Here's how to train a SentencePiece model with settings that are similar to those used for Llama 2.  
options = {  
    'input': 'path/to/input/text',  # Specify the input text file  
    'model_prefix': 'spm_model',    # The prefix for the output model and vocabulary files  
    'vocab_size': 32000,            # The size of the final vocabulary  
    'model_type': 'bpe',            # We use BPE (Byte Pair Encoding) for subword tokenization

    # Special tokens  
    'unk_id': 0,                    # The token ID for unknown words  
    'bos_id': 1,                    # The token ID for the beginning of a sentence  
    'eos_id': 2,                    # The token ID for the end of a sentence  
    'pad_id': -1,                   # Padding token, set to -1 to disable

    # Merge rules  
    'split_by_whitespace': True,    # Split tokens by whitespace  
    'split_by_number': True,        # Split tokens at number boundaries  
    'max_sentencepiece_length': 16, # Maximum length of the sentence pieces

    # System settings  
    'num_threads': os.cpu_count(),  # Utilize all available system resources  
}

spm.SentencePieceTrainer.train(**options)  

This configuration allows SentencePiece to tokenize text into subword units called sentence pieces. It can handle a wide range of scripts and languages since it operates on Unicode code points and can fall back to UTF-8 bytes for rare code points.

Token Encoding and Decoding

After training the SentencePiece model, we can use it to encode text into token IDs and decode these IDs back into text. This is a crucial step in preparing data for training language models.

# Load the trained SentencePiece model  
sp = spm.SentencePieceProcessor()  
sp.load('spm_model.model')

# Example of encoding text into token IDs  
text_to_encode = "Hello, SentencePiece!"  
encoded_ids = sp.encode(text_to_encode, out_type=int)  
print(encoded_ids)  
# Output: A list of token IDs

# Example of decoding token IDs back into text  
decoded_text = sp.decode(encoded_ids)  
print(decoded_text)  
# Output: "Hello, SentencePiece!"  

Byte Fallback Mechanism

When SentencePiece encounters code points not seen during training, it can fall back to byte-level representations. This is particularly useful for handling rare characters or scripts that were not included in the training data.

To illustrate, let’s consider a string with Korean characters:

# Encoding a string with unseen Korean characters using byte fallback  
korean_string = "안녕하세요"  
encoded_korean = sp.encode(korean_string, out_type=int)  
print(encoded_korean)  
# Output: Token IDs representing the UTF-8 bytes of the Korean string

# Decoding the token IDs back into the original string  
decoded_korean = sp.decode(encoded_korean)  
print(decoded_korean)  
# Output: "안녕하세요"  

The byte fallback mechanism ensures that even if the model has not been trained on specific characters, it can still process and generate them, albeit less efficiently than for characters within its trained vocabulary.
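
To see the fallback directly, you can ask the processor for string pieces instead of ids; the exact output depends on your training data, but unseen characters show up as <0x..> byte tokens:

# Inspect the pieces rather than the ids (illustrative output)  
pieces = sp.encode("안녕하세요", out_type=str)  
print(pieces)  
# e.g. ['▁', '<0xEC>', '<0x95>', '<0x88>', ...]  each unseen character decomposed into its UTF-8 bytes  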

Exploring the Vocabulary

By examining the vocabulary of a trained SentencePiece model, we can gain insights into how the tokenizer has segmented the training text.

# Retrieve the vocabulary from the trained model  
vocab = [[sp.id_to_piece(idx), idx] for idx in range(sp.get_piece_size())]

# Print a subset of the vocabulary  
print(vocab[:20])  
# Output: A list of tuples containing tokens and their corresponding IDs  

The vocabulary list will include special tokens like <unk>, <s>, and </s>, as well as subword units and individual characters. This reflects the hierarchy of tokens that SentencePiece creates based on their frequency and utility in representing the training data.

Advantages of Using Byte Fallback

The byte fallback option in SentencePiece is especially beneficial for language models, as it prevents the tokenizer from mapping unrecognized or rare characters to a single unknown token, which could hinder the model’s ability to understand and generate diverse text. Instead, with byte fallback enabled, the model can represent a wide range of characters, giving it a more nuanced understanding of different languages and scripts.

Setting Up SentencePiece for Optimal Tokenization

To configure SentencePiece for optimal performance, we must carefully set the merge rules and special tokens to suit the language model’s needs. Here is an example configuration with a focus on handling rare words and maximizing the tokenizer’s effectiveness:

import os  
import sentencepiece as spm

options = {  
    'input': 'path/to/input/text',  
    'model_prefix': 'spm_optimal',  
    'vocab_size': 32000,  
    'model_type': 'bpe',

    # Rare word treatment  
    'character_coverage': 0.99995,  
    'byte_fallback': True,  
      
    # Merge rules  
    'split_digits': True,  
    'split_by_unicode_script': True,  
    'split_by_whitespace': True,  
    'split_by_number': True,  
    'max_sentencepiece_length': 16,  
    'add_dummy_prefix': True,  
    'allow_whitespace_only_pieces': True,

    # Special tokens  
    'unk_id': 0,  
    'bos_id': 1,  
    'eos_id': 2,  
    'pad_id': -1,

    # System settings  
    'num_threads': os.cpu_count(),  
}

spm.SentencePieceTrainer.train(**options)  

Through careful configuration and understanding of SentencePiece’s tokenization mechanics, we can significantly enhance the capabilities of language models, enabling them to process and generate text across a wide array of languages and scripts with greater accuracy and fluency.

Incorporating Unseen Characters with Byte Fallback

When dealing with text that includes characters not present in the training data, SentencePiece’s byte fallback mechanism ensures these characters can be incorporated into the model. This inclusion is critical for maintaining the versatility of language models, especially when encountering uncommon scripts or emojis.

Here’s an example of how SentencePiece encodes unknown or rare code points:

# Encoding text with unknown or rare code points  
example_text = "Here is a rare character: 𐐷"  
ids = sp.encode(example_text, out_type=int)  
print(ids)  
# Output: A list of token IDs, where unseen characters are represented as byte-level tokens  

When decoding token IDs, SentencePiece handles spaces in a particular way, which can be observed in the following code snippet:

# Decoding token IDs back into text, noting how spaces are handled  
decoded_text = sp.decode(ids)  
print(decoded_text)  
# Output: The original text with spaces correctly placed  

Handling Spaces and Dummy Prefixes

SentencePiece has a unique way of dealing with spaces: it replaces them with the visible metasymbol ▁ (U+2581), which may seem odd at first. This keeps whitespace explicit in the vocabulary and makes tokenized sequences easier to inspect visually, while still decoding back to ordinary spaces.

To illustrate the handling of spaces and the addition of dummy prefixes, consider the following code snippet:

# Understanding the handling of spaces in SentencePiece  
encoded_text = sp.encode("Hello world", out_type=str)  
print(encoded_text)  
# Output: ['▁', 'H', 'e', 'l', 'l', 'o', '▁', 'w', 'o', 'r', 'l', 'd']

# Inspecting the vocabulary and special tokens  
vocab = [[sp.id_to_piece(idx), idx] for idx in range(sp.get_piece_size())]  
print(vocab[:10])  
# Output: A list of the first 10 vocabulary tokens and their IDs  

In the encoded output, you can observe that the space character is represented by the ▁ metasymbol. Closely related to whitespace handling is the add_dummy_prefix option, which prepends a space to the text so that a word at the start of the input is tokenized the same way as the same word appearing mid-sentence.

# Adding dummy prefix to standardize token IDs for words  
options = {  
    'add_dummy_prefix': True,  
    # Other options omitted for brevity  
}  
spm.SentencePieceTrainer.train(**options)  

SentencePiece Tokenization Visualization

By adding a dummy prefix, the tokenizer treats “world” and “ world” similarly, assigning them the same token ID, which assists the language model in recognizing these as the same concept.
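
A quick way to convince yourself, using the trained processor from above (the exact pieces depend on the model, but the leading ▁ behaves the same in both cases):

# The dummy prefix makes a sentence-initial word look like a mid-sentence word  
print(sp.encode("world", out_type=str))    # e.g. ['▁world'] or, for a tiny vocab, ['▁w', 'o', 'r', 'l', 'd']  
print(sp.encode("a world", out_type=str))  # the same '▁'-prefixed pieces appear for "world" here too  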

SentencePiece Configuration Details

The configuration of SentencePiece involves several parameters that affect how the text is tokenized. Here’s an example of a more detailed configuration that includes system settings and special tokens:

# Example SentencePiece configuration with detailed options  
import os  
import sentencepiece as spm

options = {  
    'model_prefix': 'tok400',  
    'vocab_size': 400,  
    # Special tokens and system settings  
    'unk_id': 0,  # the UNK token MUST exist  
    'bos_id': 1,  # the others are optional, set to -1 to turn off  
    'eos_id': 2,  
    'pad_id': -1,  
    'num_threads': os.cpu_count(),  # use all system resources  
    # Other options omitted for brevity  
}

spm.SentencePieceTrainer.train(**options)  

SentencePiece Tokenization Configuration

Protocol Buffers in SentencePiece

SentencePiece uses protocol buffers to define its tokenization model’s settings. Understanding these settings can help replicate the tokenization behavior of large language models like Llama 2.

Here’s an example of how one might extract tokenization settings from the raw protocol buffer representation:

# Extracting tokenization settings from protocol buffer representation  
normalizer_spec = {  
    'name': 'nmt_nfkc',  # Normalization rule name  
    # Other fields omitted for brevity  
}

trainer_spec = {  
    'input': 'path/to/input/text',  
    'model_prefix': 'tok400',  
    'vocab_size': 400,  
    # Other fields omitted for brevity  
}

# Example of printing token IDs and corresponding tokens  
ids = sp.encode("Some example text", out_type=int)  
print([sp.id_to_piece(idx) for idx in ids])  
# Output: A list of tokens corresponding to the token IDs  

The configuration details found in the protocol buffer include various parameters, such as normalization rules and merge settings, which can be crucial for creating a tokenizer that aligns closely with a specific model’s expectations.
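
If you want to inspect these settings programmatically, the Python package ships generated protobuf bindings (module name assumed here to be sentencepiece_model_pb2, as bundled with recent sentencepiece releases):

from sentencepiece import sentencepiece_model_pb2 as model_pb2

m = model_pb2.ModelProto()  
with open('tok400.model', 'rb') as f:  
    m.ParseFromString(f.read())

# The trainer and normalizer specs record every option the model was trained with  
print(m.trainer_spec)  
print(m.normalizer_spec)  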

Final Notes on SentencePiece Tokenization

In summary, SentencePiece is a powerful tool for tokenization with some quirks and a learning curve, especially when it comes to its documentation and protocol buffer specifications. However, its efficiency, flexibility, and byte fallback mechanism make it popular in the industry. By utilizing SentencePiece, one can train language models that are robust and capable of handling a diverse range of text inputs.

Understanding Vocab Size in Model Architecture

As we continue to explore the intricacies of tokenization and its impact on language model performance, it’s important to revisit the issue of setting the vocabulary size. This parameter can significantly influence how well a model learns and performs. Recall the model architecture from our previous discussions where we constructed a GPT-like model from scratch. The file we worked on demonstrated the foundational structure for our language model.

Key Model Parameters

Let’s delve into some of the key parameters we defined in our previous session:

batch_size = 64  # number of independent sequences processed in parallel  
block_size = 256  # maximum context length for predictions  
max_iters = 5000  # maximum number of training iterations  
eval_interval = 500  # interval for evaluation  
learning_rate = 3e-4  # learning rate for optimization  
device = 'cuda' if torch.cuda.is_available() else 'cpu'  # device configuration  
eval_iters = 200  # number of iterations for evaluation  
n_embd = 384  # embedding dimension  
n_head = 6  # number of heads in multi-head attention  
n_layer = 6  # number of layers in the model  
dropout = 0.2  # dropout rate  

These parameters, including the block_size, n_embd, n_head, n_layer, and dropout, set the stage for our model’s capabilities and limitations.

Vocabulary Mapping and Data Splitting

In the script, we created a mapping from characters to integers and vice versa, allowing us to convert strings to integer sequences and back:

import torch

# Setting the seed for reproducibility  
torch.manual_seed(1337)

# Read in the text data  
with open('input.txt', 'r', encoding='utf-8') as f:  
    text = f.read()

# Create a list of unique characters  
chars = sorted(list(set(text)))  
vocab_size = len(chars)  # Vocabulary size

# Mapping from characters to integers and back  
stoi = { ch:i for i, ch in enumerate(chars) }  
itos = { i:ch for i, ch in enumerate(chars) }

# Encoder and decoder functions  
encode = lambda s: [stoi[c] for c in s]  
decode = lambda l: ''.join([itos[i] for i in l])

# Train and test data split  
data = torch.tensor(encode(text), dtype=torch.long)  
n = int(0.9 * len(data))  # 90% for training, 10% for validation  
train_data = data[:n]  
val_data = data[n:]  

Data Loading for Training

The data loading function generates batches of input and target pairs for the model:

def get_batch(split):  
    # Choose the data split  
    data = train_data if split == 'train' else val_data  
    ix = torch.randint(len(data) - block_size, (batch_size,))  
      
    # Create input and target tensors  
    x = torch.stack([data[i:i+block_size] for i in ix])  
    y = torch.stack([data[i+1:i+block_size+1] for i in ix])  
      
    # Move tensors to the configured device  
    x, y = x.to(device), y.to(device)  
    return x, y  

Loss Estimation Function

The estimate_loss function calculates the average loss over a number of iterations, which is useful for monitoring the model’s performance:

@torch.no_grad()  
def estimate_loss():  
    out = {}  
    model.eval()  # Switch to evaluation mode  
    for split in ['train', 'val']:  
        losses = torch.zeros(eval_iters)  
        for k in range(eval_iters):  
            x, y = get_batch(split)  
            logits, loss = model(x, y)  
            losses[k] = loss.item()  
        out[split] = losses.mean()  
    model.train()  # Switch back to training mode  
    return out  

The Role of Vocab Size in the Model

Vocab size is a critical parameter in our language model. It defines the number of unique tokens that the model can recognize. This size directly impacts the dimensions of the embedding layers and the final linear layer that produces the logits for the next token in the sequence:

class GPTLanguageModel(nn.Module):  
    def __init__(self):  
        super().__init__()  
        self.token_embedding_table = nn.Embedding(vocab_size, n_embd)  
        self.position_embedding_table = nn.Embedding(block_size, n_embd)  
        self.blocks = nn.Sequential(*[Block(n_embd, n_head=n_head) for _ in range(n_layer)])  
        self.ln_f = nn.LayerNorm(n_embd)  # Final layer norm  
        self.lm_head = nn.Linear(n_embd, vocab_size)  # Linear layer to produce logits  
          
        # Initialize weights  
        self.apply(self._init_weights)

    def _init_weights(self, module):  
        # Custom weight initialization for linear and embedding layers  
        if isinstance(module, nn.Linear):  
            torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)  
            if module.bias is not None:  
                torch.nn.init.zeros_(module.bias)  
        elif isinstance(module, nn.Embedding):  
            torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)

    def forward(self, idx, targets=None):  
        B, T = idx.shape  
        tok_emb = self.token_embedding_table(idx)  # Token embeddings  
        pos_emb = self.position_embedding_table(torch.arange(T, device=device))  # Positional embeddings  
        x = tok_emb + pos_emb  # Combine token and positional embeddings  
        x = self.blocks(x)  # Pass through transformer blocks  
        x = self.ln_f(x)  # Apply final layer normalization  
        logits = self.lm_head(x)  # Obtain logits  
          
        # Calculate loss if targets are provided  
        if targets is not None:  
            B, T, C = logits.shape  
            logits = logits.view(B * T, C)  
            targets = targets.view(B * T)  
            loss = F.cross_entropy(logits, targets)  
        else:  
            loss = None  
              
        return logits, loss  

In the model definition, vocab_size is used to initialize the token embedding table and the linear layer (lm_head) that maps the final transformed embeddings to logits. The choice of vocab_size thus has a direct bearing on the model’s capacity to represent and predict tokens.

The Importance of Weight Initialization

Proper weight initialization is crucial for model training. We use a normal distribution with a mean of 0 and a standard deviation of 0.02 to initialize the weights of the linear and embedding layers. This step ensures that our model starts with weights that are neither too large nor too small, promoting effective learning during training.


In summary, the vocab_size is a pivotal setting in the construction of a language model, influencing both the architecture and performance. It comes into play in critical components of the model, such as the token embedding table and the final linear layer. The proper selection and management of vocab_size are essential for training efficient and accurate language models.

The Limits of Vocab Size Expansion

As we push the boundaries of language modeling, one might wonder why we can’t simply increase the vocabulary size indefinitely. Indeed, a larger vocabulary allows for a richer representation of language nuances. However, there are practical constraints that limit this expansion:

  1. Computational Complexity: Each token added to the vocabulary requires additional computation in the final linear layer of the transformer. The token embedding table grows, and so does the number of dot products the model must compute to produce the probabilities for the next token. This can make the lm_head layer significantly more computationally expensive (a rough cost sketch follows after this list).

  2. Parameter Saturation: More parameters do not always equate to better performance, especially if they are undertrained. If the vocabulary is too large, individual tokens may appear infrequently in the training data, leading to less robust representations for those tokens.

  3. Sequence Length Consideration: A larger vocabulary typically means larger tokens, potentially compressing more information into fewer tokens. While this can be beneficial for representing more text within the same sequence length, it may also mean that the model has less granularity to work with, making it harder to process the information effectively.
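
To put the computational-cost point in perspective, here is a rough back-of-the-envelope sketch; the embedding width and the larger vocabulary sizes are purely illustrative (50,257 is GPT-2's actual vocabulary size):

# Parameter count of the output projection (lm_head) alone, for a fixed embedding width  
n_embd = 768  # illustrative embedding dimension  
for vocab_size in (50_257, 100_000, 500_000):  
    params = n_embd * vocab_size  
    print(f"vocab {vocab_size:>7,} -> {params / 1e6:7.1f}M lm_head weights")  

Every extra row also adds one more n_embd-sized dot product and one more softmax entry per position at inference time.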

Modifying the Model Architecture

When designing the vocabulary size, it’s important to note that it’s mostly an empirical hyperparameter. Current state-of-the-art architectures often have vocab sizes in the tens or hundreds of thousands. But what happens when we want to extend a pre-trained model’s vocabulary size? This is a common scenario when fine-tuning models for specific tasks or adding functionality through special tokens.

Extending the vocabulary size involves a mild form of model surgery: the token embedding table gains new rows and the final lm_head projection gains new outputs, both typically initialized with small random values, exactly as described earlier for special tokens.

This process is often accompanied by freezing the base model’s parameters and only training the new ones. It’s a selective training process that allows for the introduction of new tokens without a complete overhaul of the model.
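
A minimal sketch of this selective training in PyTorch, assuming the model from the surgery example above and that old_vocab_size tokens existed before the new ones were added:

# Freeze the base model; only the embedding and output-projection weights stay trainable  
for p in model.parameters():  
    p.requires_grad = False  
model.token_embedding_table.weight.requires_grad = True  
model.lm_head.weight.requires_grad = True

def zero_old_token_grads(old_vocab_size):  
    # PyTorch has no per-row requires_grad, so after loss.backward() we simply zero  
    # the gradient rows that belong to the original vocabulary before optimizer.step()  
    model.token_embedding_table.weight.grad[:old_vocab_size] = 0  
    model.lm_head.weight.grad[:old_vocab_size] = 0  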

Innovative Applications of Extended Vocabularies

The expansion of a model’s vocabulary can go beyond just adding special tokens for new functionalities. To illustrate this, let’s look at an innovative approach detailed in a research paper titled “Learning to Compress Prompts with Gist Tokens.”

Gist Tokens: Compressing Prompts for Efficiency

The paper introduces the concept of gisting, a method to compress lengthy prompts into a small set of “gist” tokens, allowing for more efficient computation and storage. The core finding is that a long prompt can be distilled into a handful of new tokens whose embeddings are trained while the rest of the model stays frozen, with the compressed prompt behaving almost identically to the original.

This technique showcases an entire design space for applying new tokens to enhance language models’ efficiency and functionality.

Reducing Prompt Costs with Gisting

Consider a language model like ChatGPT, which uses prompts to guide its responses. Encoding the same long prompt repeatedly can be costly in terms of computation and memory. Gisting offers a solution by compressing these prompts into shorter, reusable gist tokens, thereby reducing the computational load.

Practical Implementation of Gist Tokens

The practical implementation of gist tokens involves training the model to recognize these tokens as stand-ins for longer prompts, effectively compressing the information. This is achieved through a distillation process: the model is kept frozen, and only the embeddings of the new gist tokens are optimized so that conditioning on the gist tokens reproduces the model’s behavior when conditioning on the full prompt.

Gist Tokens Implementation

This approach is part of a broader category of parameter-efficient fine-tuning techniques, where the main body of the model remains static, and only the embeddings of the new tokens are adjusted.

In essence, the exploration of new tokens and the modification of vocabulary size opens up a spectrum of opportunities for enhancing language models. From adding special tokens for distinct functionalities to employing compression techniques like gisting, the potential for innovation is vast. These methods not only improve the efficiency of models but also enable them to adapt to a wider range of applications and tasks.

Exploring the Design Space of Tokenization

As we delve deeper into the intricacies of tokenization, it’s essential to recognize that there is a significant design space worth exploring in the future. The momentum is building around the construction of transformers that can process not just textual modalities but also images, videos, audio, and more. The key question is how to feed these various modalities into a transformer and whether the architecture needs fundamental changes to handle this multimodal input.

Recent convergences suggest that there’s no need to alter the core transformer architecture. Instead, the focus is on tokenizing the inputs—regardless of their modality—and treating them as text tokens. This approach has been depicted in an early paper that illustrates how an image can be chunked into integers, which then become the tokens representing the image. These tokens can be either hard tokens, which are discrete, or soft tokens, which go through bottlenecks similar to those in autoencoders.

Multimodal Tokenization

The paper from OpenAI titled “Sora” is particularly groundbreaking, as it has inspired many people with what is possible in the realm of tokenization. The paper, dated February 15, 2024, provides an overview of Sora and its capabilities in video generation.

Sora Overview

Sora has pioneered a method for turning visual data into a unified representation that facilitates large-scale training of generative models. It also provides a qualitative evaluation of the model’s capabilities and limitations.

Tokenizing Visual Patches with Sora

Sora introduces an innovative approach to tokenization, where elements of text are transformed into visual patches. This technique allows for chunking videos into tokens with their own vocabularies. These tokens can then be processed either with autoregressive models or combined with soft tokens in fusion models. It’s an active area of research that extends beyond the scope of this video but underscores the potential of tokenization in multimodal applications.

Visual Patches

Reflection on LLM Weirdness Due to Tokenization

Returning to our initial discussion of tokenization, let’s revisit some of the points raised at the beginning of this video and understand the root of certain quirks in LLMs:

Non-English Tokenization

Addressing Arithmetic Difficulties with Tokenization

The tokenization of numbers presents a unique challenge for LLMs, particularly in arithmetic tasks. The arbitrary representation of numbers based on the tokenization process complicates the model’s ability to perform operations that humans consider simple. A blog post titled “Integer Tokenization is Insane” provides an excellent systematic breakdown of how this issue manifests in LLMs, showcasing the need for a better understanding of number tokenization.

Arithmetic Tokenization

In conclusion, tokenization is a multifaceted process that goes beyond mere text processing. Its implications affect the performance of LLMs across various tasks and languages. As we continue to explore the design space of tokenization, it’s clear that innovations in this area will significantly influence the evolution of language models and their capabilities in handling multimodal data.

The Arbitrariness of Number Tokenization

Continuing our exploration into tokenization, we encounter the perplexing case of numeric tokenization. It’s quite revealing that for four-digit numbers, tokenization does not follow a consistent pattern. Depending on the number, it may be tokenized as a single unit or split into two tokens. This can be a 1-3, 2-2, or a 3-1 combination of digits; a bewildering variety that adds unnecessary complexity for language models attempting arithmetic operations.

Number Tokenization

The randomness of this approach means that models sometimes encounter a token representing all four digits, and other times must piece the number together from smaller tokens. This inconsistency is a significant challenge for models like GPT-2 when dealing with numerical data. Meta’s Llama 2, for instance, addresses it by configuring the tokenizer to split numbers into individual digit tokens (the split_digits option seen earlier), which improves arithmetic performance.

The difficulties with tokenization extend beyond numbers. When it comes to coding in languages like Python, GPT-2’s struggles can be partly attributed to its inefficient encoding of indentation: every single space becomes its own token, drastically reducing how much code fits into the context window. GPT-4’s tokenizer addresses this by merging runs of whitespace.
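
You can see the whitespace difference directly with tiktoken; the exact token counts depend on the string, but the gap is consistent:

import tiktoken

gpt2 = tiktoken.get_encoding("gpt2")  
gpt4 = tiktoken.get_encoding("cl100k_base")

snippet = "    if i % 3 == 0:\n        print('fizz')"  
print(len(gpt2.encode(snippet)))  # GPT-2: each leading space tends to become its own token  
print(len(gpt4.encode(snippet)))  # GPT-4: runs of spaces merge into single tokens, so far fewer overall  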

The Special Token Conundrum

Special tokens in language models introduce another layer of complexity. For instance, GPT-4-based interfaces can behave unexpectedly when user input contains the literal string `<|endoftext|>`, because a tokenizer configured to recognize special tokens may treat it as the end-of-text marker rather than as ordinary text.
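
tiktoken makes the risk explicit: by default, encode() refuses to process special-token strings found in ordinary text, and you must opt in to one behavior or the other:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# enc.encode("hello <|endoftext|>")  # raises an error by default (disallowed_special="all")  
print(enc.encode("hello <|endoftext|>", disallowed_special=()))               # treated as plain text  
print(enc.encode("hello <|endoftext|>", allowed_special={"<|endoftext|>"}))   # mapped to the special token id  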

Managing Unstable Tokens in Encoding

When diving into the internals of a tokenizer, we’re faced with the challenge of handling unstable tokens: byte sequences at the end of the input that don’t yet map cleanly onto a final token, which can cause unexpected behavior or inefficiencies in completion settings. Consider the following snippet, adapted from the Rust implementation inside tiktoken:

fn _increase_last_piece_token_len(&self, tokens: Vec<Rank>, mut last_piece_token_len: usize) -> (Vec<Rank>, usize) {  
    // ... (body omitted)  
    debug_assert!(last_piece_token_len <= tokens.len());  
    (tokens, last_piece_token_len)  
}

fn _encode_unstable_native(&self, text: &str, allowed_special: &HashSet<&str>) -> (Vec<Rank>, HashSet<Vec<Rank>>) {  
    let (tokens, last_piece_token_len) = self._encode_native(text, allowed_special);  
    if last_piece_token_len == 0 {  
        // If last_piece_token_len is zero, the last token was a special token and we have  
        // no unstable bytes  
        return (tokens, HashSet::new());  
    }  
    let (mut tokens, last_piece_token_len) = self._increase_last_piece_token_len(tokens, last_piece_token_len);  
    let unstable_bytes = self._decode_native(&tokens[tokens.len() - last_piece_token_len..]);  
    tokens.truncate(tokens.len() - last_piece_token_len);  
    // TODO: we should try harder to find additional stable tokens  
    // This would reduce the amount of retokenising when determining completions  
    // Refer to the logic in an older version of this file  
    let mut completions = HashSet::new();  
    if unstable_bytes.is_empty() {  
        return (tokens, completions);  
    }  
    // This is the easy bit. Just find all single tokens that start with unstable_bytes  
    ...  

The function _encode_unstable_native is tasked with identifying and processing these unstable tokens. It’s evident that this part of tokenization is not widely documented, yet it plays a crucial role in ensuring the tokenizer’s robustness.

The Encoding Process for Unstable Tokens

The handling of unstable tokens requires a delicate balance between performance and accuracy. The code snippet below outlines a method for encoding unstable tokens by searching for single tokens that start with the bytes of the unstable token:

fn _encode_unstable_native(  
    // Separating this from the loop below helps with performance in a common case.  
    let mut point = self  
        .sorted_token_bytes  
        .partition_point(|x| x.as_slice() <= unstable_bytes.as_slice());  
    while point < self.sorted_token_bytes.len()  
        && self.sorted_token_bytes[point].starts_with(&unstable_bytes)  
    {  
        completions.insert(vec![  
            self.encoder[self.sorted_token_bytes[point].as_slice()]  
        ]);  
        point += 1;  
    }  
    // Now apply even more brute force. At every (other) possible position for the straddling  
    // token, concatenate additional bytes from that token (if any) to unstable_bytes,  
    // and re-tokenise the whole thing and see what we get.  
    ...  

The approach involves iterating through potential tokens and employing a brute force strategy to concatenate bytes, attempting to stabilize the unstable sequences. This process is a testament to the complexity of tokenization and the lengths developers must go to encode text accurately.

The Case of Unstable Tokens and Completion APIs

When considering completion APIs, the goal is to go beyond simple token predictions. Rather than appending a token directly after a partial list, the aim is to consider a variety of tokens that, when re-tokenized, would yield high probability outcomes. This complex interplay is crucial for allowing the model to add individual characters or tokens that maintain the contextual integrity of the text:

fn _encode_unstable_native(  
    let mut reencoded = byte_pair_encode(  
        &unstable_bytes[1..unstable_bytes.len() - last_decoded.1],  
        &self.encoder,  
    );  
    reencoded.extend(byte_pair_encode(  
        &unstable_bytes[unstable_bytes.len() - last_decoded.1..],  
        &self.encoder,  
    ));  
    completions.insert(reencoded);  
}

(tokens, completions)  

This function illustrates the process of re-encoding parts of an unstable token using the byte-pair encoding technique to generate potential completions. It is a sophisticated approach, emphasizing the intricacy of tokenization in language models.

Unpacking the Mystery of SolidGoldMagikarp

Among the many peculiarities of tokenization is the case of SolidGoldMagikarp. This term, which might seem nonsensical at first, actually sheds light on the enigmatic behavior of tokens within language models. An investigation into token embedding clusters revealed a set of unusual tokens that appeared semantically irrelevant or downright bizarre. These tokens, including SolidGoldMagikarp, were identified through their proximity to the centroid of their respective clusters in the embedding space.

SolidGoldMagikarp and Token Clusters

Further probing into these tokens’ behaviors yielded even more curious results. When prompted to repeat these odd tokens, language models like GPT-3 exhibited a range of evasive and hallucinatory responses, often refusing or failing to reproduce the tokens accurately. This phenomenon underscores the unpredictable nature of tokenization and its impact on language model outputs.

The Influence of Token Clustering

The phenomenon of token clustering provides a fascinating perspective on how language models organize and interpret tokens. By examining the embedding representations, researchers have discovered clusters of tokens that defy expectations, featuring seemingly random and unrelated tokens. This exploration into token behavior and the resulting clusters offers a glimpse into the intricate workings of language models and the unforeseen consequences of their tokenization processes.

Token Clustering

Understanding these clusters and the tokens within them continues to be a challenging yet critical aspect of demystifying language models. As we delve deeper into the world of tokenization, the intricacies of these models become more apparent, and the quest to unravel their complexities grows ever more compelling.
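
A hedged sketch of the centroid-proximity probe described above, assuming access to a token embedding matrix of shape (vocab_size, n_embd), for example the token_embedding_table of a model like the one built later in this article:

import torch

emb = model.token_embedding_table.weight.detach()  # (vocab_size, n_embd), assumed available  
centroid = emb.mean(dim=0)  
distances = (emb - centroid).norm(dim=1)  
closest = torch.topk(distances, k=20, largest=False).indices  # candidate anomalous token ids  
print(closest.tolist())  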

Unraveling the Curious Case of SolidGoldMagikarp

As we delve deeper into the enigmatic behavior of language models, we encounter a perplexing scenario. Researchers noticed that when they asked the model to repeat certain strings, such as “SolidGoldMagikarp”, its behavior became erratic. Instead of simply echoing the phrase, the language model might respond with an evasive comment or generate hallucinatory completions that bear little resemblance to the original string. This unexpected behavior raises questions about the underlying mechanisms of tokenization and how it can lead to such outcomes.

SolidGoldMagikarp and Token Clusters

The plot thickens when examining the token SolidGoldMagikarp itself. Researchers have found that language models, when prompted with this token, exhibit a range of behaviors, from evasion and hallucinated completions to insults and bizarre humor.

The models seem to break down when faced with these simple strings, and researchers have documented numerous tokens that exhibit similar behaviors, not just SolidGoldMagikarp.

The Mystery Behind Anomalous Tokens

The GPT tokenization process, which involves scraping web content, has resulted in a set of 50,257 tokens used by GPT-2 and GPT-3 models. However, the training text for GPT models is more curated, possibly excluding many of the sources where these anomalous tokens originate. These tokens, which may have had little involvement in training, cause the model to behave evasively or erratically when encountered. They tend to cluster near the centroid in the embedding space, although the reason for this remains unclear.

The non-determinism observed at temperature zero could be attributed to floating-point errors during forward propagation. When the model “doesn’t know what to do” with a token, it may lead to maximum uncertainty, making logits for multiple completions close together and hence more prone to floating-point errors—a known but rare issue within GPT models.

The Impact of Tokenization on Model Behavior

Understanding the connection between tokenization and model behavior is crucial. Consider the case of the Reddit user SolidGoldMagikarp. It is hypothesized that the tokenization dataset, rich with Reddit data, frequently mentioned this user. As a result, the tokenizer may have created a dedicated token for this specific Reddit user within the vocabulary. When the model encounters such tokens during prediction, it may generate unexpected results due to the discrepancy between the tokenization and training datasets.

Investigating the Origins of Anomalous Tokens

In an attempt to understand these tokens better, researchers took to the internet and even asked ChatGPT for explanations. The responses were often puzzling, with the model providing definitions unrelated to the tokens in question. For example, when asked about SolidGoldMagikarp, ChatGPT might respond with an explanation of the word “distribute” instead of addressing the actual token.

To further investigate, researchers created a set of prompt templates and used GPT-3 davinci-instruct-beta with a temperature setting of zero. This approach was intended to simplify the task for the model, but it led to even more peculiar behavior. The mysterious tokens seemed unspeakable, with the model incapable of repeating them and responding in various strange ways.

The Intriguing Behavior of GPT Models with Anomalous Tokens

The kinds of responses elicited by these tokens range from evasive refusals and hallucinated definitions to downright nonsensical or even insulting output.

The complexity of tokenization and its impact on language models become evident when considering these anomalies. As models encounter tokens that are not well-represented in their training data, their responses become unpredictable, sometimes even contravening safety guidelines and model alignment principles.

GPT Behavior with Anomalous Tokens

In summary, tokenization is not just a technical detail but a fundamental aspect that significantly influences the performance and behavior of language models. As researchers continue to explore these phenomena, our understanding of tokenization’s role in the functioning of large language models will undoubtedly deepen.

The Tokenization Dataset and Training Discrepancies

The peculiar behavior of language models in relation to certain tokens, such as SolidGoldMagikarp, can be traced back to inconsistencies between the tokenization dataset and the actual training data. A string like SolidGoldMagikarp may have been common enough in the tokenizer’s training data to earn its own token, yet never appear in the language model’s training set. This disconnection leaves tokens that are never activated or updated during training, resulting in what could be likened to unallocated memory in a computer program.

When these untrained tokens are encountered at test time, they behave unpredictably—like extracting an untrained vector from the embedding table. This vector, in turn, feeds into the transformer model, leading to undefined behavior. Such anomalies confirm that the model is operating out of its learned distribution, and its responses to these out-of-sample tokens can be erratic and unexpected.

Prompt Generation and Anomalous Tokens

Researchers have ventured into the depths of language models to fish for these anomalous tokens, hoping to uncover a pattern or explanation. The findings include hallucinatory completions and outright evasion when the models are asked to repeat or define these tokens:

Hallucinatory Completions and Evasion

These behaviors raise questions about the tokenization process and its influence on model outputs, especially when the tokens in question have not been well-represented or are entirely absent from the training data.

The Heart of LLM Weirdness: Tokenization

Tokenization is a critical factor in many of the peculiarities observed in language models. A variety of issues can be traced back to the tokenization process: weak performance on non-English text, shaky arithmetic, wasteful encoding of Python whitespace, and odd reactions to special-token strings all stem from how text is split into tokens.

As the development of language models has progressed, the intricacies of their tokenization mechanisms have also evolved. For instance, with GPT-2, the size of the context window increased significantly. The original 512-token window in GPT-1 expanded to an impressive 1024-token window in GPT-2, allowing the model to handle much longer dependencies in text.

Delving into GPT-2’s Tokenization Code

To understand how GPT-2’s tokenization works in practice, let’s explore the code that encapsulates its tokenization process. The tokenization relies on a combination of a vocabulary and a set of merge operations defined in vocab.bpe and encoder.json files, respectively.

Downloading Vocabulary and Encoder Files

First, we need to acquire the vocabulary and encoder files used by GPT-2:

# To download the vocab.bpe and encoder.json files for GPT-2:  
wget https://openaipublic.blob.core.windows.net/gpt-2/models/1558M/vocab.bpe  
wget https://openaipublic.blob.core.windows.net/gpt-2/models/1558M/encoder.json  

With these files in hand, we can proceed to load and process them using Python:

import os, json

# Load the vocabulary  
with open('encoder.json', 'r') as f:  
    encoder = json.load(f)  # Equivalent to our 'vocab'

# Load the Byte Pair Encoding (BPE) merges  
with open('vocab.bpe', 'r', encoding='utf-8') as f:  
    bpe_data = f.read()  
    bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]  
    # Equivalent to our 'merges'  
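
A quick sanity check on what was just loaded; the counts are properties of GPT-2’s released files:

print(len(encoder))              # 50257 = 256 byte tokens + 50,000 merges + 1 special token  
print(len(bpe_merges))           # 50000 merge rules  
print(encoder['<|endoftext|>'])  # 50256, GPT-2's single special token  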

Understanding the Encoding Process

The encoding process in GPT-2 is more advanced than the character-level encoding we previously discussed. It uses BPE to break down words into more frequently occurring subwords or character combinations. The above code snippets are essential to understand how GPT-2 tokenizes a given piece of text.

Implementing Tokenization with BPE

To demonstrate how tokenization is performed in a Jupyter Notebook, let’s consider an example where we tokenize a snippet of Python code:

import regex as re  # the `regex` module supports the \p{...} Unicode classes used by GPT-2's pattern

# Example string containing Python code  
example = """  
for i in range(1, 101):  
    if i % 3 == 0 and i % 5 == 0:  
        print("FizzBuzz")  
"""

# The regular expression pattern GPT-2 uses to pre-split text before applying BPE  
gpt2pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

# Tokenizing the example string  
print(re.findall(gpt2pat, example))  

The compiled pattern is the pre-tokenization regex from GPT-2’s released encoder: it carves text into chunks such as contractions, runs of letters, runs of numbers, punctuation, and whitespace, and the BPE merges are then applied only within each chunk. The re.findall() function extracts all of these chunks from the example string.

The Encoder Class

Understanding how the Encoder class works is crucial for grasping the tokenization behavior:

class Encoder:

    def bpe(self, token):  
        word = list(token)  
        new_word = []  
        # Tokenization logic  
        # ...

    def encode(self, txt):  
        # Encoding logic  
        # ...

    def decode(self, tokens):  
        # Decoding logic  
        # ...

def get_encoder(model_name, models_dir):  
    # Module-level helper: load encoder.json and vocab.bpe and return an Encoder  
    # ...  

The Encoder class above is a simplified representation. The actual implementation includes specific methods for Byte Pair Encoding (bpe), encoding text to tokens (encode), and decoding tokens to text (decode), along with a module-level get_encoder helper that loads the appropriate encoder and BPE merges based on the model name and directory.

Tokenization Artifacts in Practical Use

The tokenization process we’ve outlined is not merely academic; it has real-world implications. Certain artifacts or “quirks” of tokenization emerge when using language models in practice, from partial or unstable tokens at the end of a prompt to rarely trained tokens like SolidGoldMagikarp triggering bizarre behavior.

Tokenization, while often underappreciated, is a cornerstone of language modeling. It shapes how models interpret text, influences their training efficiency, and affects their ability to generalize across tasks and languages. As LMs continue to evolve, the tokenization process will remain a critical area of research and development, with innovations in tokenization strategies promising to unlock new capabilities and efficiencies in language understanding and generation.