This package contains Pydantic data models that fully describe the `tokenizer.json` format used by `transformers` via `tokenizers`. This is useful, because working with this format directly is complicated.

The Hugging Face `tokenizers` representation does not reliably allow you to edit tokenizers as structured objects. This means that complex changes to tokenizers require you to edit the `tokenizer.json` file by hand, which is annoying, because the format of this file is complicated.
Furthermore, `tokenizers` does not give reasonable errors when parsing a tokenizer fails. It does give line/character numbers, but these point to the last character of the section where parsing fails. For example, inserting an illegal vocabulary item just tells you that there is an issue somewhere in the vocabulary, by pointing to the last character of the vocabulary as the place where the error occurs.
This package contains data models (Pydantic `BaseModel`s) that encode the same constraints as the `tokenizers` package. In other words, if you can create a model in this package, the `tokenizers` package can parse it. This allows you to progressively edit `tokenizer.json` files, all the while getting productive error messages.
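To make the "productive error messages" point concrete, here is a toy, hand-rolled validator — not skeletoken's actual API, just a stdlib sketch — that reports the exact offending vocabulary entry instead of pointing at the last character of the vocabulary:

```python
import json

# Toy illustration (not skeletoken's internals): validate a
# tokenizer.json-like structure and point at the exact bad entry.
def validate_vocab(data: dict) -> list[str]:
    """Return human-readable problems found in the vocabulary."""
    problems = []
    vocab = data.get("model", {}).get("vocab")
    if not isinstance(vocab, dict):
        problems.append("model.vocab: expected an object mapping token -> id")
        return problems
    for token, idx in vocab.items():
        if not isinstance(idx, int) or idx < 0:
            problems.append(
                f"model.vocab[{token!r}]: id must be a non-negative integer, got {idx!r}"
            )
    return problems

broken = json.loads('{"model": {"vocab": {"hello": 0, "world": "oops"}}}')
print(validate_vocab(broken))
# ["model.vocab['world']: id must be a non-negative integer, got 'oops'"]
```

A structured model can tell you *which* entry is wrong and *why*, which is the kind of feedback the Pydantic models in this package give you.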
Install it via pip:

```bash
pip install skeletoken
```

Here are some examples of what skeletoken can do:
skeletoken autofixes any tokenizer you load; see the automatic checks for what gets fixed. For example, the Qwen/Qwen3-0.6B tokenizer has a lot of special tokens that are not part of the regular tokenizer vocabulary. This leads to a mismatch between the size of the tokenizer and the number of tokens it can produce.
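The mismatch is easy to see in miniature. This is a toy sketch of the situation (not skeletoken's internals): special tokens stored outside the main vocabulary make "vocab size" ambiguous.

```python
# Toy illustration: a regular vocabulary plus separately stored
# special tokens, as in many Hugging Face tokenizers.
vocab = {"hello": 0, "world": 1}      # regular vocabulary
added_tokens = {"<|endoftext|>": 2}   # special tokens stored separately

vocab_size = len(vocab)                         # what vocab_size reports
total_tokens = len(vocab) + len(added_tokens)   # what len(tokenizer) reports

print(vocab_size, total_tokens)  # 2 3
```

skeletoken resolves this by folding the missing special tokens into the vocabulary, as the real example below shows.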
```python
from transformers import AutoTokenizer

from skeletoken import TokenizerModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
# Mismatch due to missing special tokens
print(tokenizer.vocab_size)  # 151643
print(len(tokenizer))  # 151669

# Load a model from the hub.
tokenizer_model = TokenizerModel.from_pretrained("Qwen/Qwen3-0.6B")
# Convert the tokenizer to transformers
tokenizer = tokenizer_model.to_transformers()
print(tokenizer.vocab_size)  # 151669
print(len(tokenizer))  # 151669
```

skeletoken can add components to a tokenizer. First we load one, and inspect it:
```python
from skeletoken import TokenizerModel

# Directly pull a tokenizer from the hub
tokenizer_model = TokenizerModel.from_pretrained("gpt2")
print(tokenizer_model.model.type)
# ModelType.BPE
print(tokenizer_model.pre_tokenizer.type)
# PreTokenizerType.BYTELEVEL
```

We can then add a digit splitter to the tokenizer.
```python
from skeletoken import TokenizerModel
from skeletoken.pretokenizers import DigitsPreTokenizer

model = TokenizerModel.from_pretrained("gpt2")
tok = model.to_tokenizer()

# Create the digits pretokenizer and add it to the model.
digits = DigitsPreTokenizer(individual_digits=True)
model = model.add_pre_tokenizer(digits)
new_tok = model.to_tokenizer()

print(tok.encode("hello 123").tokens)
# ['hello', 'Ġ123']
print(new_tok.encode("hello 123").tokens)
# ['hello', 'Ġ', '1', '2', '3']
```

For background, see this blog post. Decasing is super easy using skeletoken.
```python
from tokenizers import Tokenizer

from skeletoken import TokenizerModel

model_name = "intfloat/multilingual-e5-small"

tokenizer = Tokenizer.from_pretrained(model_name)
print([tokenizer.encode(x).tokens for x in ["Amsterdam", "amsterdam"]])
# [['<s>', '▁Amsterdam', '</s>'], ['<s>', '▁am', 'ster', 'dam', '</s>']]

model = TokenizerModel.from_pretrained(model_name)
model = model.decase_vocabulary()
lower_tokenizer = model.to_tokenizer()
print([lower_tokenizer.encode(x).tokens for x in ["Amsterdam", "amsterdam"]])
# [['<s>', '▁amsterdam', '</s>'], ['<s>', '▁amsterdam', '</s>']]
```

For background, see this blog post. Like decasing, turning any tokenizer into a greedy one is super easy using skeletoken.
```python
from tokenizers import Tokenizer

from skeletoken import TokenizerModel

model_name = "gpt2"

tokenizer = Tokenizer.from_pretrained(model_name)
print([tokenizer.encode(x).tokens for x in [" hellooo", " bluetooth"]])
# [['Ġhell', 'ooo'], ['Ġblu', 'etooth']]

model = TokenizerModel.from_pretrained(model_name)
model.make_model_greedy()
greedy_tokenizer = model.to_tokenizer()
print([greedy_tokenizer.encode(x).tokens for x in [" hellooo", " bluetooth"]])
# [['Ġhello', 'oo'], ['Ġblue', 'too', 'th']]
```

Here's a rough roadmap:
- ✅ Add automated lowercasing (see blog)
- ✅ Add vocabulary changes + checks (e.g., check the merge table if a token is added)
- ✅ Add helper functions for adding modules
- ✅ Add secondary constraints (e.g., if an `AddedToken` refers to a vocabulary item that does not exist, we should crash)
- ✅ Add a front end for the Hugging Face trainer
- ✅ Add automatic model editing
MIT
Stéphan Tulkens
