Ronit D. Gross<sup>a,1</sup>, Yarden Tzach<sup>a,1</sup>, Tal Halevi<sup>a</sup>, Ella Koresh<sup>a</sup> and Ido Kanter<sup>a,b,*</sup>
<sup>a</sup>Department of Physics, Bar-Ilan University, Ramat-Gan, 52900, Israel
<sup>b</sup>Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, 52900, Israel
*Corresponding author at: Department of Physics, Bar-Ilan University, Ramat-Gan, 52900, Israel.
E-mail address: ido.kanter@biu.ac.il (I. Kanter)
<sup>1</sup>These authors contributed equally to this work.
If you find this repository useful, please cite:
@misc{gross2025tinylanguagemodels,
title={Tiny language models},
author={Ronit D. Gross and Yarden Tzach and Tal Halevi and Ella Koresh and Ido Kanter},
year={2025},
eprint={2507.14871},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.14871},
}
This repository demonstrates pre-training a compact BERT model (optionally with convolutional layers) on Wikipedia using Masked Language Modeling (MLM), and then fine-tuning it for text classification tasks such as AG News or DBpedia. The code is modular, PyTorch-based, and built on Hugging Face Transformers.
A prominent achievement of natural language processing (NLP) is its ability to understand and generate meaningful human language. This capability relies on complex feedforward transformer-block architectures pre-trained as large language models (LLMs). However, LLM pre-training is currently feasible only for a few dominant companies, owing to the immense computational resources required, which limits broader research participation and creates a critical need for more accessible alternatives. In this study, we explore whether tiny language models (TLMs) exhibit the same key qualitative features as LLMs. We demonstrate that TLMs show a clear performance gap between pre-trained and non-pre-trained models across classification tasks, indicating that pre-training is effective even at a tiny scale. The gap widens with the size of the pre-training dataset and with greater overlap between the tokens of the pre-training and classification datasets. Furthermore, the classification accuracy achieved by a pre-trained deep TLM architecture can be replicated by a soft committee of multiple, independently pre-trained shallow architectures, enabling low-latency TLMs without sacrificing classification accuracy. Our results are based on pre-training BERT-6 and variants of BERT-1 on subsets of the Wikipedia dataset and evaluating their performance on the FewRel, AG News, and DBpedia classification tasks. Future research on TLMs is expected to further illuminate the mechanisms underlying NLP, especially since such biologically inspired models suggest that TLMs may be sufficient for children or adolescents to develop language.
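The token-overlap effect mentioned in the abstract can be illustrated with a short sketch. The code below is not part of the repository: it assumes the stock `bert-base-uncased` tokenizer and placeholder text lists, and simply measures what fraction of a classification dataset's tokens also appear in the pre-training corpus.

```python
# Hedged illustration (not from the repository): estimating token overlap between
# a pre-training corpus and a classification corpus with a standard BERT tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def token_set(texts):
    """Return the set of token ids appearing in a list of raw texts."""
    ids = set()
    for text in texts:
        ids.update(tokenizer(text, add_special_tokens=False)["input_ids"])
    return ids

# Placeholder corpora; in practice these would be the Wikipedia subset and the task's training split.
pretrain_tokens = token_set(["Wikipedia sentences used for MLM pre-training ..."])
task_tokens = token_set(["AG News headlines used for classification ..."])

# Fraction of the classification task's tokens already seen during pre-training.
overlap = len(pretrain_tokens & task_tokens) / max(len(task_tokens), 1)
print(f"Token overlap: {overlap:.2%}")
```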
- Pre-train compact BERT with optional convolutional layers
- Fine-tune for text classification
- Dataset loading, sampling, and tokenization utilities
- Modular and easy-to-extend structure
pip install torch datasets transformers tqdm
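An optional sanity check after installation (it assumes only the packages above) confirms the library versions and whether a GPU is visible:

```python
import torch
import transformers
import datasets

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
print("CUDA available:", torch.cuda.is_available())
```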
Pre-train BERT-6 on Wikipedia
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

# init_parser, load_wikipedia_subset, build_bert_with_optional_conv_for_pre_train
# and mlm_train are helpers provided by this repository.
args = init_parser('agnews')

# Load a subset of Wikipedia for MLM pre-training
wiki_dataloader = load_wikipedia_subset(args.pre_train_size)

# Build a compact BERT (optionally with convolutional layers) with an MLM head
pre_train_model = build_bert_with_optional_conv_for_pre_train(
    hidden_size=args.head_size * args.num_attention_heads,
    num_hidden_layers=args.bert_layers,
    num_conv_layers=args.conv_layers,
    kernel_size=args.kernel,
    num_attention_heads=args.num_attention_heads,
    intermediate_size=args.head_size * args.num_attention_heads * 4,
    max_position_embeddings=args.max_length,
    conv_channels_dim=args.d,
    vocab_size=args.vocab_size,
    cls_token_id=args.cls_token_id,
    pad_token_id=args.pad_token_id,
    sep_token_id=args.sep_token_id,
    mask_token_id=args.mask_token_id,
)

optimizer = AdamW(pre_train_model.parameters(), lr=args.pre_train_lr, weight_decay=args.pre_train_weight_decay)
epochs = args.pre_train_epochs
# The scheduler is stepped once per epoch, so both step counts are expressed in epochs.
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=epochs // 100, num_training_steps=epochs)

for epoch in range(epochs):
    mlm_train(wiki_dataloader, pre_train_model, optimizer)
    scheduler.step()
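The repository's `mlm_train` applies masking internally. For readers unfamiliar with MLM, the standalone sketch below (using the stock `bert-base-uncased` tokenizer rather than the repository's utilities) shows the standard masking scheme provided by Hugging Face, in which roughly 15% of tokens are selected and the labels are set to -100 everywhere else:

```python
# Standalone illustration of MLM masking (not the repository's mlm_train).
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("Tiny language models are pre-trained on Wikipedia.", return_tensors="pt")
batch = collator([{k: v[0] for k, v in encoding.items()}])

# input_ids now contain [MASK] tokens; labels are -100 except at the masked positions.
print(tokenizer.decode(batch["input_ids"][0]))
print(batch["labels"][0])
```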
Fine-tune for Classification (e.g., AG News)
import torch

# Assumes the objects from the pre-training step (args, pre_train_model, AdamW,
# get_linear_schedule_with_warmup) are still in scope; load_agnews_dataloaders,
# build_bert_with_optional_conv_for_classification, cls_train and cls_test are
# helpers provided by this repository.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_dataloader, test_dataloader = load_agnews_dataloaders(args.samples_per_label_train, args.samples_per_label_test)

# Build the same architecture with a classification head
model = build_bert_with_optional_conv_for_classification(
    hidden_size=args.head_size * args.num_attention_heads,
    num_hidden_layers=args.bert_layers,
    num_conv_layers=args.conv_layers,
    kernel_size=args.kernel,
    num_attention_heads=args.num_attention_heads,
    intermediate_size=args.head_size * args.num_attention_heads * 4,
    max_position_embeddings=args.max_length,
    conv_channels_dim=args.d,
    num_labels=args.num_labels,
    vocab_size=args.vocab_size,
    cls_token_id=args.cls_token_id,
    pad_token_id=args.pad_token_id,
    sep_token_id=args.sep_token_id,
    mask_token_id=args.mask_token_id,
)

# Initialize the classifier with the pre-trained weights; strict=False leaves the
# untransferred classification head randomly initialized.
model.load_state_dict(pre_train_model.state_dict(), strict=False)
model.to(device)

optimizer = AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
criterion = torch.nn.CrossEntropyLoss()
num_epochs = args.epochs
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=num_epochs)

for epoch in range(num_epochs):
    cls_train(train_dataloader, optimizer, criterion, model)
    scheduler.step()

acc = cls_test(test_dataloader, model)
print(f"Test Accuracy: {acc:.4f}")
The FewRel dataset is not available through the Hugging Face `datasets` library and must be downloaded manually.
- 📥 Download the dataset from the official site: https://thunlp.github.io/fewrel.html
After downloading:
- Place the files (`train_wiki.json`, `val_wiki.json`) into a local folder.
- Update your code to load them directly via file path (a loading sketch follows the note below).
📌 Note: This manual step is not handled automatically in the repository. Make sure you’ve placed the files before attempting to fine-tune on FewRel.
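A minimal loading sketch, assuming the standard FewRel JSON layout (a dict mapping each relation ID to a list of instances with a `tokens` field) and a hypothetical local path; adjust both to match your copy:

```python
import json

# Hypothetical local path; point this at wherever you placed the downloaded files.
with open("data/fewrel/train_wiki.json", "r", encoding="utf-8") as f:
    fewrel_train = json.load(f)

# Assumed layout: {relation_id: [{"tokens": [...], "h": [...], "t": [...]}, ...]}
texts, labels = [], []
for relation_id, instances in fewrel_train.items():
    for instance in instances:
        texts.append(" ".join(instance["tokens"]))
        labels.append(relation_id)

print(f"{len(texts)} sentences across {len(fewrel_train)} relations")
```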
- To switch datasets, modify the `init_parser(...)` call to `init_parser('dbpedia')` or `init_parser('fewrel')`.
- Adjust model and training parameters directly in the scripts as desired.
Built using PyTorch and Hugging Face Transformers.