A collection of ready-to-run notebooks and practical guides for training and fine-tuning large language models (LLMs) in 2025.
- Train models on labeled input-output pairs using Teacher Forcing (SFT sketch below).
- Align models with human preferences using pairwise comparisons (preference-optimization sketch below).
- Reduce memory and compute requirements using 8-bit and 4-bit model formats via `BitsAndBytes` (quantization sketch below).
- Fine-tune models efficiently by injecting low-rank adapters, using the `peft` library (LoRA sketch below).
- Understand layer-wise learning dynamics with the `spectrum` library (sketch below).
- Assess performance with metrics like `Accuracy`, `BLEU`, `ROUGE`, `BERTScore`, and Levenshtein distance (evaluation sketch below).
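
For the labeled input-output pairs, here is a minimal sketch of teacher forcing with Hugging Face `transformers`: the model conditions on the ground-truth prefix at every step, and the loss is computed only over the response tokens. The model checkpoint and the QA pair are illustrative placeholders, not taken from the notebooks.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is the capital of France?\nA:"
answer = " Paris"

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids

# Teacher forcing: at every position the model conditions on the
# ground-truth prefix; transformers shifts labels internally, so the
# loss is next-token cross-entropy over the unmasked positions.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # don't train on the prompt itself

loss = model(input_ids=full_ids, labels=labels).loss
loss.backward()  # a real training loop would follow with optimizer.step()
```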
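For pairwise preference alignment, one common route is Direct Preference Optimization via the TRL library. This sketch assumes a tiny in-memory dataset with `prompt`/`chosen`/`rejected` columns; the model name and hyperparameters are placeholders, and trainer argument names vary across TRL versions.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "gpt2"  # placeholder
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# One pairwise comparison: "chosen" is preferred over "rejected".
pairs = Dataset.from_dict({
    "prompt": ["Explain gravity in one sentence."],
    "chosen": ["Gravity is the attraction between masses."],
    "rejected": ["Gravity is when things are heavy."],
})

trainer = DPOTrainer(
    model=model,                 # a frozen reference copy is created internally
    args=DPOConfig(output_dir="dpo-demo", beta=0.1, max_steps=5),
    train_dataset=pairs,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL versions
)
trainer.train()
```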
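For quantization, this is how a model is loaded in 4-bit through the `bitsandbytes` integration in `transformers`. The checkpoint name is a placeholder, and a CUDA GPU is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 storage with bf16 compute; double quantization also
# compresses the quantization constants themselves.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",            # placeholder; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",              # requires a CUDA GPU
)
```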
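For parameter-efficient fine-tuning, a sketch of injecting low-rank adapters with `peft`; the rank, alpha, and target modules below are illustrative choices for GPT-2 (whose fused attention projection is `c_attn`).

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder

lora_config = LoraConfig(
    r=8,                        # rank of the two low-rank update matrices
    lora_alpha=16,              # scaling applied to the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters require gradients
```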
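The `spectrum` tool scores layers by signal-to-noise ratio and selects which modules to train. The sketch below is *not* the library's API; it only illustrates the layer-wise idea with a crude SNR proxy: score each weight matrix, then unfreeze only the top fraction.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder

scores = {}
for name, param in model.named_parameters():
    if param.ndim == 2:  # score weight matrices only
        # Crude SNR proxy: mean magnitude relative to standard deviation.
        scores[name] = (param.abs().mean() / (param.std() + 1e-8)).item()

# Freeze everything, then unfreeze the top 25% of scored layers.
top_k = int(len(scores) * 0.25)
keep = set(sorted(scores, key=scores.get, reverse=True)[:top_k])
for name, param in model.named_parameters():
    param.requires_grad = name in keep
```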
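For evaluation, a sketch using the Hugging Face `evaluate` library (metric ids like `"bleu"` and `"rouge"` follow its hub naming) plus a hand-rolled Levenshtein distance; `BERTScore` is loaded the same way but omitted here because it downloads a scoring model.

```python
import evaluate

preds = ["the cat sat on the mat"]
refs = ["the cat is on the mat"]

# Corpus-level metrics from the evaluate hub.
bleu = evaluate.load("bleu").compute(predictions=preds, references=refs)
rouge = evaluate.load("rouge").compute(predictions=preds, references=refs)
print(bleu["bleu"], rouge["rougeL"])

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein(preds[0], refs[0]))
```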
⚠️ These notebooks are optimized for demonstration and may use small datasets and short training times.