LLM-Cookbook

A collection of ready-to-run notebooks and practical guides for training and fine-tuning large language models (LLMs) in 2025.


Core Techniques Covered

Supervised Fine-Tuning (SFT)

Train models on labeled input-output pairs using teacher forcing.
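
A minimal sketch of one SFT step with the Hugging Face transformers library; the model name ("gpt2") and the instruction/response text are illustrative placeholders, not values from the notebooks.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Teacher forcing is handled internally: passing labels makes the model
# predict each token conditioned on the ground-truth previous tokens.
example = "Instruction: Translate 'hello' to French.\nResponse: bonjour"
batch = tokenizer(example, return_tensors="pt")

outputs = model(**batch, labels=batch["input_ids"])  # shifted cross-entropy loss
outputs.loss.backward()                              # an optimizer step would follow
```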

Direct Preference Optimization (DPO)

Align models with human preferences using pairwise comparisons.
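
As a rough sketch of the objective (the function name and arguments are illustrative, not the notebooks' code): given per-response log-probabilities under the policy and a frozen reference model, DPO increases the margin between the preferred and rejected response.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit rewards are log-probability ratios against the reference model.
    chosen_reward = policy_chosen_logp - ref_chosen_logp
    rejected_reward = policy_rejected_logp - ref_rejected_logp
    # Maximize the margin between the preferred and the rejected response.
    return -F.logsigmoid(beta * (chosen_reward - rejected_reward)).mean()
```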

Quantization

Reduce memory and compute requirements using 8-bit and 4-bit model formats via BitsAndBytes.
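
For orientation, a hedged sketch of 4-bit loading with transformers and bitsandbytes (requires a CUDA GPU; the model name is an arbitrary example):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 8-bit loading uses load_in_8bit=True instead
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",                    # illustrative model; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
```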

Parameter-Efficient Fine-Tuning (PEFT) with LoRA

Fine-tune models efficiently by injecting low-rank adapters with the peft library.
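
A minimal sketch with the peft library; the target module name below matches GPT-2's attention projection and would differ for other architectures:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling applied to the adapter output
    target_modules=["c_attn"],   # GPT-2 attention projection; model-specific
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights require gradients
```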

Signal-to-Noise Ratio (SNR) Analysis

Understand layer-wise learning dynamics with the spectrum library.
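
The sketch below is only a conceptual illustration of layer-wise SNR (it is not the spectrum library's API): it compares the largest singular values of a weight matrix, treated as signal, against the remaining ones, treated as noise.

```python
import torch

def layer_snr(weight: torch.Tensor, signal_fraction: float = 0.25) -> float:
    """Crude signal-to-noise estimate for a single weight matrix."""
    s = torch.linalg.svdvals(weight.float())   # singular values, largest first
    k = max(1, int(len(s) * signal_fraction))  # top fraction treated as signal
    signal = s[:k].sum()
    noise = s[k:].sum().clamp_min(1e-8)        # avoid division by zero
    return (signal / noise).item()
```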

Evaluation Metrics

Assess performance with metrics like Accuracy, BLEU, ROUGE, BERTScore, and Levenshtein distance.
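
For example, with the Hugging Face evaluate library (the predictions and references below are toy strings):

```python
import evaluate

predictions = ["the cat sat on the mat"]
references = ["the cat is on the mat"]

bleu = evaluate.load("bleu").compute(
    predictions=predictions,
    references=[references],   # BLEU accepts multiple references per prediction
)
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)

print(bleu["bleu"], rouge["rougeL"])
```

Levenshtein distance is typically computed with a separate package such as rapidfuzz or python-Levenshtein.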


Dataset Creation


Training

⚠️ These notebooks are optimized for demonstration and may use small datasets and short training times.
