TLoRA: Tri-Matrix Low-Rank Adaptation for Large Language Models

This repository implements TLoRA, a parameter-efficient fine-tuning method for large language models built on a tri-matrix low-rank adaptation strategy. It is the code release accompanying the paper below, which details the method and experimental results.

Paper: https://arxiv.org/abs/2504.18735

Overview

TLoRA aims to significantly reduce the number of trainable parameters while retaining high model performance; a minimal sketch of the adaptation idea follows the list below. The repository includes:

  • Data utilities: Data loading and preprocessing (data.py)
  • Model adaptation: Core implementation of TLoRA (tlora.py)
  • Experiment scripts: Running and logging experiments (experiments.py)
  • Visualization tools: Plotting training dynamics and model analysis (plot.py, figures.ipynb)
  • Logs and models: Experiment logs in logs/ and checkpoints in models/
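
The core idea, in rough form: rather than training a full-rank weight update, the update is factored through three small matrices. The sketch below is an illustration under stated assumptions (A and C as fixed random projections, a trainable core B, and a fixed scaling factor); it is not the authoritative parameterization, which lives in tlora.py and the paper.

import torch
import torch.nn as nn

class TriMatrixLinear(nn.Module):
    """Illustrative tri-matrix low-rank adapter around a frozen nn.Linear."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        # Assumed scheme: A and C are fixed random projections; only the
        # small rank x rank core B is trained (rank**2 parameters per layer).
        self.A = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5,
                              requires_grad=False)
        self.B = nn.Parameter(torch.zeros(rank, rank))  # trainable core
        self.C = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5,
                              requires_grad=False)
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.A @ self.B @ self.C  # low-rank update, d_out x d_in
        return self.base(x) + self.scale * (x @ delta.T)

Because B is initialized to zero, the adapted layer starts out identical to the pretrained one, and under this scheme each adapted layer trains only rank**2 values, which is where the parameter savings come from.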

Installation

A Python 3.8+ environment is required. We recommend using a virtual environment:

python -m venv venv
source venv/bin/activate  # On Linux/macOS
venv\Scripts\activate     # On Windows

Install all required packages with:

pip install -r requirements.txt

If requirements.txt is not present, refer to the installation details in the paper.
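
Should you need to assemble the environment by hand, the core dependencies are likely PyTorch, Hugging Face Transformers and Datasets, Matplotlib, and Jupyter; this list is an assumption based on the repository contents, not a pinned specification:

pip install torch transformers datasets matplotlib jupyter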

Usage

Running Experiments

Use the experiments.py script to launch experiments. Output logs will be saved in the logs/ directory.
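
A typical invocation looks like this (assuming the script runs without required arguments; check its argument parser or the top of experiments.py for the actual options):

python experiments.py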

Visualizing Results

Open the figures.ipynb notebook to view plots of training curves, eigenvalue distributions, and other diagnostics:

jupyter notebook figures.ipynb

The notebook leverages functions from plot.py to generate detailed visualizations that complement the results reported in the paper.
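
For quick checks outside the notebook, the plotting utilities can also be imported directly from plot.py. The function and argument names below are hypothetical, shown only to illustrate the pattern:

# Hypothetical names for illustration; see plot.py for the real API.
from plot import plot_training_curves

plot_training_curves(log_dir="logs/")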

Code Structure

  • data.py: Data loading and preprocessing.
  • tlora.py: Core implementation of the TLoRA adaptation method.
  • experiments.py: Scripts for executing experiments and logging results.
  • plot.py: Plotting and visualization utilities.
  • figures.ipynb: Interactive notebook for analysis and result visualization.
  • logs/: Directory containing experiment logs.
  • models/: Pretrained model checkpoints and adapted model weights.
  • paper/: Paper, supplementary materials, and related figures.

Results

Our experiments show that TLoRA achieves performance competitive with conventional fine-tuning while training only a small fraction of the parameters. Detailed layer-wise analysis, weight distributions, and training curves are available in the figures.ipynb notebook and in the logged outputs under logs/.

Citation

If you use this work in your research, please cite our paper:

@misc{tlorapaper,
  title={TLoRA: Tri-Matrix Low-Rank Adaptation for Large Language Models},
  author={Tanvir Islam},
  year={2025},
  eprint={2504.18735},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2504.18735},
}

GitHub Repository: https://github.com/itanvir/tlora
