Description
I have been fine-tuning different LLMs (mainly the Llama family) since last year and use peft with LoraConfig all the time with no issues.
Just recently I was fine-tuning Llama 70B on multiple GPUs using accelerate, then saving the adapter once training is done. (This has always been my setup.)
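For reference, here is a minimal sketch of that training-and-save step (the paths and the LoraConfig values are placeholders, not my exact settings):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "path/to/llama-70b"   # placeholder
output_dir = "path/to/adapter"   # placeholder

# Illustrative LoraConfig; my real values differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.float16)
model = get_peft_model(base_model, lora_config)
# ... training loop driven by accelerate ...
model.save_pretrained(output_dir)  # writes adapter_config.json + adapter_model.safetensors
```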
However, now I want to load the adapter into the base model as follows:
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_path)
```
Now I am getting this warning:
```
UserWarning: Found missing adapter keys while loading the checkpoint:
```
followed by a list of LoRA weight names.
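To capture the full warning text (including every missing key), it can be recorded programmatically; a sketch, assuming the same `base_model` and `adapter_path` as above:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    model = PeftModel.from_pretrained(base_model, adapter_path)

for w in caught:
    print(w.message)  # full warning text, including every missing adapter key
```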
I tried changing the LoraConfig parameters, but the problem persists.
Can anyone please tell me what the issue is here and how to fix it? I do not know which package is causing this, accelerate or peft; I downgraded to version 0.14 but the error persists.
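For what it is worth, the keys actually stored in the saved checkpoint can be inspected directly with safetensors and compared against the names listed in the warning (a sketch; the file path is a placeholder):

```python
from safetensors import safe_open

# Placeholder path; this is the directory passed to save_pretrained.
adapter_file = "path/to/adapter/adapter_model.safetensors"

with safe_open(adapter_file, framework="pt") as f:
    saved_keys = sorted(f.keys())

print(f"{len(saved_keys)} tensors in checkpoint")
for key in saved_keys[:10]:  # first few key names, to compare with the warning
    print(key)
```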
I am using the latest versions of `peft`, `transformers`, `accelerate`, and `trl`.
Note: I am also using the same model-loading setup during training and inference.
I have already looked at the issue below, which seems to be the same problem, but I load my model using `AutoModelForCausalLM` in both cases:
https://github.com/huggingface/peft/issues/2566