Finetuning using lit-llama

I recently went through the article "How To Finetune GPT Like Large Language Models on a Custom Dataset" and I am facing an error when I run the last command:


!python finetune_adapter.py \
    --data_dir data/dolly \
    --checkpoint_dir checkpoints/togethercomputer/RedPajama-INCITE-Base-3B-v1

Hi @pashikantipranith

Could you try one of the repo names listed in the error message instead? It might be that the checkpoints weren't downloaded correctly or that the model name is wrong. The download guide in the repo should help you: lit-parrot/download_stablelm.md at main · Lightning-AI/lit-parrot · GitHub

@aniketmaurya We might need to update the blogpost to reflect any changes made to the download instructions.
@carmocca @carlosgridai for visibility

You are missing the lit_model.pth file in the directory shown on the left.

For that, you should run python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/togethercomputer/RedPajama-INCITE-Base-3B-v1 as described in lit-parrot/download_redpajama_incite.md at main · Lightning-AI/lit-parrot · GitHub


Hi @pashikantipranith, as Adrian and Carlos mentioned, you're missing the Lit-Parrot version of the model weights. You need to download the weights and convert them to the Lit-Parrot format, as described in the blog and the how-to section:

# download weights
python scripts/download.py --repo_id togethercomputer/RedPajama-INCITE-Base-3B-v1

# convert weights to Lit-Parrot format
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/togethercomputer/RedPajama-INCITE-Base-3B-v1
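Once both commands finish, you can sanity-check the checkpoint directory before launching finetuning again. A minimal sketch (lit_model.pth is the file the conversion script produces; the other file names in `required` are assumptions and may differ between model repos):

```python
from pathlib import Path

def missing_checkpoint_files(
    checkpoint_dir,
    required=("lit_model.pth", "tokenizer.json"),
):
    """Return the names of required files absent from checkpoint_dir.

    An empty list means the directory looks ready for finetuning.
    """
    d = Path(checkpoint_dir)
    return [name for name in required if not (d / name).exists()]
```

For example, `missing_checkpoint_files("checkpoints/togethercomputer/RedPajama-INCITE-Base-3B-v1")` returning `["lit_model.pth"]` means the download ran but the conversion step was skipped.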