Re-training a fine-tuned model for a new class

Hi, I have fine-tuned a classification model for 38 classes. Now I want to re-train that fine-tuned model with only 1 new class.

So I have loaded the checkpoint from the last fine-tune like this. How can I do that? Please find my code below:

retrain_model = SentenceTagger.load_from_checkpoint(
    "lightning_logs/sentence-comments/version_0/checkpoints/epoch=4-step=5349.ckpt",
    n_classes=len(LABEL_COLUMNS),
    learning_rate=0.0001,
    freeze=True,
    n_warmup_steps=20,
    n_training_steps=total_training_steps
)

But I am getting an error like:
RuntimeError: Error(s) in loading state_dict for SentenceTagger:
size mismatch for classifier.weight: copying a param with shape torch.Size([38, 768]) from checkpoint, the shape in current model is torch.Size([1, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([38]) from checkpoint, the shape in current model is torch.Size([1]).

I have looked through different forums to solve the issue. I tried skip_mismatch=True, but it doesn't help.
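Not sure if this is the intended way, but one common workaround for this kind of size mismatch is to load the checkpoint's state_dict manually, drop the tensors whose shapes no longer match, and load the rest with strict=False so the new classifier head keeps its fresh initialization. A minimal sketch (TinyTagger is a hypothetical stand-in for SentenceTagger, keeping only the 768-d classifier head):

import torch
import torch.nn as nn

# Hypothetical stand-in for the fine-tuned model: just the linear
# classifier head on top of a 768-d encoder output.
class TinyTagger(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.classifier = nn.Linear(768, n_classes)

# Pretend state_dict from the 38-class fine-tune; in practice this would
# come from torch.load(...)["state_dict"] of the Lightning checkpoint.
old_state = TinyTagger(n_classes=38).state_dict()

# New model with 1 class; its classifier shapes differ from the checkpoint.
new_model = TinyTagger(n_classes=1)

# Keep only checkpoint tensors whose shape still matches, then load
# non-strictly. The mismatched classifier head is re-initialized.
own_state = new_model.state_dict()
filtered = {k: v for k, v in old_state.items()
            if k in own_state and v.shape == own_state[k].shape}
missing, unexpected = new_model.load_state_dict(filtered, strict=False)
print(sorted(missing))  # the classifier weight and bias are skipped

After this, only the (randomly initialized) classifier head needs to be trained, which matches what you want when the number of classes changes.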

Thanks

I'm running into a similar issue. Essentially I'm using a pre-trained LM and trying to replace the current positional encoder with one that has a different max length. Did you ever figure out a workaround?

P.S. Is skip_mismatch a valid argument? I don’t see it in the documentation.

I am not an LLM expert, but AFAIK when you replace the positional encoder with one of a different max length, you end up with a different number of parameters, so the checkpoint tensors can't be loaded into it directly.
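If the positional encoder is a learned embedding table, one approach I've seen (not guaranteed to preserve quality, and the lengths here are made-up for illustration) is to create a larger table, copy the trained rows into it, and leave the extra rows at their fresh initialization:

import torch
import torch.nn as nn

# Hypothetical: grow a learned positional-embedding table from max
# length 512 to 1024, with hidden size 768.
old_pos = nn.Embedding(512, 768)   # stands in for the pre-trained table
new_pos = nn.Embedding(1024, 768)  # new table with the longer max length

with torch.no_grad():
    # Copy the 512 trained rows; rows 512..1023 keep their random init,
    # so positions beyond the original max length start untrained.
    new_pos.weight[:512] = old_pos.weight

The new rows have never seen training, so some continued fine-tuning on longer sequences is usually needed before they are useful.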