I set pl.Trainer(precision=16), but the model’s dtype is still float32. Is there a way to train the whole model using float16?
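
For reference, here is a minimal sketch that reproduces what I'm seeing (the toy model and data are just placeholders, run on a GPU machine):

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

data = DataLoader(TensorDataset(torch.randn(8, 4), torch.randn(8, 1)), batch_size=4)
model = ToyModel()
trainer = pl.Trainer(precision=16, max_epochs=1)
trainer.fit(model, data)

# Even after fitting with precision=16, the weights still report float32.
print(next(model.parameters()).dtype)  # torch.float32
```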