DeepSpeed Stage 3 + quantization

Is it possible to use a Trainer with `deepspeed_stage_3` (i.e. `trainer = Trainer(…, strategy="deepspeed_stage_3", …)`) to fine-tune a pre-trained model that has been quantized to 8-bit? For example, a model loaded with:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    pretrained_fm,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    load_in_8bit=True,
)
```

I currently get a device-mismatch ("different device") error during the sanity-check phase. I also see that no `precision=8` option is currently supported by `lightning.Trainer`.