Multi-GPU computing

I am launching Fabric as shown below:

fabric = Fabric(accelerator="cuda", devices=2, strategy="auto", precision="16-mixed")

I was expecting the model to be trained in a distributed manner across the 2 GPUs. But from the output below, the model appears to be training separately on each GPU: I set 10 epochs, and each GPU runs the full 10 epochs on its own, with identical losses on both. Please suggest what I am missing.

Epoch: 0001/0010 | Batch 0000/0007 | Batch Train Loss: 0.9700
Epoch: 0001/0010 | Batch 0000/0007 | Batch Train Loss: 0.9700
Epoch: 0001/0010 | Batch 0001/0007 | Batch Train Loss: 0.9626
Epoch: 0001/0010 | Batch 0001/0007 | Batch Train Loss: 0.9626
Epoch: 0001/0010 | Batch 0002/0007 | Batch Train Loss: 0.9524
Epoch: 0001/0010 | Batch 0002/0007 | Batch Train Loss: 0.9524
Epoch: 0001/0010 | Train Loss: 0.9222 | Val Loss: 0.7932
Epoch: 0001/0010 | Train Loss: 0.9222 | Val Loss: 0.7931
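For concreteness, here is a stdlib-only sketch of the behaviour I expected (hypothetical names and sizes, not Fabric's actual API): with DDP-style sharding, each of the 2 ranks gets a disjoint, interleaved slice of the dataset, and both ranks print their own log lines, so each rank would see different batches:

```python
# Stdlib illustration (not Fabric itself) of DDP-style data sharding:
# each rank gets an interleaved, non-overlapping slice of the dataset,
# so the two GPUs would normally see *different* batches.
WORLD_SIZE = 2             # hypothetical: 2 GPUs / processes
DATASET = list(range(14))  # hypothetical: 14 samples -> 7 batches of 2

def shard(dataset, rank, world_size):
    """Interleaved sharding: rank r takes every world_size-th sample."""
    return dataset[rank::world_size]

shards = [shard(DATASET, r, WORLD_SIZE) for r in range(WORLD_SIZE)]

# Each rank runs the same training loop over its own shard; both ranks
# print, which is why every log line appears twice in the output above.
for rank, samples in enumerate(shards):
    print(f"rank {rank}: {samples}")

# The shards are disjoint and together cover the whole dataset:
assert set(shards[0]).isdisjoint(shards[1])
assert sorted(shards[0] + shards[1]) == DATASET
```

In my run, though, both ranks report identical per-batch losses, which looks as if both GPUs are processing the same data rather than disjoint shards like the above.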