Multiple dataloaders in training_step() and use them separately

Hi, I’m figuring out how to use multiple dataloaders in training_step() of LightningModule. Currently, when I pass in a list of dataloaders, training_step() receives a list of batches, one from each dataloader simultaneously. However, my use case differs in that I want to process the batches from each dataset sequentially, one at a time.

For example, I have three datasets. At step i, I receive a batch from dataset 0 and update my model. At step i+1, I receive a batch from dataset 1 and update my model. At step i+2, I get a batch from dataset 2 and update my model. The process repeats until all samples from every dataset have been iterated.
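To make the desired iteration order concrete, here is a minimal round-robin sketch in plain Python (not a Lightning API, just an illustration of the interleaving I have in mind). The `round_robin` helper name is my own; it yields `(dataloader_idx, batch)` pairs, cycling through the loaders and dropping each one as it runs out:

```python
def round_robin(*loaders):
    """Yield (loader_idx, batch) pairs, taking one batch from each
    loader in turn until every loader is exhausted."""
    iterators = [iter(dl) for dl in loaders]
    active = list(range(len(iterators)))
    while active:
        for idx in list(active):  # copy so we can remove while iterating
            try:
                yield idx, next(iterators[idx])
            except StopIteration:
                active.remove(idx)  # this loader is exhausted

# Toy usage with three "datasets" of different lengths:
a, b, c = [0, 1, 2], [10, 11], [20, 21, 22, 23]
print(list(round_robin(a, b, c)))
# [(0, 0), (1, 10), (2, 20), (0, 1), (1, 11), (2, 21), (0, 2), (2, 22), (2, 23)]
```

In Lightning terms, each yielded pair would correspond to one call of training_step() with a batch from a single dataset.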

How can I implement this in PyTorch Lightning? Is there already support for this? I would be happy to dive in myself, but I don’t know where to start.