Combining loss for multiple dataloaders

Hi @goku, thanks for the prompt reply. Yes, the second solution fits my needs better. However, there is one small change I wanted to incorporate, as per below:

```python
def training_step(…):
    if self.current_epoch % 2 == 0:
        # apply loss with labels
    else:
        # apply unsupervised loss
    return (labelled_loss + unsupervised_loss)
```

but since the two losses are computed on alternating epochs, I am unable to return them together.
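One way around this (a sketch, not Lightning-specific) is to cache the most recent value of each loss, so a combined total is always available even though only one loss is computed per epoch. Note the cached value from a past epoch should only be used for logging the combined number; the gradient step should use only the loss that was actually computed this epoch, since the old computation graph is already freed. The `AlternatingLoss` helper below is hypothetical, not part of Lightning:

```python
class AlternatingLoss:
    """Hypothetical helper: remembers the last value of each loss
    so a combined total can be reported even though only one loss
    is computed on any given epoch."""

    def __init__(self):
        self.last_labelled = 0.0
        self.last_unsupervised = 0.0

    def step(self, current_epoch, compute_labelled, compute_unsupervised):
        # Even epochs: recompute the supervised (labelled) loss.
        # Odd epochs: recompute the unsupervised loss.
        if current_epoch % 2 == 0:
            self.last_labelled = compute_labelled()
        else:
            self.last_unsupervised = compute_unsupervised()
        # Combined total uses the freshest copy of each term.
        return self.last_labelled + self.last_unsupervised
```

Inside `training_step` you would backpropagate only the freshly computed loss and log the combined value separately.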