Combining loss for multiple dataloaders

Hi,

I am implementing a module where I am trying to use a labeled and an unlabeled dataset for semi-supervised classification. The solution provided here (switch between multiple train dataloaders) on how to load 2 dataloaders is a big help. As I understand it, the loss there is calculated alternately for the labeled and the unlabeled data. However, in my problem I have the following issue:

  1. The loss obtained from the labeled and the unlabeled dataloader needs to be added per batch.
  2. This combined loss is then optimized using loss.backward().

I tried to use the solution here (https://pytorch-lightning.readthedocs.io/en/latest/multiple_loaders.html), however in my case the batch sizes of the two dataloaders are different, so this doesn’t work.

Any help is appreciated. Please let me know if something is unclear.

On the GitHub issue, you mentioned that you want to use the labeled dataset in one epoch and the unlabeled dataset in another epoch. If this is what you meant, then I am not sure how you are going to introduce the loss for the unlabeled dataset in the first epoch and the same for the other dataset in the second epoch.

Yeah, 2 losses need to be calculated. It can be done within the same epoch as per the solution provided by @awaelchli, however I would want to aggregate/add both losses for the optimizer after each batch or epoch.

Of the solutions proposed on GitHub, the first one gives you a different dataset in each epoch. The second one will give you a batch from both datasets in each step, for all epochs.

With the second solution, you can simply compute the loss separately for each dataset. With the first solution I don’t see how that’s possible. Maybe if you can share the exact workflow of your use case, I can suggest a solution.
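For example, something along these lines (just a rough sketch, assuming the second solution hands training_step one labeled and one unlabeled batch per step; the batch keys, the backbone, and the unsupervised criterion below are placeholders, not anything defined in the issue):

    # Rough sketch only: assumes training_step receives one labeled and one
    # unlabeled batch per step; keys and the unsupervised criterion are placeholders.
    import torch.nn as nn
    import torch.nn.functional as F
    import pytorch_lightning as pl

    class SemiSupervisedModule(pl.LightningModule):
        def __init__(self, backbone: nn.Module, unsupervised_loss):
            super().__init__()
            self.backbone = backbone
            self.unsupervised_loss = unsupervised_loss   # placeholder for the unlabeled criterion

        def forward(self, x):
            return self.backbone(x)

        def training_step(self, batch, batch_idx):
            inputs_x, targets_x = batch["labeled"]       # labeled batch
            inputs_u, _ = batch["unlabeled"]             # unlabeled batch (labels ignored)
            loss_x = F.cross_entropy(self(inputs_x), targets_x)   # supervised loss
            loss_u = self.unsupervised_loss(self(inputs_u))       # unsupervised loss
            return loss_x + loss_u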

Hi @goku, thanks for the prompt reply. Yes, the second solution seems more fitting to my needs. However, there is one small change I wanted to incorporate, as shown below:

    def training_step(...):
        if self.current_epoch % 2 == 0:
            # apply loss with labels
            ...
        else:
            # apply unsupervised loss
            ...
        return labelled_loss + unsupervised_loss

But since the 2 losses are being calculated alternately, I am unable to return them together.

Hi @goku, apologies for the late reply, but here is the expected workflow:

    model.train()
    torch.set_grad_enabled(True)
    train_loader = zip(labeled_trainloader, unlabeled_trainloader)
    for batch_idx, (data_x, data_u) in enumerate(train_loader):
        inputs_x, targets_x = data_x                      # labeled batch
        (inputs_u_w, inputs_u_s), _ = data_u              # weakly/strongly augmented unlabeled batch
        inputs = torch.cat((inputs_x, inputs_u_w, inputs_u_s))
        logits = model(inputs)
        Lx = ...  # calculate labeled loss using targets_x and logits
        Lu = ...  # calculate unlabeled loss using some targets_u and logits
        loss = Lx + Lu

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Just concatenate your datasets and create a single dataloader; in each iteration of training_step you will get a batch from both datasets. Simply calculate the losses, add them, and return the result.
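For instance, something like this (a rough sketch of one way to combine the two datasets into a single dataloader; PairedDataset and the wrap-around indexing are made up for illustration, not a Lightning/PyTorch built-in):

    # Rough sketch: a single dataloader that yields a (labeled, unlabeled) pair
    # per item; PairedDataset is an illustrative helper, not a library class.
    from torch.utils.data import Dataset, DataLoader

    class PairedDataset(Dataset):
        def __init__(self, labeled_ds, unlabeled_ds):
            self.labeled_ds = labeled_ds
            self.unlabeled_ds = unlabeled_ds

        def __len__(self):
            return max(len(self.labeled_ds), len(self.unlabeled_ds))

        def __getitem__(self, idx):
            # wrap around the shorter dataset so both are sampled every epoch
            labeled = self.labeled_ds[idx % len(self.labeled_ds)]
            unlabeled = self.unlabeled_ds[idx % len(self.unlabeled_ds)]
            return labeled, unlabeled

    train_loader = DataLoader(PairedDataset(labeled_dataset, unlabeled_dataset),
                              batch_size=64, shuffle=True)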

Thanks @goku. I looked at that approach, but it doesn’t really work, because the batch sizes of the 2 datasets are different in my case.

I think I figured it out: I am returning the final loader as zip(labeled_loader, unlabeled_loader) from the data module and also setting reload_dataloaders_every_epoch=True on the trainer.
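Roughly what it looks like (a simplified sketch; the dataset names and batch sizes are just illustrative, and depending on the Lightning version the reload flag may be named differently):

    # Simplified sketch of the setup described above; names and batch sizes
    # are illustrative only.
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader

    class SemiSupervisedDataModule(pl.LightningDataModule):
        def __init__(self, labeled_dataset, unlabeled_dataset):
            super().__init__()
            self.labeled_dataset = labeled_dataset
            self.unlabeled_dataset = unlabeled_dataset

        def train_dataloader(self):
            labeled_loader = DataLoader(self.labeled_dataset, batch_size=64, shuffle=True)
            unlabeled_loader = DataLoader(self.unlabeled_dataset, batch_size=448, shuffle=True)
            # zip stops at the shorter loader and is exhausted after one pass,
            # hence reload_dataloaders_every_epoch=True on the trainer below
            return zip(labeled_loader, unlabeled_loader)

    # each training_step then receives a (labeled_batch, unlabeled_batch) tuple
    trainer = pl.Trainer(max_epochs=10, reload_dataloaders_every_epoch=True)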

I didn’t know the batch sizes of the two datasets were different.