Running meta-learning tutorial on TPU

I’m following this tutorial to implement meta-learning on Colab.

I installed the TPU version of PyTorch using the following command:

!pip install --quiet cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp39-cp39-linux_x86_64.whl
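The install itself seems to complete; as a sanity check I can import torch_xla and get an XLA device (a minimal check, assuming the standard torch_xla 2.0 API):

```python
# Quick sanity check right after the install (assumes the torch_xla 2.0 API)
import torch
import torch_xla.core.xla_model as xm

print(torch.__version__)   # expect 2.0.0
print(xm.xla_device())     # expect an XLA device such as xla:0
```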

When I run the training with 8 TPU cores, it throws an error in this file. It looks like the custom constructor arguments of FewShotBatchSampler cause the error: Lightning expects FewShotBatchSampler to take the same arguments as BatchSampler.
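To illustrate the mismatch: torch.utils.data.BatchSampler is constructed as BatchSampler(sampler, batch_size, drop_last), while the tutorial's sampler takes task-specific arguments. The sketch below is only my paraphrase of its signature (argument names may not match the tutorial exactly):

```python
from torch.utils.data import Sampler

# Lightning re-creates batch samplers for multi-process runs using the standard
# signature: torch.utils.data.BatchSampler(sampler, batch_size, drop_last)

# Rough shape of the tutorial's sampler (argument names are my paraphrase,
# not guaranteed to match the tutorial exactly):
class FewShotBatchSampler(Sampler):
    def __init__(self, dataset_targets, N_way, K_shot, include_query=False, shuffle=True):
        # task-specific arguments instead of (sampler, batch_size, drop_last),
        # so Lightning's automatic re-instantiation fails on the 8-core TPU run
        self.dataset_targets = dataset_targets
        self.N_way = N_way
        self.K_shot = K_shot
        self.include_query = include_query
        self.shuffle = shuffle
```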

How can we fix this and run it on multiple TPU cores?
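One idea I had (not sure it is correct) is to stop Lightning from re-creating the sampler by turning off automatic distributed sampling, roughly as below (this assumes Lightning 2.x, where the Trainer flag is `use_distributed_sampler`). Would that work on TPU, or is there a better approach?

```python
import pytorch_lightning as pl

# Sketch of a possible workaround, not verified: keep the custom
# FewShotBatchSampler as-is and tell Lightning not to replace or
# re-create samplers for the 8 TPU processes (Lightning 2.x flag).
trainer = pl.Trainer(
    accelerator="tpu",
    devices=8,
    use_distributed_sampler=False,
)
```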