Hi, I built a DataLoader and wrapped it in a Lightning DataModule that loads precomputed features saved in pkl files. In the `__getitem__` function I load each sample with `pickle.load(filename)`, but I got this error:
*** RuntimeError: Caught RuntimeError in DataLoader worker process 2.
Original Traceback (most recent call last):
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py", line 316, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py", line 154, in collate
clone.update({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py", line 141, in collate
return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/szding/.conda/envs/v2sa2/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py", line 213, in collate_tensor_fn
return torch.stack(batch, 0, out=out)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat_out_out)
I checked, and after loading the file with pickle the data is on the CPU. I thought it would be mapped to CUDA automatically. What should I do? Thanks!
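For reference, here is a minimal sketch of my dataset's `__getitem__`. The class and attribute names are placeholders, but the pickle-loading logic is the same as in my real code:

```python
import pickle
from torch.utils.data import Dataset

class FeatureDataset(Dataset):
    """Loads precomputed feature dicts, one pkl file per sample."""

    def __init__(self, paths):
        # paths: list of .pkl file paths (placeholder names)
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Each file holds a dict of precomputed feature tensors.
        with open(self.paths[idx], "rb") as f:
            sample = pickle.load(f)
        return sample
```

When I inspect a loaded sample directly, e.g. `sample["feat"].device`, it reports `cpu`, which is why the `cuda:0` in the collate error confuses me.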