TypeError: cannot pickle 'module' object

Hi, I am trying to run the HTS-Audio-Transformer training notebook.

However, when I try to run trainer.fit(), I get the following error message:

TypeError: cannot pickle 'module' object

I have searched for a solution but could not find one. Does anyone here have an idea?

Below is the full error message:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\call.py:44, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     43         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 44     return trainer_fn(*args, **kwargs)
     46 except _TunerExitException:

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\trainer.py:581, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    575 ckpt_path = self._checkpoint_connector._select_ckpt_path(
    576     self.state.fn,
    577     ckpt_path,
    578     model_provided=True,
    579     model_connected=self.lightning_module is not None,
    580 )
--> 581 self._run(model, ckpt_path=ckpt_path)
    583 assert self.state.stopped

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\trainer.py:990, in Trainer._run(self, model, ckpt_path)
    987 # ----------------------------
    988 # RUN THE TRAINER
    989 # ----------------------------
--> 990 results = self._run_stage()
    992 # ----------------------------
    993 # POST-Training CLEAN UP
    994 # ----------------------------

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\trainer.py:1036, in Trainer._run_stage(self)
   1035 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1036     self.fit_loop.run()
   1037 return None

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fit_loop.py:194, in _FitLoop.run(self)
    193 def run(self) -> None:
--> 194     self.setup_data()
    195     if self.skip:

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fit_loop.py:258, in _FitLoop.setup_data(self)
    257 self._data_fetcher.setup(combined_loader)
--> 258 iter(self._data_fetcher)  # creates the iterator inside the fetcher
    259 max_batches = sized_len(combined_loader)

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fetchers.py:99, in _PrefetchDataFetcher.__iter__(self)
     98 def __iter__(self) -> "_PrefetchDataFetcher":
---> 99     super().__iter__()
    100     if self.length is not None:
    101         # ignore pre-fetching, it's not necessary

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fetchers.py:48, in _DataFetcher.__iter__(self)
     47 def __iter__(self) -> "_DataFetcher":
---> 48     self.iterator = iter(self.combined_loader)
     49     self.reset()

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\utilities\combined_loader.py:335, in CombinedLoader.__iter__(self)
    334 iterator = cls(self.flattened, self._limits)
--> 335 iter(iterator)
    336 self._iterator = iterator

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\utilities\combined_loader.py:87, in _MaxSizeCycle.__iter__(self)
     86 def __iter__(self) -> Self:
---> 87     super().__iter__()
     88     self._consumed = [False] * len(self.iterables)

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\utilities\combined_loader.py:40, in _ModeIterator.__iter__(self)
     39 def __iter__(self) -> Self:
---> 40     self.iterators = [iter(iterable) for iterable in self.iterables]
     41     self._idx = 0

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\utilities\combined_loader.py:40, in <listcomp>(.0)
     39 def __iter__(self) -> Self:
---> 40     self.iterators = [iter(iterable) for iterable in self.iterables]
     41     self._idx = 0

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\torch\utils\data\dataloader.py:438, in DataLoader.__iter__(self)
    437 else:
--> 438     return self._get_iterator()

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\torch\utils\data\dataloader.py:386, in DataLoader._get_iterator(self)
    385     self.check_worker_number_rationality()
--> 386     return _MultiProcessingDataLoaderIter(self)

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\torch\utils\data\dataloader.py:1039, in _MultiProcessingDataLoaderIter.__init__(self, loader)
   1033 # NB: Process.start() actually take some time as it needs to
   1034 # start a process and pass the arguments over via a pipe.
   1035 # Therefore, we only add a worker to self._workers list after
   1036 # it started, so that we do not call .join() if program dies
   1037 # before it starts, and __del__ tries to join but will get:
   1038 #     AssertionError: can only join a started process.
-> 1039 w.start()
   1040 self._index_queues.append(index_queue)

File c:\ProgramData\anaconda3\envs\htsat8\lib\multiprocessing\process.py:121, in BaseProcess.start(self)
    120 _cleanup()
--> 121 self._popen = self._Popen(self)
    122 self._sentinel = self._popen.sentinel

File c:\ProgramData\anaconda3\envs\htsat8\lib\multiprocessing\context.py:224, in Process._Popen(process_obj)
    222 @staticmethod
    223 def _Popen(process_obj):
--> 224     return _default_context.get_context().Process._Popen(process_obj)

File c:\ProgramData\anaconda3\envs\htsat8\lib\multiprocessing\context.py:327, in SpawnProcess._Popen(process_obj)
    326 from .popen_spawn_win32 import Popen
--> 327 return Popen(process_obj)

File c:\ProgramData\anaconda3\envs\htsat8\lib\multiprocessing\popen_spawn_win32.py:93, in Popen.__init__(self, process_obj)
     92     reduction.dump(prep_data, to_child)
---> 93     reduction.dump(process_obj, to_child)
     94 finally:

File c:\ProgramData\anaconda3\envs\htsat8\lib\multiprocessing\reduction.py:60, in dump(obj, file, protocol)
     59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)

TypeError: cannot pickle 'module' object

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
Cell In[11], line 3
      1 # Training the model
      2 # You can set different fold index by setting 'esc_fold' to any number from 0-4 in esc_config.py
----> 3 trainer.fit(model, audioset_data)

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\trainer.py:545, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    543 self.state.status = TrainerStatus.RUNNING
    544 self.training = True
--> 545 call._call_and_handle_interrupt(
    546     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    547 )

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\call.py:68, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     66 for logger in trainer.loggers:
     67     logger.finalize("failed")
---> 68 trainer._teardown()
     69 # teardown might access the stage so we reset it after
     70 trainer.state.stage = None

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\trainer\trainer.py:1017, in Trainer._teardown(self)
   1015 # loop should never be None here but it can because we don't know the trainer stage with ddp_spawn
   1016 if loop is not None:
-> 1017     loop.teardown()
   1018 self._logger_connector.teardown()
   1019 self._signal_connector.teardown()

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fit_loop.py:407, in _FitLoop.teardown(self)
    405 def teardown(self) -> None:
    406     if self._data_fetcher is not None:
--> 407         self._data_fetcher.teardown()
    408     self._data_fetcher = None
    409     self.epoch_loop.teardown()

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fetchers.py:75, in _DataFetcher.teardown(self)
     74 def teardown(self) -> None:
---> 75     self.reset()
     76     if self._combined_loader is not None:
     77         self._combined_loader.reset()

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fetchers.py:134, in _PrefetchDataFetcher.reset(self)
    133 def reset(self) -> None:
--> 134     super().reset()
    135     self.batches = []

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\loops\fetchers.py:71, in _DataFetcher.reset(self)
     69 # teardown calls reset(), and if it happens early, combined_loader can still be None
     70 if self._combined_loader is not None:
---> 71     self.length = sized_len(self.combined_loader)
     72     self.done = self.length == 0

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\lightning_fabric\utilities\data.py:51, in sized_len(dataloader)
     48 """Try to get the length of an object, return None otherwise."""
     49 try:
     50     # try getting the length
---> 51     length = len(dataloader)  # type: ignore [arg-type]
     52 except (TypeError, NotImplementedError):
     53     length = None

File c:\ProgramData\anaconda3\envs\htsat8\lib\site-packages\pytorch_lightning\utilities\combined_loader.py:342, in CombinedLoader.__len__(self)
    340 """Compute the number of batches."""
    341 if self._iterator is None:
--> 342     raise RuntimeError("Please call iter(combined_loader) first.")
    343 return len(self._iterator)

RuntimeError: Please call iter(combined_loader) first.
```
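For context: this is a Windows multiprocessing issue, not a bug in trainer.fit() itself. With num_workers > 0, the DataLoader starts worker processes using the spawn start method (the only one available on Windows), and spawn pickles the dataset to hand it to each worker. The error means something reachable from the dataset is a module object, which pickle cannot serialize; in this notebook the likely culprit is the imported esc_config module that gets stored on the dataset as its config. A minimal sketch that reproduces the failure (ToyDataset is hypothetical, not code from the notebook):

```python
import pickle
import types
import math  # any module object will do for the demonstration

class ToyDataset:
    """Stand-in for a dataset that keeps an imported module as an attribute."""
    def __init__(self, config: types.ModuleType):
        self.config = config  # storing a module is what breaks pickling

ds = ToyDataset(math)
try:
    pickle.dumps(ds)  # this is what spawn does when starting a worker
except TypeError as e:
    print(e)  # -> cannot pickle 'module' object
```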

SOLUTION: set num_workers to 0.

With num_workers=0 the DataLoader loads batches in the main process, so no worker processes are spawned, the dataset is never pickled, and the error goes away.
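Where num_workers is set depends on the notebook; it may be exposed in esc_config.py or passed wherever the DataLoader objects are built, so adjust it at that spot. A self-contained sketch of a fixed loader, with TensorDataset standing in for the notebook's ESC-50 dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for the notebook's ESC-50 dataset.
dataset = TensorDataset(torch.randn(8, 64), torch.zeros(8, dtype=torch.long))

# num_workers=0 keeps data loading in the main process: no worker
# processes are spawned, so the dataset never has to be pickled.
loader = DataLoader(dataset, batch_size=2, shuffle=True, num_workers=0)

for x, y in loader:
    print(x.shape, y.shape)
```

Note the trade-off: num_workers=0 disables parallel data loading and can slow training. The root-cause fix would be to make the dataset picklable, for example by copying the individual config values it needs onto plain attributes instead of storing the esc_config module itself.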