Hello! I wanted to ask about some behaviour I noticed while using PyTorch Lightning. I built a pipeline that generates sliding windows and performs other preprocessing on the initial dataset for forecasting models, using PyTorch Lightning together with the darts library.

I noticed that when I pass multiple devices to a new Trainer instance so that it runs in data-parallel mode, the Trainer spawns a new process per device. The issue is that each of these new processes goes through the entire pipeline again: initialization, window generation, and so on. Since the data was already preprocessed by the process that created the Trainer, and the model is already loaded (so essentially everything is ready to run), why do the new processes have to repeat all of that work? Shouldn't it be enough to create the new processes, send them the data and the model, and run in parallel?

Thanks in advance for any advice, and let me know what other information would be useful.
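In case it clarifies the question, here is a minimal, Lightning-free sketch of what I think is happening (the names are mine, not from Lightning or darts): with Python's `spawn` start method, each child process re-imports the main module, so any module-level "pipeline" code executes again in every worker:

```python
import multiprocessing as mp
import os

# Stand-in for my module-level "pipeline" work (window generation etc.).
# With the "spawn" start method, every child process re-imports this
# module, so this line runs once per process -- not just in the parent.
print(f"pipeline setup running in process {os.getpid()}")
PIPELINE_PID = os.getpid()

def worker(q):
    # In a spawned child, PIPELINE_PID equals the child's own pid,
    # because the child re-executed the module-level setup on import.
    q.put((os.getpid(), PIPELINE_PID))

def demo(num_workers=2):
    # Explicitly use "spawn" (which I believe is what Lightning's
    # ddp_spawn-style strategies rely on under the hood).
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(q,)) for _ in range(num_workers)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    for child_pid, setup_pid in demo():
        print(f"child {child_pid} saw setup done by {setup_pid}")
```

Each child prints the setup line again and reports its own pid as the one that ran the setup, which matches what I'm seeing: the preprocessing done in the parent is simply not visible to the spawned workers, and they redo it.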