Hello everyone! I am interested in training a bunch of weak classifiers in parallel on a single machine. The machine I am working on has 4 GPUs, and each weak classifier replica fits on a single GPU, so I am wondering whether there is a way in PyTorch Lightning to train each model on its own GPU, allowing me to train 4 replicas at a time.
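For context, one approach I have been considering (I am not sure it is the idiomatic Lightning way, so please correct me) is to launch one process per GPU and run a separate `Trainer` pinned to that device in each process. Below is a minimal sketch of that idea; `WeakClassifier` and the random dataset are placeholders for my actual model and data, and the `accelerator="gpu", devices=[...]` arguments assume a recent Lightning release (older versions used `gpus=[...]` instead):

```python
import torch
import torch.multiprocessing as mp
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class WeakClassifier(pl.LightningModule):
    """Tiny placeholder for one weak classifier replica."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)


def make_loader():
    # Random placeholder data; substitute the real dataset here.
    x = torch.randn(1024, 32)
    y = torch.randint(0, 2, (1024,))
    return DataLoader(TensorDataset(x, y), batch_size=64)


def train_replica(gpu_id: int):
    # Each process gets its own model and its own single-device Trainer.
    model = WeakClassifier()
    trainer = pl.Trainer(accelerator="gpu", devices=[gpu_id], max_epochs=5)
    trainer.fit(model, make_loader())


if __name__ == "__main__":
    mp.set_start_method("spawn")  # CUDA requires the spawn start method in subprocesses
    procs = [mp.Process(target=train_replica, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Is this a reasonable pattern, or does Lightning provide a built-in way to run independent trainers on different GPUs concurrently?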