Hello everyone! I am interested in training a bunch of weak classifiers in parallel on a single machine. The machine I am working on has 4 GPUs, and each weak classifier replica fits on a single GPU, so I am wondering whether there is a way in PyTorch Lightning to train each replica on its own GPU in parallel, which would let me train 4 replicas at a time. Something like the sketch below is what I have in mind.
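Here is a minimal sketch of the setup I'm imagining: one process per GPU, each running its own independent Trainer (assuming a recent Lightning release where `Trainer` accepts `accelerator`/`devices`; `TinyClassifier` and the random dataset are just placeholders for my actual weak classifiers and data).

```python
# Sketch only: launch 4 independent single-GPU trainings, one process per GPU.
# TinyClassifier and the random dataset are placeholders, not my real models/data.
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def train_replica(gpu_id: int):
    # dummy data; in practice each replica would get its own (e.g. bootstrapped) sample
    ds = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=[gpu_id],            # pin this replica to one GPU
        max_epochs=1,
        enable_progress_bar=False,   # avoid 4 interleaved progress bars
    )
    trainer.fit(TinyClassifier(), DataLoader(ds, batch_size=32))


if __name__ == "__main__":
    # one process per GPU; each trains its own replica with no gradient sync between them
    mp.spawn(train_replica, nprocs=4, join=True)
```

Is something like this the recommended approach, or does Lightning have a built-in way to run independent replicas across GPUs?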