Training multiple model replicas on different GPUs

Hello everyone! I am interested in training a bunch of weak classifiers in parallel on a single machine. The machine I am working on has 4 GPUs, and each weak classifier replica fits on a single GPU, so I am wondering whether there is a way in PyTorch Lightning to train each replica on its own GPU, allowing me to train 4 replicas at the same time. A sketch of what I have in mind is below.
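
To make the question concrete, here is a minimal, untested sketch of what I'm imagining: launch one process per GPU with `torch.multiprocessing`, and in each process run an independent single-device `Trainer` pinned to that GPU. `WeakClassifier`, `make_dataloader`, and `NUM_GPUS` are just placeholders for my own model and data code. Is this a reasonable way to do it, or is there a more idiomatic Lightning approach?

```python
# Sketch only: one independent Trainer per GPU, launched in separate processes.
import torch
import torch.multiprocessing as mp
import pytorch_lightning as pl
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

NUM_GPUS = 4  # the machine has 4 GPUs


class WeakClassifier(pl.LightningModule):
    """Toy stand-in for my actual weak classifier."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


def make_dataloader():
    # Random data just so the sketch runs end to end.
    x = torch.randn(1024, 32)
    y = torch.randint(0, 2, (1024,))
    return DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)


def train_replica(gpu_id: int):
    # Each process pins its Trainer to exactly one GPU (no DDP, since it's
    # a single-device Trainer), so the 4 replicas train independently.
    model = WeakClassifier()
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=[gpu_id],
        max_epochs=5,
        enable_progress_bar=(gpu_id == 0),  # avoid 4 interleaved progress bars
    )
    trainer.fit(model, make_dataloader())


if __name__ == "__main__":
    # One process per GPU -> 4 replicas training at the same time.
    mp.spawn(train_replica, nprocs=NUM_GPUS, join=True)
```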