How should I check for gpu availability?

I like to run the same script both on a development machine, which doesn’t have a GPU, and on a cluster, which does. A convenient check to know which kind of machine I am on, and hence which type of calculation to run (how many epochs, how large a data set), is to call torch.cuda.is_available().

However, apparently Lightning does not want me to call any torch.cuda.* functions, so what is another way to check whether I have a GPU or not?
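For context, one workaround I have considered (I'm not sure it is the idiomatic one) is probing the environment instead of touching torch.cuda at all; that `nvidia-smi` is on the PATH and that the scheduler exports CUDA_VISIBLE_DEVICES are assumptions about how the cluster is set up:

```python
import os
import shutil

def gpu_probably_available() -> bool:
    """Heuristic GPU check that never imports or initializes CUDA.

    Assumes the cluster either exports CUDA_VISIBLE_DEVICES (typical for
    SLURM-style schedulers) or puts nvidia-smi on the PATH.
    """
    if os.environ.get('CUDA_VISIBLE_DEVICES', '') not in ('', '-1'):
        return True
    return shutil.which('nvidia-smi') is not None

print('running on GPU' if gpu_probably_available() else 'running on CPU')
```

This only tells me a GPU is likely present; it says nothing about whether CUDA will actually work, so I would still prefer a Lightning-sanctioned check if one exists.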

Relevant code:


import torch

if torch.cuda.is_available():
    device = torch.device('cuda:1')
    print('running on GPU')
else:
    device = torch.device('cpu')
    print('running on CPU')

#%% set up trainer etc.
import pytorch_lightning as pl
from seqSleep_pytorchLightning_v4 import SeqSleepPL, testModel_rolling
from pytorch_lightning.callbacks.early_stopping import EarlyStopping


trainerKeys={'max_epochs':1,
             'deterministic':True,
             'logger':neptune_logger,
             'enable_progress_bar':True,
             'benchmark':True}

if device.type=='cuda':  # torch device types are 'cuda'/'cpu', never 'gpu'
    trainerKeys['accelerator']='gpu'
    trainerKeys['devices']=1
    trainerKeys['max_epochs']=30

trainer=pl.Trainer(**trainerKeys)
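To keep the question self-contained, the branching above can be collapsed into a small helper; `trainer_keys` is a hypothetical name I made up, and the logger is omitted so the sketch runs on its own (neptune_logger is defined elsewhere in my script):

```python
def trainer_keys(on_gpu: bool) -> dict:
    # Mirrors the trainerKeys dict above: short debug run on the CPU
    # machine, full 30-epoch run on a single GPU on the cluster.
    keys = {'max_epochs': 1,
            'deterministic': True,
            'enable_progress_bar': True,
            'benchmark': True}
    if on_gpu:
        keys.update({'accelerator': 'gpu', 'devices': 1, 'max_epochs': 30})
    return keys

# usage (in the real script):
# trainer = pl.Trainer(**trainer_keys(device.type == 'cuda'), logger=neptune_logger)
```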

Error:

RuntimeError: Lightning can’t create new processes if CUDA is already initialized. Did you manually call torch.cuda.* functions, have moved the model to the device, or allocated memory on the GPU any other way? Please remove any such calls, or change the selected strategy. You will have to restart the Python kernel.