CPU multithreading

I was wondering whether the PyTorch Lightning `Trainer` offers a way to set the number of threads for intra-op parallelism. In plain PyTorch this can be done with `torch.set_num_threads()`, which I believe is different from specifying the number of devices in the `Trainer`. When I tried calling `torch.set_num_threads()` with Lightning, it seemed to have no effect at all. In my experience this gave a huge speedup in my trainings with plain PyTorch, so I was wondering whether the same is possible in PyTorch Lightning.
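
For context, this is roughly what I had in mind (a minimal sketch; `TinyModel`, the thread count, and the random data are just placeholders). My understanding is that the call has to happen in every process that actually runs the model, so with spawning strategies it may need to be repeated inside a hook like `setup`:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

NUM_THREADS = 8  # placeholder; tune to your CPU core count
torch.set_num_threads(NUM_THREADS)  # set in the main process before training

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)

    def setup(self, stage=None):
        # Re-apply inside each worker process: strategies that spawn new
        # processes (e.g. ddp_spawn) won't inherit the main-process setting.
        torch.set_num_threads(NUM_THREADS)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

# Dummy data just to make the sketch runnable
x, y = torch.randn(256, 32), torch.randn(256, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=64)

trainer = pl.Trainer(accelerator="cpu", devices=1, max_epochs=1)
trainer.fit(TinyModel(), loader)
```

Even with something like this, `torch.get_num_threads()` didn't seem to change the training speed the way it does in plain PyTorch, so I'm not sure if the setting is being overridden somewhere.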

Thank you very much! :slight_smile: