Cache/store transformed inputs to speed up data loading

Hey there,

The inputs to my dataloader are audio files (.wav) that go through an STFT and other transformations during preprocessing. The resulting spectrograms are then fed into the network.

Instead of recomputing the STFT and the same transformations at every epoch, I would like to cache the transformed inputs either in RAM or on disk. What is a more efficient way of doing this in PyTorch and with Lightning?
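For the RAM option, the simplest thing I've tried is memoizing inside the `Dataset` itself, so each item is transformed on first access and served from a dict afterwards. This is only a sketch: the class name and parameters are placeholders, and the waveform loader here just synthesizes random audio so the snippet is self-contained (in real code it would be something like `torchaudio.load(path)`).

```python
import torch
from torch.utils.data import Dataset


class CachedSpectrogramDataset(Dataset):
    """Compute the STFT once per file and keep the result in RAM."""

    def __init__(self, audio_paths, n_fft=1024, hop_length=256):
        self.audio_paths = audio_paths
        self.n_fft = n_fft
        self.hop_length = hop_length
        self._cache = {}  # index -> cached spectrogram tensor

    def _load_waveform(self, path):
        # Placeholder: synthesize 1 s of fake audio at 16 kHz.
        # In real code, replace with e.g. torchaudio.load(path).
        g = torch.Generator().manual_seed(hash(path) % (2**31))
        return torch.randn(16000, generator=g)

    def __getitem__(self, idx):
        if idx not in self._cache:
            waveform = self._load_waveform(self.audio_paths[idx])
            spec = torch.stft(
                waveform,
                n_fft=self.n_fft,
                hop_length=self.hop_length,
                window=torch.hann_window(self.n_fft),
                return_complex=True,
            ).abs()  # magnitude spectrogram, shape (n_fft // 2 + 1, frames)
            self._cache[idx] = spec
        return self._cache[idx]

    def __len__(self):
        return len(self.audio_paths)
```

One caveat: with `num_workers > 0`, each worker process gets its own copy of the dataset, so a plain dict cache is duplicated per worker rather than shared.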

I have seen people cache these transformed inputs as NumPy arrays, though I don't know whether that is slower than saving them as tensors. Others use TFRecord files with a custom PyTorch dataloader. (I would like to use Lightning's dataloader if possible.)

Many thanks in advance!