Storage type used on Multi-GPU machines?

I’m curious about the type of storage used in your GPU machines. Specifically, does the storage differ on the more advanced GPU setups?

The reason I ask is that I’ve noticed slower training times on your multi-GPU machines (even with 8x H100 GPUs) compared to my own setup. My suspicion is that the bottleneck might be I/O, i.e. moving data from storage to the GPUs.
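
For reference, this is the kind of quick check I’ve been running on both machines to compare raw read throughput from disk (a minimal sketch; the path is just a placeholder for one of my training files, and note the OS page cache can inflate the number on repeat runs):

```python
import time

# Hypothetical path to one of my training files; adjust to your dataset.
PATH = "/data/train_shard_000.bin"
CHUNK = 64 * 1024 * 1024  # read in 64 MiB chunks

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
elapsed = time.perf_counter() - start
print(f"read {total / 1e9:.2f} GB in {elapsed:.2f}s "
      f"-> {total / 1e9 / elapsed:.2f} GB/s")
```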

On my local machine I’m using an RTX 4070 with an SSD, which gives me fast data loading. Do your multi-GPU machines use similarly high-speed storage?

In the meantime, I guess I can try just loading the whole dataset into RAM.
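
Something like this is what I had in mind: a minimal sketch that wraps my existing map-style PyTorch dataset and materializes everything in memory up front (the class and variable names are just illustrative):

```python
from torch.utils.data import Dataset

class InMemoryDataset(Dataset):
    """Wraps another map-style dataset and caches every item in RAM,
    so training epochs never touch storage after the first pass."""

    def __init__(self, base: Dataset):
        # Eagerly load all samples once; slow up front, fast afterwards.
        self.items = [base[i] for i in range(len(base))]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

# usage (base_dataset is whatever Dataset I normally pass to the DataLoader):
# cached = InMemoryDataset(base_dataset)
# loader = DataLoader(cached, batch_size=..., num_workers=..., pin_memory=True)
```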