Computing model output over dataset

I couldn’t find the source code of the `write_prediction` method.

  1. Assuming it writes the data to disk on every call, without any asynchronous I/O, isn’t that inefficient?
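
To illustrate what I mean by asynchronous writing, here is a rough sketch of a non-blocking writer that buffers predictions in a queue and flushes them from a background thread. Everything here (the class name, the JSONL format) is made up for illustration; I don’t know how `write_prediction` is actually implemented:

```python
import json
import queue
import threading

class AsyncPredictionWriter:
    """Hypothetical sketch: buffer predictions in a queue and write them
    from a background thread, so disk I/O never blocks the prediction loop."""

    def __init__(self, path):
        self._queue = queue.Queue()
        self._file = open(path, "w")
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def write_prediction(self, prediction):
        # Returns immediately; the background thread does the actual write.
        self._queue.put(prediction)

    def _drain(self):
        # Runs in the background thread: pop items and write them in order.
        while True:
            item = self._queue.get()
            if item is None:  # sentinel: shut down
                break
            self._file.write(json.dumps(item) + "\n")

    def close(self):
        # Signal the thread to stop, wait for it, then close the file.
        self._queue.put(None)
        self._thread.join()
        self._file.close()
```

A synchronous implementation would instead pay the write latency inside every call, which is the overhead I’m asking about.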

I’m using @will’s code, and I found a hack that gets my job done. However, I was wondering, in case I want to accumulate predictions instead:

  1. Can I gather them into an attribute of the Trainer instance, i.e. on `self`? Are there any caveats with that approach, particularly in a distributed setup?
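
For concreteness, this is roughly the pattern I have in mind (a minimal sketch; `MyTrainer`, `prediction_step`, and `predict` are illustrative stand-ins, not the real Trainer API), with a comment marking the distributed caveat I’m worried about:

```python
class MyTrainer:
    """Hypothetical trainer that accumulates predictions on self."""

    def __init__(self):
        # Accumulator lives on the instance, so it survives across steps.
        self.accumulated_predictions = []

    def prediction_step(self, batch):
        # Stand-in for the model forward pass.
        preds = [x * 2 for x in batch]
        # Caveat: in a distributed run, each process only sees its own data
        # shard, so this list holds a partial view per rank; you would need
        # to gather across ranks (e.g. torch.distributed.all_gather_object)
        # before treating it as the full prediction set.
        self.accumulated_predictions.extend(preds)
        return preds

    def predict(self, dataloader):
        self.accumulated_predictions.clear()  # avoid stale results across runs
        for batch in dataloader:
            self.prediction_step(batch)
        return self.accumulated_predictions
```

Usage would look like `MyTrainer().predict([[1, 2], [3]])`, which returns `[2, 4, 6]` in a single-process run; under distribution, each rank would return only its shard.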