Hello,
Seeking advice on extending the functionality of the pl.Trainer object.
We are currently building an open-source framework that relies heavily on PyTorch Lightning for its default pipelines, such as fit and test. However, some of our models are trained with raw PyTorch using an Active Learning approach.
We want to port our pipeline to PyTorch Lightning, but it is not clear how to add extra functionality to the Trainer class, such as adding samples to the dataset or reloading datamodules. We also want to call various functions at different points during training (e.g., the training pipeline in GOLF: [2311.06295] Gradual Optimization Learning for Conformational Energy Minimization).
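To make the requirement concrete, here is a minimal sketch of the kind of loop we currently run in raw PyTorch, reduced to plain Python with a stand-in scoring function (all names here are hypothetical, not from our actual codebase): train on the labeled pool, score the unlabeled pool, move the most informative samples into the labeled pool, and repeat.

```python
def acquisition_score(sample):
    # Stand-in for a model-based uncertainty estimate:
    # samples closest to the decision boundary (0.5) score lowest.
    return abs(sample - 0.5)

def active_learning_loop(labeled, unlabeled, rounds=3, k=2):
    for _ in range(rounds):
        # 1. (Re)train the model on the current labeled pool -- this is
        #    the part we would like Lightning's `fit` to handle.
        # train(model, labeled)

        # 2. Score the unlabeled pool and pick the k most informative
        #    samples -- this "samples addition" step has no obvious home
        #    in a vanilla pl.Trainer run.
        ranked = sorted(unlabeled, key=acquisition_score)
        picked, unlabeled = ranked[:k], ranked[k:]
        labeled = labeled + picked
    return labeled, unlabeled

labeled, unlabeled = active_learning_loop(
    [0.1, 0.9], [0.2, 0.45, 0.5, 0.55, 0.8], rounds=2, k=2
)
print(len(labeled), len(unlabeled))  # -> 6 1
```

The open question is where steps like the pool update and the datamodule reload should live when `fit` is driven by a pl.Trainer.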
We want to keep our library compatible with future releases of PyTorch Lightning. What path for adding functionality to the vanilla pl.Trainer should we choose without resorting to inheritance?
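To illustrate the shape of the extension mechanism we are after, here is a toy sketch in plain Python (hypothetical names, no Lightning dependency): extra behavior is attached to the trainer through hook objects rather than through subclassing, which we believe is the same idea as Lightning's Callback system.

```python
class ActiveLearningHook:
    """Extra behavior attached to the trainer without subclassing it."""

    def on_epoch_end(self, trainer):
        # In the real pipeline this is where we would query new samples
        # and trigger a datamodule reload.
        trainer.log.append("queried new samples")

class ToyTrainer:
    """Stand-in for a trainer that invokes registered hooks."""

    def __init__(self, hooks):
        self.hooks = hooks
        self.log = []

    def fit(self, epochs=2):
        for _ in range(epochs):
            self.log.append("trained epoch")
            # Call each hook at a fixed point in the training loop.
            for hook in self.hooks:
                hook.on_epoch_end(self)

trainer = ToyTrainer(hooks=[ActiveLearningHook()])
trainer.fit(epochs=2)
print(trainer.log)
```

Is this hook/callback route the recommended way to cover our use cases, or is there a better-supported extension point?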