`.detach()` cannot stop backprop in `training_step`

There isn't a flag for this, but you can override `toggle_optimizer` with a no-op:

```python
class LITModule(LightningModule):
    def toggle_optimizer(self, *args, **kwargs):
        pass
```

to disable this behavior.

Or maybe try manual optimization (see the Lightning-AI/lightning repository on GitHub).

You can read more about it in the docs. ^^