Adversarial training with Lightning

I am trying to implement adversarial training in PyTorch Lightning and have run into a problem. In adversarial training, adversarial samples are first generated by an adversarial attack and then used for training. Both clean and adversarial samples are also used during validation and testing.
Since gradients are required to compute adversarial samples, I enable gradients during validation and testing as well. This works fine during training (trainer.fit(model)), but fails during testing (trainer.test(model)) with the error below:
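For reference, here is a minimal FGSM-style attack (a hypothetical, simplified sketch, not the exact code from my module) showing why gradients are needed at evaluation time; the torch.enable_grad() block is the part that fails under Lightning's default inference mode during testing:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Generate FGSM adversarial samples: x + eps * sign(grad_x loss)."""
    # The attack needs gradients w.r.t. the input, so re-enable grad mode
    # locally. This works under torch.no_grad(), but NOT under
    # torch.inference_mode(), where tensors can never join the autograd graph.
    with torch.enable_grad():
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()
```

Calling this from validation_step/test_step works while training (and under plain torch.no_grad()), but raises a RuntimeError inside torch.inference_mode(), which is what Lightning wraps evaluation in by default.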

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Looking at similar issues in Lightning, the suggested solutions were to enable gradients or switch to manual optimization, but since trainer.fit works fine, I don't understand why manual optimization would be necessary. It would be helpful if Lightning handled this use case for adversarial machine learning (adversarial attacks and defenses).

Having reviewed several responses from different sources, I believe that passing inference_mode=False to the Trainer is the cleanest way to resolve this issue. With it, the validation and test phases run under torch.no_grad() instead of torch.inference_mode(), so gradients can be re-enabled where the attack needs them. The results are unchanged; the only cost is slightly slower evaluation compared with the default inference_mode=True.
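A minimal illustration of why this works, in pure PyTorch (the Lightning lines at the end are a commented sketch of the Trainer API as I understand it in recent versions):

```python
import torch

# Under torch.no_grad() -- what Lightning uses for evaluation when
# inference_mode=False -- gradient tracking can be re-enabled locally:
x = torch.randn(3)
with torch.no_grad():
    with torch.enable_grad():
        z = x.clone().requires_grad_(True)
        (z ** 2).sum().backward()
# z.grad is populated, so an attack can compute input gradients here.

# Under torch.inference_mode() -- Lightning's default for validate/test --
# the same pattern raises the RuntimeError from the question, because
# inference tensors can never participate in autograd.

# The fix on the Lightning side (sketch, assuming a version of Lightning
# that exposes the inference_mode flag on the Trainer):
# trainer = pl.Trainer(inference_mode=False)
# trainer.test(model)
```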
Additionally, you can follow adversarial training and adversarial evaluation with Lightning and Hydra in the Robustness-Framework at https://github.com/khalooei/robustness-framework. I will keep it updated as things change, and I would be happy to hear from you if you would like to contribute. I would also like to thank my friend Sergedurand, who took the time to discuss this issue, as well as the Lightning team.