Train head for 2 epochs, unfreeze, run learning rate finder, continue training (fit_one_cycle)

@s-rog Thank you! That does indeed seem to solve the epoch logging issue.

Now I’m trying to invoke `trainer.tune(model)` after training the head for the first 2 epochs.
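
For reference, this is roughly the sequence I have in mind (a minimal sketch; `LitModel`, `MyDataModule`, and the `backbone` attribute are placeholders for my actual code in the Colab, not the exact implementation):

```python
import pytorch_lightning as pl

# Placeholders: a LightningModule with a frozen backbone and a trainable head,
# plus a LightningDataModule providing the dataloaders.
model = LitModel()
dm = MyDataModule()

# Stage 1: train only the new head for two epochs.
trainer = pl.Trainer(max_epochs=2)
trainer.fit(model, datamodule=dm)

# Stage 2: unfreeze the backbone, then run the learning rate finder.
for p in model.backbone.parameters():   # assumes the module exposes `backbone`
    p.requires_grad = True

tuner_trainer = pl.Trainer(auto_lr_find=True, max_epochs=10)
tuner_trainer.tune(model, datamodule=dm)   # this is the call that fails below;
                                           # on success it writes the suggested LR
                                           # back to model.lr / model.hparams.lr

# Stage 3: continue training with the suggested learning rate
# (one-cycle schedule configured inside configure_optimizers).
tuner_trainer.fit(model, datamodule=dm)
```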
However, this fails:

LR finder stopped early due to diverging loss.
Failed to compute suggesting for `lr`. There might not be enough points.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/tuner/lr_finder.py", line 340, in suggestion
    min_grad = np.gradient(loss).argmin()
  File "<__array_function__ internals>", line 6, in gradient
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/function_base.py", line 1042, in gradient
    "Shape of array too small to calculate a numerical gradient, "
ValueError: Shape of array too small to calculate a numerical gradient, at least (edge_order + 1) elements are required.

Colab showing this issue: Google Colab