Using Hydra + DDP

Just to give a high-level overview: because DDP launches a separate process for each GPU, certain tasks should not be executed on all processes, to avoid errors such as reading from/writing to the same file. So we only perform them on the rank 0 process.
In PL there is a rank_zero_only utility that can be used as a decorator. Any method with this decorator will only execute on the rank 0 process:

from argparse import Namespace
from typing import Any, Dict, Optional, Union

from pytorch_lightning.utilities import rank_zero_only

...
    @rank_zero_only
    def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],
                        metrics: Optional[Dict[str, Any]] = None) -> None:

You could also just check for rank 0 yourself:

import os

# Global rank 0 is the 0th process on the 0th node
if int(os.environ.get('LOCAL_RANK', 0)) == 0 and int(os.environ.get('NODE_RANK', 0)) == 0:
    <do something>

If you are using a callback for such a task, the callback will generally apply the @rank_zero_only decorator and perform the task during the setup or pre-train stage, as in the sketch below.
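
For illustration, a minimal sketch of such a callback (the SaveRunInfoCallback name and the file it writes are placeholders I made up, not part of PL's API):

import os

from pytorch_lightning import Callback
from pytorch_lightning.utilities import rank_zero_only


class SaveRunInfoCallback(Callback):
    """Hypothetical callback that writes a small file once per run."""

    def __init__(self, out_dir: str):
        self.out_dir = out_dir

    @rank_zero_only
    def setup(self, trainer, pl_module, stage=None):
        # Runs only on the global rank 0 process, so the file is written once
        # instead of by every DDP process.
        os.makedirs(self.out_dir, exist_ok=True)
        with open(os.path.join(self.out_dir, "run_info.txt"), "w") as f:
            f.write(type(pl_module).__name__ + "\n")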

Anyway, it looks like your error might be an actual bug? If so, feel free to refer to my repo for a temporary alternative (set the Hydra run dir to the current directory, create a new logging dir for this run manually, and pass that directory to the ModelCheckpoint callback).
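
Roughly, that workaround looks something like the sketch below. This is only an illustration under assumptions, not the exact contents of my repo: the config keys, the RUN_ID env-var trick for keeping the directory name identical across DDP processes, and the "logs" folder are all made up. The Hydra run dir is pinned with an override such as hydra.run.dir=. on the command line or in the config.

import os
from datetime import datetime

import hydra
from omegaconf import DictConfig
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint


@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # With hydra.run.dir=. Hydra no longer changes the working directory,
    # so every DDP process resolves the same relative paths.
    # DDP children re-execute this script; storing the run id in an env var
    # (hypothetical trick) makes them reuse the directory chosen by rank 0.
    run_id = os.environ.setdefault("RUN_ID", datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
    log_dir = os.path.join("logs", run_id)
    os.makedirs(log_dir, exist_ok=True)

    # Pass the manually created directory to ModelCheckpoint.
    checkpoint_cb = ModelCheckpoint(dirpath=log_dir)
    trainer = Trainer(gpus=cfg.gpus, accelerator="ddp", callbacks=[checkpoint_cb])
    # trainer.fit(model, datamodule)  # model/datamodule construction omitted


if __name__ == "__main__":
    main()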