log_save_interval and row_log_interval

What do the log_save_interval and row_log_interval Trainer flags do?
I noticed that TensorBoard logging happens every row_log_interval steps (default 50), while wandb logging doesn’t seem to.

GH issue: Controlling the global step in TrainResult.log and EvalResult.log · Issue #3339 · Lightning-AI/lightning · GitHub

Why not for wandb?
I’m looking at the trainer code, and the interval is applied to whatever logger is used:

should_log_metrics = (batch_idx + 1) % self.row_log_interval == 0 or self.should_stop
if should_log_metrics or self.fast_dev_run:
    # logs user requested information to logger
    metrics = batch_output.batch_log_metrics
    grad_norm_dic = batch_output.grad_norm_dic
    if len(metrics) > 0 or len(grad_norm_dic) > 0:
        self.log_metrics(metrics, grad_norm_dic)

row_log_interval is how many steps are skipped before a new row is added to the summary.
log_save_interval is the interval at which this summary is actually written to disk. This is the slowest part, and it determines how often you can “refresh” your TensorBoard.

log_save_interval may not apply to all loggers, since some do their own thing, like sending the data to the cloud.
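
For example, a minimal sketch of how the two flags combine (model here is a placeholder for any LightningModule; the flag names match the Lightning version discussed in this thread):

from pytorch_lightning import Trainer

# Append a new row of metrics every 20 training steps, but only
# flush the accumulated rows to the event file on disk every 1000 steps.
trainer = Trainer(
    row_log_interval=20,
    log_save_interval=1000,
)
trainer.fit(model)  # model: any LightningModule (placeholder)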

By “summary” you mean that the logs from previous steps are stored, and at log_save_interval they are all saved to disk, clearing up the summary state of the TensorBoardLogger? So with this logger, if I set row_log_interval=20 and log_save_interval=1000 for a total of 100 batches, it won’t create any logs, right?

Yes, row_log_interval applies to all loggers.
I had changed the x-axis to step in the wandb plot, so it showed 0, 1, 2, …

So I misunderstood it.
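
That matches wandb’s default behavior: its internal step counter advances by one on every log call unless an explicit step is passed, so metrics logged every 20 trainer steps still appear at 0, 1, 2, … on the default x-axis. A minimal standalone sketch (the project name and metric are placeholders):

import wandb

run = wandb.init(project="demo")  # placeholder project name
for global_step in range(0, 100, 20):  # one metrics row every 20 trainer steps
    # Each log() call bumps wandb's internal "Step" by 1, so the default
    # x-axis reads 0, 1, 2, ... even though trainer steps are 0, 20, 40, ...
    run.log({"loss": 0.1, "trainer_step": global_step})
run.finish()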