Weird behavior in Lightning logging

I’m using PyCharm 2022.3.2 on Windows, with an Anaconda environment, Python 3.9, PyTorch 1.13.1, and PyTorch Lightning 1.9.0.

I ported my previous PyTorch code to PyTorch Lightning last night; the code sets up the logging module like this:

(image1, please see my reply)
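In case the screenshot doesn’t load, the setup is roughly along these lines (a minimal sketch only; the format string and file name here are placeholders, not my exact code):

```python
import logging

# Global logging setup: send everything at INFO and above to the console
# and to a log file, using the default "LEVEL:name:message" style format.
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s:%(name)s:%(message)s",
    handlers=[
        logging.StreamHandler(),           # console output
        logging.FileHandler("train.log"),  # the file I save the logs to
    ],
)

# My own logger, which produces the INFO:server_logger:... lines below.
server_logger = logging.getLogger("server_logger")
server_logger.info("dataset configurations:")
```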

Normally, when I run the PyTorch Lightning trainer.fit, it gives me console output like this:

(image2)

which means Lightning’s output goes through the logging module (the INFO:pytorch_lightning lines), and this is what I want, because I save those logs to a file.

But today, when I ran the code again, the console looked like this:

(image3)

which means the console output no longer follows the global logging settings.

I don’t import the logging module in any other Python files, so the settings should be the same as in the working code; the only thing I did today was add a new file.

I’ve tried deleting the new file, searching on Google, and creating a brand-new Trainer, but none of that works. There is only one documentation page about logging (Console logging — PyTorch Lightning 1.9.0 documentation (pytorch-lightning.readthedocs.io)); I tried changing the level as described there, but it didn’t work. I’ve also checked the PyTorch Lightning source code and found that rank_zero does use the logging module, so it should follow the global logging settings, and since this code worked before, I don’t think my environment is broken.
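For reference, what I tried from that docs page was roughly this (adapted from the Console logging example in the 1.9.0 docs; the level and the file name are just what I experimented with):

```python
import logging

# configure logging at the root level of Lightning (from the docs example)
logging.getLogger("pytorch_lightning").setLevel(logging.INFO)

# the docs also show redirecting a Lightning sub-logger to a file
pl_core_logger = logging.getLogger("pytorch_lightning.core")
pl_core_logger.addHandler(logging.FileHandler("core.log"))
```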

Since I can’t fix it, I’m posting here. The normal log and the wrong log are attached below. Please help!

################# wrong log #######################

INFO:server_logger:dataset configurations:
pt_path : ./dataset/
split_method : shift
dataset_type : clip
sequence_length : 32
batch_size : 16
samples_per_epoch : 2048
visual_shape : (2, 1, 256, 256)
vector_shape : 726
action_shape : 2

INFO:server_logger:policy configurations:
encoder_model : clip_vit
feature_size : 512

INFO:server_logger:pre rl configurations:
pretrain_module : bc
lr : 5e-05
max_eval_steps : 10000
bc_epoch : 25

INFO:server_logger:global common configurations:
device : cuda:0
log_level : info
INFO:server_logger:create the ea-policy model
INFO:server_logger:trainable parameters: 18113282
INFO:server_logger:loading time: 0.09628369999999986
INFO:server_logger:new_lr: 1.6788040181225603e-06
INFO:server_logger:loading time: 0.09723110000000013

################# normal log #######################
INFO:server_logger:dataset configurations:
pt_path : ./dataset/
split_method : shift
dataset_type : clip
sequence_length : 32
batch_size : 16
samples_per_epoch : 2048
visual_shape : (2, 1, 256, 256)
vector_shape : 726
action_shape : 2

INFO:server_logger:policy configurations:
feature_size : 512

INFO:server_logger:pre rl configurations:
pretrain_module : bc
lr : 5e-05
max_eval_steps : 10000
bc_epoch : 25

INFO:server_logger:global common configurations:
device : cuda:0
log_level : info
INFO:server_logger:create the ea-policy model
INFO:server_logger:trainable parameters: 18113282
INFO:pytorch_lightning.utilities.rank_zero:Trainer already configured with model summary callbacks: [<class 'pytorch_lightning.callbacks.model_summary.ModelSummary'>]. Skipping setting a default ModelSummary callback.
INFO:pytorch_lightning.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:server_logger:loading time: 0.016550400000000742
INFO:pytorch_lightning.utilities.rank_zero:Trainer.fit stopped: max_steps=200 reached.
INFO:server_logger:new_lr: 9.549925860214358e-07
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:pytorch_lightning.callbacks.model_summary:
| Name | Type | Params

0 | policy | SSILPolicy | 18.1 M
1 | policy.feature_encoder | SSILEncoder | 18.1 M
2 | policy.feature_encoder.model | ClipVisualTransformer | 18.1 M
3 | policy.feature_encoder.model.projection | Linear | 590 K
4 | policy.feature_encoder.model.dropout | Dropout | 0
5 | policy.feature_encoder.model.encoder_layer | TransformerEncoderLayer | 5.5 M
6 | policy.feature_encoder.model.encoder_layer.self_attn | MultiheadAttention | 2.4 M
7 | policy.feature_encoder.model.encoder_layer.self_attn.out_proj | NonDynamicallyQuantizableLinear | 590 K
8 | policy.feature_encoder.model.encoder_layer.linear1 | Linear | 1.6 M
9 | policy.feature_encoder.model.encoder_layer.dropout | Dropout | 0
10 | policy.feature_encoder.model.encoder_layer.linear2 | Linear | 1.6 M
11 | policy.feature_encoder.model.encoder_layer.norm1 | LayerNorm | 1.5 K
12 | policy.feature_encoder.model.encoder_layer.norm2 | LayerNorm | 1.5 K
13 | policy.feature_encoder.model.encoder_layer.dropout1 | Dropout | 0
14 | policy.feature_encoder.model.encoder_layer.dropout2 | Dropout | 0
15 | policy.feature_encoder.model.encoder | TransformerEncoder | 11.0 M
16 | policy.feature_encoder.model.encoder.layers | ModuleList | 11.0 M
17 | policy.feature_encoder.model.encoder.layers.0 | TransformerEncoderLayer | 5.5 M
18 | policy.feature_encoder.model.encoder.layers.0.self_attn | MultiheadAttention | 2.4 M
19 | policy.feature_encoder.model.encoder.layers.0.self_attn.out_proj | NonDynamicallyQuantizableLinear | 590 K
20 | policy.feature_encoder.model.encoder.layers.0.linear1 | Linear | 1.6 M
21 | policy.feature_encoder.model.encoder.layers.0.dropout | Dropout | 0
22 | policy.feature_encoder.model.encoder.layers.0.linear2 | Linear | 1.6 M
23 | policy.feature_encoder.model.encoder.layers.0.norm1 | LayerNorm | 1.5 K
24 | policy.feature_encoder.model.encoder.layers.0.norm2 | LayerNorm | 1.5 K
25 | policy.feature_encoder.model.encoder.layers.0.dropout1 | Dropout | 0
26 | policy.feature_encoder.model.encoder.layers.0.dropout2 | Dropout | 0
27 | policy.feature_encoder.model.encoder.layers.1 | TransformerEncoderLayer | 5.5 M
28 | policy.feature_encoder.model.encoder.layers.1.self_attn | MultiheadAttention | 2.4 M
29 | policy.feature_encoder.model.encoder.layers.1.self_attn.out_proj | NonDynamicallyQuantizableLinear | 590 K
30 | policy.feature_encoder.model.encoder.layers.1.linear1 | Linear | 1.6 M
31 | policy.feature_encoder.model.encoder.layers.1.dropout | Dropout | 0
32 | policy.feature_encoder.model.encoder.layers.1.linear2 | Linear | 1.6 M
33 | policy.feature_encoder.model.encoder.layers.1.norm1 | LayerNorm | 1.5 K
34 | policy.feature_encoder.model.encoder.layers.1.norm2 | LayerNorm | 1.5 K
35 | policy.feature_encoder.model.encoder.layers.1.dropout1 | Dropout | 0
36 | policy.feature_encoder.model.encoder.layers.1.dropout2 | Dropout | 0
37 | policy.feature_encoder.model.vector_head | Linear | 558 K
38 | policy.feature_encoder.model.extract_layer | Sequential | 393 K
39 | policy.feature_encoder.model.extract_layer.0 | Linear | 393 K
40 | policy.act_decoder | SSILDecoder | 1.0 K
41 | policy.act_decoder.decoder | Linear | 1.0 K
42 | loss_function | MSELoss | 0

18.1 M Trainable params
0 Non-trainable params
18.1 M Total params
72.453 Total estimated model params size (MB)
INFO:server_logger:loading time: 0.016168699999999703
INFO:pytorch_lightning.utilities.rank_zero:Trainer.fit stopped: max_epochs=25 reached.

image1: (screenshot of my global logging setup)

image2: (screenshot of the normal console output)

image3: (screenshot of the wrong console output)