How to update the dataloader every epoch? train_dataloader() is just called once

Hello! As mentioned in the title, I am trying to create a new train dataloader every epoch. I followed the solutions suggested in the forum, like: Changing Datamodule during training - #2 by jirka.

I did the same thing and set reload_dataloaders_every_n_epochs to 1, but the train_dataloader() function in my datamodule class isn't called every epoch: the print statement inside it only fires once. I'm trying to modify someone else's code, which only works with version 1.4.6.
Then I found that lightning==1.4.6 apparently does nothing even if I set reload_dataloaders_every_n_epochs to 1. Looking at pytorch_lightning/trainer/connectors/data_connector.py, all it does is:

self.trainer.reload_dataloaders_every_n_epochs = reload_dataloaders_every_n_epochs
self.trainer._is_data_prepared = False

inside the on_trainer_init() function.
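
For context, here is roughly how everything is wired together (a minimal sketch: MyModel stands in for my actual LightningModule, and the Trainer arguments are illustrative):

import pytorch_lightning as pl

datamodule = Data()  # the LightningDataModule shown below
model = MyModel()    # placeholder for my actual LightningModule
trainer = pl.Trainer(
    max_epochs=10,
    reload_dataloaders_every_n_epochs=1,  # expect a new train loader every epoch
    callbacks=[UpdateTrainloader()],      # the callback shown below
)
trainer.fit(model, datamodule=datamodule)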

So I have two questions: does reload_dataloaders_every_n_epochs really work in 1.4.6? And if so, how can I update the train_dataloader every epoch? My code looks like this:

import copy
import pytorch_lightning as pl

class UpdateTrainloader(pl.Callback):
    def on_train_epoch_end(self, trainer, pl_module):
        super().on_train_epoch_end(trainer, pl_module)
        datamodule = trainer.model.datamodule
        # sequence_unique_counts is a statistic I added to collect the used data indices
        used_samples_unique = pl_module.sequence_unique_counts.keys()
        # train_subset holds the initial data indices
        available_samples_unique = copy.deepcopy(datamodule.train_subset)
        for i in used_samples_unique:
            available_samples_unique.remove(i)
        datamodule.update_train_dataloader(available_samples_unique)

from torch.utils.data import DataLoader, Subset

class Data(pl.LightningDataModule):
    def update_train_dataloader(self, updated_train_indices):
        # whole_dataset is the full dataset, defined elsewhere in my code
        self.indices_train = Subset(whole_dataset, updated_train_indices)

    def train_dataloader(self):
        print("Creating the train loader...")
        return DataLoader(
            self.indices_train,
            batch_size=self.batch_size,
            shuffle=self.shuffle,
            num_workers=self.num_workers,
            pin_memory=self.pin_memory,
        )
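
In case the flag really is a no-op in 1.4.6, a workaround I am considering is to force the reload myself from the callback. This is only a sketch, and it assumes trainer.reset_train_dataloader() behaves in 1.4.6 the way it does in later 1.x releases:

import pytorch_lightning as pl

class ForceReloadTrainloader(pl.Callback):
    def on_train_epoch_end(self, trainer, pl_module):
        # ... update the datamodule's indices as in UpdateTrainloader above ...
        # Then ask the trainer to rebuild the train loader right away instead
        # of waiting for reload_dataloaders_every_n_epochs to kick in.
        trainer.reset_train_dataloader(pl_module)

If train_dataloader() then prints every epoch, that would at least confirm the flag itself is the problem.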

Thank you very much!