import torch
import pytorch_lightning as pl

# Model (LSTM is my own LightningModule)
model = LSTM(input_size=2, hidden_size=100, output_size=1)

# Trainer
trainer = pl.Trainer(max_epochs=100)

# Train the model on the first dataset
trainer.fit(model, train_loader1)

# New trainer, then continue training the same model on the second dataset
trainer = pl.Trainer(max_epochs=100)
trainer.fit(model, train_loader2)

# Test
trainer.test(model, dataloaders=test_loader)

# Evaluation / plotting (evaluate_and_plot is defined elsewhere)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
evaluate_and_plot(model, test_loader, device)
Does this code let my model be trained twice on two different datasets without the weights being re-initialized in between?
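To check this myself, here is a minimal sketch (assuming the LSTM LightningModule, loaders, and `model` from above): it snapshots the weights right after the first fit and uses a Lightning Callback to report whether the second fit starts from those same weights rather than a fresh initialization. The callback class and variable names are just placeholders for illustration.

import copy
import torch
import pytorch_lightning as pl

# Snapshot of the weights right after the first trainer.fit(...) call
snapshot = copy.deepcopy(model.state_dict())

class CheckWeightsAtStart(pl.Callback):
    def on_train_start(self, trainer, pl_module):
        # Compare the weights the second fit actually starts from
        # against the snapshot taken after the first fit.
        same = all(
            torch.equal(param.detach().cpu(), snapshot[name].cpu())
            for name, param in pl_module.state_dict().items()
        )
        print("Second fit starts from first-fit weights:", same)

trainer = pl.Trainer(max_epochs=100, callbacks=[CheckWeightsAtStart()])
trainer.fit(model, train_loader2)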