Loss not decreasing - first-time user

I validated that the loss decreases without using Lightning. Here is my code:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

train_ds = TensorDataset(X_train, y_train)
batch_size = 10000
train_dl = DataLoader(train_ds, batch_size, drop_last=False)

class Linear(pl.LightningModule):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.model = nn.Linear(input_size, output_size)
        self.loss = nn.MSELoss(reduction='mean')

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss(self(x), y)
        self.log("train_loss", loss)  # log so the trainer actually reports the loss
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

input_size = 21601
output_size = 140
model = Linear(input_size, output_size)
trainer = pl.Trainer(max_epochs=1, log_every_n_steps=1)
trainer.fit(model=model, train_dataloaders=train_dl)
```
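For comparison, this is roughly the plain-PyTorch loop the Lightning code corresponds to. The synthetic `X_train`/`y_train` below are stand-ins (the real dataset isn't shown in the post), and the sizes are shrunk so it runs quickly:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic stand-in data: a noisy linear target, so a single Linear layer can fit it
X_train = torch.randn(256, 8)
true_w = torch.randn(8, 3)
y_train = X_train @ true_w + 0.01 * torch.randn(256, 3)

train_dl = DataLoader(TensorDataset(X_train, y_train), batch_size=32)

model = nn.Linear(8, 3)
loss_fn = nn.MSELoss(reduction="mean")
opt = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for epoch in range(20):
    for x, y in train_dl:
        opt.zero_grad()                  # clear gradients from the previous step
        loss = loss_fn(model(x), y)
        loss.backward()                  # backprop through the MSE loss
        opt.step()                       # SGD update
        losses.append(loss.item())

print(f"first loss: {losses[0]:.4f}, last loss: {losses[-1]:.4f}")
```

On a well-conditioned linear problem like this, the loss drops steadily; if it doesn't on your real data, the learning rate or data scaling is worth checking before blaming the framework.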

Hi @Alex_Roberts, first of all welcome to the community!

After going through your code, I can see that your model is a single Linear layer, so it can only capture linear relationships in the data. I don't know the dataset you are using, but I suggest trying an activation function in your model to see if that helps.
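For instance, a minimal sketch of adding a non-linearity (the hidden width of 512 is an arbitrary choice, not something from your post; tune it for your data):

```python
import torch
from torch import nn

input_size = 21601
output_size = 140
hidden_size = 512  # hypothetical width; adjust for your dataset

# Linear -> ReLU -> Linear lets the model capture non-linear structure
model = nn.Sequential(
    nn.Linear(input_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, output_size),
)

x = torch.randn(4, input_size)
print(model(x).shape)  # same output shape as the single Linear layer
```

You can drop this `nn.Sequential` in as `self.model` inside your `LightningModule` without changing anything else.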

Also, a suggestion: try wrapping your code in code blocks when posting here (it makes it much more readable).
Example:

```python
print("Hello World")
```


Thanks for the reply. I was wondering why the loss wasn't going down; it should also work for a purely linear model (as a first baseline), but I'm not sure what I'm doing wrong. I will use a code block next time; I seem to have trouble editing the post. Thanks for the welcome!