Why does my trained PyTorch model give different predictions every time even when I use the same input?
This happens because your model is still running in training mode, which keeps randomness active inside layers like dropout and batch normalization.
PyTorch layers behave differently depending on whether the model is in training or evaluation mode. If model.eval() is not called before inference, dropout will randomly disable neurons and batch normalization will update its running statistics, which makes predictions change on every run even with identical input.

The fix is simply to switch the model to evaluation mode before inference:
model.eval()
with torch.no_grad():
    output = model(input_tensor)
torch.no_grad() is also important because it prevents PyTorch from tracking gradients, which reduces memory usage and avoids subtle state changes during inference.
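To see the effect end to end, here is a self-contained sketch. The tiny model, layer sizes, and tensor names are illustrative, not from the original question; the point is that eval-mode outputs for the same input are identical, while train-mode dropout makes repeated calls vary:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small illustrative model containing dropout, so the
# train/eval difference is easy to observe.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 1))
input_tensor = torch.randn(1, 4)

# Training mode: dropout is active, so repeated forward
# passes on the same input can produce different outputs.
model.train()
train_a = model(input_tensor)
train_b = model(input_tensor)

# Evaluation mode: dropout is disabled and batchnorm (if any)
# uses fixed running statistics, so outputs are deterministic.
model.eval()
with torch.no_grad():
    eval_a = model(input_tensor)
    eval_b = model(input_tensor)

print(torch.equal(eval_a, eval_b))  # True: eval-mode outputs match
```

Note that torch.no_grad() alone does not disable dropout; you need model.eval() for deterministic layer behavior, and no_grad() on top of it to skip gradient bookkeeping.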