NaN loss
NaNs usually come from invalid numerical operations.
Common sources include division by zero, log of zero, exploding gradients, or invalid input values. In deep models, this often appears after a few unstable updates.
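For a concrete picture, here is a minimal PyTorch illustration of how these operations silently produce NaN/Inf (the tensor values are arbitrary):

```python
import torch

x = torch.tensor([0.0, -1.0, 1e38])

print(torch.log(x))   # log(0) -> -inf, log(-1) -> nan
print(x / 0.0)        # 0/0 -> nan, nonzero/0 -> +/-inf
print(x * x)          # 1e38 squared overflows float32 -> inf
```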
Start by enabling gradient clipping and lowering the learning rate. Then check your input data for NaNs or infinities before it enters the model.
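A rough sketch of a training step with both mitigations; the model, loss function, and learning rate below are arbitrary placeholders, not anything from your setup:

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 1)                                    # placeholder model
loss_fn = nn.MSELoss()                                      # placeholder loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lowered LR

def train_step(inputs, targets):
    # Validate the batch before it reaches the model.
    if not torch.isfinite(inputs).all():
        raise ValueError("non-finite values in input batch")
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Clip the global gradient norm to bound each update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()
```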
If you are using mixed precision, confirm that loss scaling is enabled and working; without it, fp16 gradients can underflow to zero or overflow to inf.
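As a sketch, with PyTorch's `torch.cuda.amp` the scaler handles this for you (assumes a CUDA device; the model and optimizer are placeholders):

```python
import torch

model = torch.nn.Linear(16, 1).cuda()        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

def amp_step(inputs, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss so fp16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips the step on inf/nan
    scaler.update()                # adapts the scale factor over time
    return loss.item()
```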
Common mistakes:
Normalizing with zero-variance features (see the sketch after this list)
Ignoring data validation
Training with unchecked custom loss functions
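For the first item, a small sketch of a guarded standardization; `eps` is an arbitrary small constant chosen for this example:

```python
import torch

def safe_standardize(x, eps=1e-8):
    mean = x.mean(dim=0)
    std = x.std(dim=0)
    # A zero std turns (x - mean) / std into inf/nan for that feature column.
    return (x - mean) / std.clamp_min(eps)
```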
NaNs are symptoms: fix the instability, not the symptom.