Fluctuation in accuracy across training runs
Non-determinism is the usual culprit.
Random weight initialization, data shuffling, parallel data loading, and non-deterministic GPU kernels all introduce run-to-run variance; without controlled seeds, no two runs will produce the same numbers.
Set seeds for every library in the stack and disable non-deterministic operations where possible. Some residual variance is expected, but large swings between runs point to training instability rather than randomness.
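As a minimal sketch of what that configuration can look like, assuming a PyTorch stack (the same idea carries over to TensorFlow or JAX); the `set_seed` helper and the seed value 42 are illustrative, not a fixed API:

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed every RNG the training loop touches and force deterministic kernels."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # all CUDA devices, for multi-GPU runs

    # Prefer deterministic cuDNN kernels; this can slow training.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

    # Error out if an op has no deterministic implementation,
    # instead of silently falling back to a non-deterministic one.
    torch.use_deterministic_algorithms(True)

    # Some CUDA ops require this when determinism is enforced;
    # it must be set before the first CUDA call.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
```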
Common mistakes:
- Setting only one random seed (e.g. NumPy but not PyTorch)
- Comparing single-run results instead of aggregates over several seeded runs (see the sketch after this list)
- Ignoring hardware differences (a different GPU, driver, or library version can change results)

Reproducibility requires deliberate configuration.
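To avoid the single-run mistake, report an aggregate over several seeded runs rather than one number; a small sketch, where the accuracy values are hypothetical placeholders:

```python
import statistics


def summarize_runs(accuracies: list[float]) -> str:
    """Report mean +/- std so run-to-run variance is visible."""
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies)  # requires at least two runs
    return f"{mean:.3f} +/- {std:.3f} over {len(accuracies)} runs"


# Hypothetical final accuracies from five runs with different seeds.
print(summarize_runs([0.912, 0.918, 0.909, 0.915, 0.911]))
```

If the standard deviation is small relative to the gap between two configurations you are comparing, the difference is likely real; if not, it is within noise.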