Feature scaling breaking retrained model
If scaling parameters change between training runs, the model may receive inputs in a completely different range than expected.
This often happens when scalers are refit during retraining instead of reused, or when the training and inference pipelines compute statistics differently. The model still runs, but its learned weights no longer align with the input distribution.

Always persist and version feature scalers alongside the model, or recompute them over a strictly defined data window. For tree-based models this matters less, since splits depend only on feature ordering, but for linear models and neural networks it is critical.
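A minimal sketch of the persist-and-reuse pattern, assuming scikit-learn and joblib are available; the file name and toy data are illustrative, not from the original question:

```python
import joblib
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[10.0], [20.0], [30.0]])  # toy training data

# Fit the scaler once, on training data only.
scaler = StandardScaler().fit(X_train)

# Persist it next to the model so both are versioned together.
joblib.dump(scaler, "scaler_v1.joblib")

# At inference time: load the SAME scaler instead of refitting.
scaler_loaded = joblib.load("scaler_v1.joblib")
X_new = np.array([[25.0]])
print(scaler_loaded.transform(X_new))
```

The key point is that `fit` happens exactly once; inference code only ever calls `transform` on the loaded artifact.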
Common mistakes:
Recomputing normalization on partial datasets
Applying per-batch scaling during inference
Assuming scaling is “harmless” preprocessing
Feature scaling is part of the model contract.
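The failure mode in the mistakes above can be demonstrated in a few lines; the numbers here are made up to show how refitting a scaler on a drifted window changes what the model sees:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[100.0], [200.0], [300.0]])   # original training distribution
X_recent = np.array([[400.0], [500.0], [600.0]])  # drifted retraining window

scaler_orig = StandardScaler().fit(X_train)
scaler_refit = StandardScaler().fit(X_recent)     # the mistake: refit on new data

x = np.array([[250.0]])
print(scaler_orig.transform(x))   # the input range the model was trained on
print(scaler_refit.transform(x))  # same raw value, very different model input
```

The same raw feature value lands in completely different regions of the scaled space, which is exactly the silent misalignment the answer describes.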