Feature distributions look stable, but prediction quality is declining, and simple drift metrics don't explain it. Something deeper seems wrong.
This is a classic sign of concept drift.
Concept drift occurs when the relationship between inputs and outputs changes, even if input distributions remain similar. For example, user behavior or business rules may evolve.
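As a quick illustration, here is a minimal Python sketch of exactly that situation. Everything in it is illustrative: the KS test stands in for whatever feature-level drift check you run, and the thresholds and model are arbitrary. The feature distribution is identical across the two periods, so the drift test sees nothing, yet the labeling rule has moved and accuracy drops.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two periods with identical input distributions...
x_old = rng.normal(0, 1, size=(5000, 1))
x_new = rng.normal(0, 1, size=(5000, 1))

# ...but a different input->output relationship (the "concept").
y_old = (x_old[:, 0] > 0).astype(int)    # old rule: positive if x > 0
y_new = (x_new[:, 0] > 0.8).astype(int)  # new rule: the threshold moved

# A feature-level drift test sees nothing unusual:
stat, p_value = ks_2samp(x_old[:, 0], x_new[:, 0])
print(f"KS p-value on the feature: {p_value:.3f}")  # large -> "no drift"

# But a model fit on the old rule degrades on the new one:
model = LogisticRegression().fit(x_old, y_old)
print("accuracy, old period:", model.score(x_old, y_old))  # high
print("accuracy, new period:", model.score(x_new, y_new))  # noticeably lower
```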
Detecting it requires delayed labels, outcome monitoring, or business KPIs tied to predictions. Proxy metrics alone aren’t sufficient. In some systems, periodic retraining or challenger models help mitigate this risk.
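One minimal sketch of the delayed-label approach, assuming you can eventually join each prediction to its true outcome: track the error rate over a sliding window of matured labels and flag when it degrades past a tolerance. `DelayedLabelMonitor`, `baseline_error`, and the threshold values here are all hypothetical, not any particular library's API.

```python
from collections import deque

class DelayedLabelMonitor:
    """Tracks the error rate on labels as they arrive (possibly days late)
    and flags when the recent window degrades past a tolerance."""

    def __init__(self, baseline_error: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline_error = baseline_error  # error rate at training time
        self.tolerance = tolerance            # how much degradation to accept
        self.errors = deque(maxlen=window)    # rolling 0/1 error indicators

    def update(self, prediction, label) -> bool:
        """Record one matured (prediction, label) pair; return True when
        the recent error rate suggests the concept has drifted."""
        self.errors.append(int(prediction != label))
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough matured labels yet
        recent_error = sum(self.errors) / len(self.errors)
        return recent_error > self.baseline_error + self.tolerance

# Usage: join predictions with outcomes once they mature, then stream them in.
monitor = DelayedLabelMonitor(baseline_error=0.08)
# for pred, label in matured_pairs:          # hypothetical joined stream
#     if monitor.update(pred, label):
#         trigger_retraining()               # hypothetical retrain/challenger hook
```

The same loop is where a challenger model would plug in: score both models on the matured labels and promote the challenger when it consistently wins.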
The takeaway: not all drift is visible in the raw inputs; some of it only shows up once you can compare predictions against real outcomes.