My image classifier performs very well on bright daylight photos.
When images are darker or taken indoors, accuracy drops sharply.
The objects themselves are unchanged; only the lighting differs.
Why does my vision model fail when lighting conditions change?
Nishant Mishra · Beginner
This happens because your model has learned lighting patterns instead of object features. Neural networks latch onto whatever statistical signals are most consistent in the training data, and if most of your training images were taken under similar lighting, the network ends up using brightness and color as shortcuts for the classes.
When the lighting changes, those shortcuts no longer hold, so the learned representations stop matching what the model expects and predictions degrade even though the objects themselves have not changed. The network is not malfunctioning; it is simply facing a distribution shift between training and test conditions.
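You can confirm that the failure is lighting-driven by re-evaluating on artificially darkened copies of your validation set. A minimal PyTorch sketch, assuming a trained `model` and a `val_loader` yielding `(images, labels)` batches (both names are placeholders for your own objects):

```python
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def accuracy_at_brightness(model, val_loader, factor, device="cpu"):
    """Top-1 accuracy with every image's brightness scaled by `factor`
    (1.0 = unchanged, < 1.0 = darker).
    Assumes images are unnormalized tensors in [0, 1]."""
    model.eval()
    correct = total = 0
    for images, labels in val_loader:
        images = TF.adjust_brightness(images, factor).to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total

# Example sweep from daylight down to dim indoor lighting:
# for factor in (1.0, 0.7, 0.4, 0.2):
#     print(factor, accuracy_at_brightness(model, val_loader, factor))
```

If accuracy falls off steeply as the factor drops, you have reproduced the indoor/low-light failure synthetically, which points directly at lighting rather than the objects.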
The solution is to use aggressive data augmentation, such as brightness, contrast, and color jitter, so the model learns features that are invariant to lighting. This forces the CNN to focus on shapes, edges, and textures instead of raw pixel intensity.
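As a starting point, here is a minimal torchvision training transform with photometric jitter. The jitter ranges are illustrative defaults rather than tuned values, and the crop size and normalization statistics assume an ImageNet-style model:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    # Randomize lighting so brightness and color stop being reliable shortcuts.
    transforms.ColorJitter(
        brightness=0.5,   # scale brightness by up to +/- 50%
        contrast=0.5,     # scale contrast by up to +/- 50%
        saturation=0.4,
        hue=0.05,
    ),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```

Apply this only to the training set, keep the validation transform deterministic, and widen the jitter ranges until the synthetic low-light evaluation above stops showing a large gap.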