The model classifies upright images correctly, but rotated versions of the same images fail. The content is identical; only the orientation has changed.
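The problem can be seen directly at the tensor level. Below is a minimal NumPy sketch (the 3x3 array is a toy stand-in for an image): the content is unchanged, but the rotated input the network receives is a different tensor.

```python
import numpy as np

# Toy 3x3 "image": rotation preserves the content but rearranges the pixels,
# so the network sees a different input tensor.
img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

rotated = np.rot90(img)  # 90-degree counter-clockwise rotation

print(np.array_equal(img, rotated))  # False: same content, different tensor
```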
This happens because CNNs are not rotation invariant by default. They learn orientation-dependent features unless trained otherwise.
Including rotated samples during training forces the network to learn rotation-invariant representations.
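A minimal sketch of such augmentation, using NumPy to expand a batch with its 90/180/270-degree rotations (real pipelines, e.g. torchvision's `RandomRotation`, sample arbitrary angles and interpolate; the function name here is illustrative):

```python
import numpy as np

def augment_with_rotations(images):
    """Return the batch plus its 90/180/270-degree rotations.

    `images` is assumed to have shape (N, H, W) or (N, H, W, C),
    with H == W so all rotations keep the same shape.
    """
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated, axis=0)

batch = np.arange(2 * 4 * 4).reshape(2, 4, 4)   # two toy 4x4 "images"
augmented = augment_with_rotations(batch)
print(augmented.shape)  # (8, 4, 4): each image now appears in 4 orientations
```

Training on the augmented batch (with the labels repeated accordingly) exposes the network to every orientation, which is what pushes it toward rotation-invariant features.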
Common mistakes:
No geometric augmentation in the training pipeline
Assuming CNNs handle rotations out of the box
The practical takeaway is that invariance must be learned from data.