My CNN reaches over 95% accuracy on the training set, but on the test set it drops below 40%. The data comes from the same source. I feel the model is memorizing instead of learning.
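A gap this large usually points to overfitting, and a common first fix is stronger regularization plus data augmentation. Below is a minimal PyTorch sketch (layer sizes, crop size, and hyperparameters are placeholders, not a definitive recipe):

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation makes memorizing exact training pixels harder.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),            # dropout discourages memorization
    nn.LazyLinear(10),            # placeholder head for 10 classes
)

# Weight decay (L2 regularization) penalizes large weights.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```

If the train/test gap persists after this, it is worth double-checking the train/test split for leakage or distribution drift.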
I am training a deep network for a regression task. The loss drops initially but then stops changing; even after many epochs it never improves. The model is clearly underperforming.
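Two frequent causes of a flat regression loss are unscaled targets and a learning rate that stops being appropriate mid-training. A minimal sketch under those assumptions (the network, data shapes, and patience values are placeholders) standardizes the targets and lowers the learning rate when the loss plateaus:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the LR whenever the loss stops improving for 5 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=5)

x = torch.randn(256, 16)
y = torch.randn(256, 1) * 100 + 50      # raw targets on an awkward scale
y = (y - y.mean()) / y.std()            # standardize targets before training

loss_fn = nn.MSELoss()
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())         # scheduler watches the training loss
```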
I fine-tuned a pretrained Transformer on a small custom dataset. Training finishes without errors, but the generated outputs look random and off-topic. It feels like the model forgot everything.
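On a small dataset, an aggressive learning rate can overwrite the pretrained weights in a few steps. A hedged Hugging Face sketch of conservative fine-tuning settings (the "gpt2" checkpoint and all hyperparameters here are placeholders, not a verified fix):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Conservative settings for a small dataset: tiny LR, warmup, few epochs.
args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,        # far lower than from-scratch training
    warmup_ratio=0.1,
    num_train_epochs=3,
    per_device_train_batch_size=4,
)
```

It is also worth confirming that the fine-tuning data is tokenized with the same tokenizer the checkpoint was pretrained with; a tokenizer mismatch alone can produce off-topic generations.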
I trained an LSTM for next-word prediction on text data. The training loss decreases normally, but when I generate text, it repeats the same token again and again. It feels like the model is ignoring the sentence context.
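Repetition like this often comes from greedy argmax decoding rather than from the model itself. A minimal sketch of temperature plus top-k sampling (the function name is hypothetical; `logits` is assumed to be the LSTM's 1-D output for the current step):

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8,
                      top_k: int = 50) -> int:
    """Sample instead of argmax so generation doesn't lock onto one token."""
    logits = logits / temperature            # soften/sharpen the distribution
    topk_vals, topk_idx = torch.topk(logits, top_k)
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx[choice].item()

# Usage (hypothetical): next_id = sample_next_token(step_logits)
```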
My model uses both image and text inputs. It works well when both are provided, but if one modality is missing, the outputs become random or broken. Real-world data is often incomplete.
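One common remedy is modality dropout: randomly zeroing one input during training so the fusion layer learns to cope with absence. A minimal sketch assuming a simple concatenation-fusion model (embedding sizes, class count, and drop probability are placeholders):

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, n_classes=10):
        super().__init__()
        self.head = nn.Linear(img_dim + txt_dim, n_classes)

    def forward(self, img_emb, txt_emb, p_drop=0.3):
        if self.training:
            # Randomly drop at most one modality per batch so the model
            # learns to make predictions from either input alone.
            if torch.rand(1).item() < p_drop:
                img_emb = torch.zeros_like(img_emb)
            elif torch.rand(1).item() < p_drop:
                txt_emb = torch.zeros_like(txt_emb)
        return self.head(torch.cat([img_emb, txt_emb], dim=-1))
```

At inference time, a genuinely missing modality is then fed in as the same all-zeros placeholder the model saw during training.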
My image classifier performs very well on bright daylight photos. When images are darker or taken indoors, accuracy drops sharply. The objects are still the same; only the lighting seems different.
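If the training set is dominated by bright photos, photometric augmentation is the usual first step, so the model sees at train time the lighting range it will face at test time. A minimal torchvision sketch (jitter strengths are placeholders to tune):

```python
from torchvision import transforms

# Vary brightness and contrast at train time to cover dim/indoor lighting.
train_tfms = transforms.Compose([
    transforms.ColorJitter(brightness=0.6, contrast=0.4),
    transforms.RandomAutocontrast(p=0.3),
    transforms.ToTensor(),
])
```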
My model recognizes actions well in static-camera videos. When the camera pans or shakes, predictions become unstable. The action is the same; only the camera motion changes.
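One way to close this gap is to simulate camera motion during training by jittering each frame independently, so the recognizer cannot rely on a perfectly static background. A minimal sketch (the clip layout and jitter magnitudes are assumptions):

```python
import torch
from torchvision import transforms

# Small random per-frame shift/rotation approximates pan and shake.
frame_jitter = transforms.RandomAffine(degrees=3, translate=(0.05, 0.05))

def jitter_clip(clip: torch.Tensor) -> torch.Tensor:
    """clip: (T, C, H, W) video tensor; each frame gets its own transform,
    which mimics frame-to-frame camera movement."""
    return torch.stack([frame_jitter(frame) for frame in clip])
```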
The base model worked well before. After fine-tuning on new data, accuracy drops everywhere; even old categories are misclassified. The model seems to have forgotten what it knew.
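This pattern is consistent with catastrophic forgetting, and a simple mitigation is rehearsal: mixing a sample of the original data back into the fine-tuning set so the old categories keep contributing gradients. A minimal sketch (the two `TensorDataset`s are stand-ins for your real datasets):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholders: a sample of the original data and the new fine-tuning data.
old_subset = TensorDataset(torch.randn(200, 16), torch.randint(0, 10, (200,)))
new_dataset = TensorDataset(torch.randn(800, 16), torch.randint(0, 10, (800,)))

# Rehearsal: old examples in every epoch limit catastrophic forgetting.
mixed = ConcatDataset([old_subset, new_dataset])
loader = DataLoader(mixed, batch_size=32, shuffle=True)
```

Lowering the fine-tuning learning rate or freezing the early layers are complementary options if rehearsal alone is not enough.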
The reconstruction loss is very low on training images, but when I test on new data, the outputs look distorted. The model seems confident but wrong. It feels like it memorized the dataset.
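For an autoencoder, one standard way to discourage memorization is denoising: corrupt the input but reconstruct the clean target, which forces the model to learn structure rather than an identity mapping. A minimal sketch (the encoder/decoder sizes and noise level are placeholders):

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
loss_fn = nn.MSELoss()

x = torch.rand(128, 784)               # placeholder batch of flattened images
noisy = x + 0.2 * torch.randn_like(x)  # corrupt the input...
recon = decoder(encoder(noisy))
loss = loss_fn(recon, x)               # ...but reconstruct the clean target
```

Tracking reconstruction loss on a held-out validation set, rather than the training set, will also reveal memorization much earlier.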
I added thousands of new user interactions to my training dataset. Instead of improving, the recommendation quality dropped, and users are now getting irrelevant suggestions. It feels like more data made the model less accurate.
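More data only helps if it is clean; duplicated events and bot-like users are common culprits when quality drops after an ingest. A minimal pandas sketch of two sanity filters (the column names and the toy DataFrame are assumptions about your log format):

```python
import pandas as pd

# Placeholder interaction log with assumed user/item/timestamp columns.
df = pd.DataFrame({
    "user": [1, 1, 1, 2, 2],
    "item": [10, 10, 10, 11, 12],
    "ts":   [1, 2, 3, 1, 2],
})

# Drop exact duplicate (user, item) events, which inflate popularity signals.
df = df.drop_duplicates(subset=["user", "item"])

# Drop users with implausibly many interactions (often bots or logging bugs).
counts = df.groupby("user")["item"].transform("count")
df = df[counts <= counts.quantile(0.99)]
```

Comparing offline metrics on the old data versus old-plus-new data can confirm whether the new interactions themselves are the problem.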