My model gives great accuracy on my laptop. When deployed on a server, predictions become inconsistent. The same input sometimes produces different outputs. Nothing crashes, but the behavior is unreliable.
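A common first suspect for run-to-run inconsistency is a stochastic layer (dropout, for example) left active at inference. A minimal pure-Python sketch with a toy weighted-sum "model" (all names and numbers here are illustrative, not taken from the question):

```python
import random

def forward(x, weights, train_mode, rng):
    """Toy forward pass: weighted sum with dropout applied to the inputs."""
    out = 0.0
    for xi, wi in zip(x, weights):
        if train_mode and rng.random() < 0.5:  # dropout: randomly zero inputs
            continue
        out += xi * wi
    return out

x = [1.0, 2.0, 3.0]
w = [0.5, -0.2, 0.1]
rng = random.Random(0)

# Train mode: outputs vary call to call because dropout is stochastic.
train_outs = {forward(x, w, True, rng) for _ in range(20)}

# Eval mode: dropout disabled, so the same input always gives the same output.
eval_outs = {forward(x, w, False, rng) for _ in range(20)}
print(len(eval_outs))  # 1 -- deterministic in eval mode
```

In a real framework the equivalent check is switching the model into its inference/eval mode and seeding all RNGs before comparing laptop and server outputs.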
My RNN works fine on short sequences. When I give it longer inputs, predictions become random. Loss increases with sequence length. It feels like the model forgets earlier information.
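A plausible explanation is vanishing gradients: in a plain RNN the gradient reaching early timesteps shrinks multiplicatively with sequence length, so early tokens stop influencing the output. A toy linear-RNN sketch (illustrative numbers, not from the question):

```python
# Toy linear RNN: h_t = w * h_{t-1} + x_t. The gradient of the final-step
# loss with respect to the first input is w**(T-1), so with |w| < 1 it
# decays exponentially in sequence length -- early information is "forgotten".
def grad_wrt_first_input(w, T):
    g = 1.0
    for _ in range(T - 1):
        g *= w
    return g

w = 0.9
print(grad_wrt_first_input(w, 5))    # 0.6561 -- still noticeable
print(grad_wrt_first_input(w, 100))  # ~3e-5 -- effectively zero
```

This is why gated architectures (LSTM/GRU), gradient clipping, or truncated backpropagation through time are the usual remedies for length-dependent degradation.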
Short sequences work fine. Longer sequences cause GPU crashes. No code changes were made; only the input size increased.
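If the crash is an out-of-memory error, the cause is usually that activation (and, for attention models, quadratic) memory grows with sequence length. One common workaround is processing long inputs in fixed-size windows so each forward pass has a bounded footprint. A sketch with a hypothetical `chunk` helper (sizes are illustrative):

```python
def chunk(seq, size, overlap=0):
    """Split a long sequence into fixed-size windows. A small overlap
    preserves some context across window boundaries."""
    step = size - overlap
    return [seq[i:i + size] for i in range(0, len(seq), step) if seq[i:i + size]]

tokens = list(range(10))
print(chunk(tokens, 4, overlap=1))
# [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9], [9]]
```

Reducing batch size, enabling mixed precision, or gradient checkpointing are other standard levers when only the input length changed.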
My GAN generates images, but they look washed out. Many samples look almost identical. The training loss looks stable, yet the visual quality never improves.
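Near-identical samples despite a stable loss is the classic signature of mode collapse. A cheap diagnostic is to track the average pairwise distance between a batch of generated samples: if it trends toward zero, the generator is emitting one mode. A minimal sketch (toy 2-D "samples", purely illustrative):

```python
def mean_pairwise_distance(samples):
    """Mode-collapse diagnostic: near-identical samples drive the
    average pairwise L2 distance toward zero."""
    n = len(samples)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = sum((a - b) ** 2 for a, b in zip(samples[i], samples[j])) ** 0.5
            total += d
            pairs += 1
    return total / pairs

collapsed = [[0.5, 0.5], [0.5, 0.51], [0.51, 0.5]]
diverse = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(mean_pairwise_distance(collapsed))  # close to zero
print(mean_pairwise_distance(diverse))    # clearly larger
```

If the metric confirms collapse, common mitigations include minibatch discrimination, unrolled discriminator steps, or switching to a Wasserstein-style objective.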
My model uses both image and text inputs. It works well when both are provided. If one modality is missing, the outputs become random or broken. Real-world data is often incomplete.
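A common pattern for this is to zero-impute the absent modality and append presence flags, so the model sees a well-defined input and can learn (via modality dropout during training) to rely on whatever is available. A sketch with a hypothetical `fuse` function and toy feature sizes (3-dim image, 2-dim text — both assumptions):

```python
def fuse(image_feat, text_feat):
    """Fusion that tolerates a missing modality: absent features are
    zero-imputed and presence flags are appended so downstream layers
    can tell real zeros from missing inputs."""
    img = image_feat if image_feat is not None else [0.0] * 3
    txt = text_feat if text_feat is not None else [0.0] * 2
    flags = [1.0 if image_feat is not None else 0.0,
             1.0 if text_feat is not None else 0.0]
    return img + txt + flags

print(fuse([0.2, 0.4, 0.1], None))
# [0.2, 0.4, 0.1, 0.0, 0.0, 1.0, 0.0]
```

Crucially, the model must also be trained with randomly dropped modalities; imputation alone will not help if training only ever saw complete pairs.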
My CNN reaches over 95% accuracy on the training set, but on the test set it drops below 40%. The data comes from the same source. I suspect the model is memorizing instead of learning.
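The train/test gap described is exactly what pure memorization produces. The extreme case can be demonstrated with a lookup-table "model" (a deliberately toy setup, not the asker's CNN):

```python
import random

def train_memorizer(pairs):
    # A pure lookup table: the limiting case of memorizing instead of learning.
    return dict(pairs)

def accuracy(table, pairs, rng):
    # A coin-flip fallback stands in for the model's behavior on unseen inputs.
    hits = sum(1 for x, y in pairs if table.get(x, rng.choice([0, 1])) == y)
    return hits / len(pairs)

rng = random.Random(0)
train = [(i, i % 2) for i in range(100)]
test = [(i, i % 2) for i in range(100, 200)]  # same distribution, unseen inputs

table = train_memorizer(train)
train_acc = accuracy(table, train, rng)  # 1.0: every answer was memorized
test_acc = accuracy(table, test, rng)    # chance level on new data
print(train_acc, test_acc)
```

For a real CNN the standard countermeasures are more data or augmentation, dropout, weight decay, and early stopping on a validation split.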
I fine-tuned a pretrained Transformer on a small custom dataset. Training finishes without errors, but the generated outputs look random and off-topic. It feels like the model forgot everything it learned during pretraining.
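Off-topic output after fine-tuning on a small dataset often points to catastrophic forgetting, typically from too high a learning rate or updating every layer. A common recipe is to freeze most pretrained layers and tune only the top few at a small learning rate. A sketch with a hypothetical `finetune_plan` helper (layer names and the 5e-5 rate are illustrative assumptions):

```python
def finetune_plan(layer_names, unfreeze_last_n=2, base_lr=5e-5):
    """Freeze all but the last few layers and use a low learning rate --
    a standard guard against catastrophic forgetting on small datasets."""
    frozen = layer_names[:-unfreeze_last_n]
    trainable = layer_names[-unfreeze_last_n:]
    return {"frozen": frozen, "trainable": trainable, "lr": base_lr}

layers = [f"block_{i}" for i in range(6)] + ["head"]
plan = finetune_plan(layers)
print(plan["trainable"], plan["lr"])  # ['block_5', 'head'] 5e-05
```

If outputs are still off-topic, it is also worth verifying that the fine-tuning data uses the same tokenizer and input formatting as pretraining; a mismatch there produces the same "random output" symptom.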
I trained an LSTM for next-word prediction on text data. The training loss decreases normally. But when I generate text, it repeats the same token over and over. It feels like the model is ignoring the rest of the sentence.
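Repeating one token is often a decoding problem rather than a training problem: greedy argmax always picks the single most likely token, which easily falls into a loop. Sampling with a temperature breaks the tie. A self-contained sketch (the logits are made-up numbers, not from the model):

```python
import math
import random

def sample_next(logits, temperature, rng):
    """temperature == 0 means greedy argmax (prone to repetition);
    temperature > 0 samples from the softmax distribution."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, acc = rng.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1

logits = [2.0, 1.8, 1.7, 0.5]  # one token barely dominates
rng = random.Random(0)
greedy = [sample_next(logits, 0, rng) for _ in range(10)]
sampled = [sample_next(logits, 1.0, rng) for _ in range(10)]
print(greedy)   # the same token ten times
print(sampled)  # varied tokens
```

If sampling (or top-k / nucleus sampling) still repeats, then the issue is in training itself, e.g. the inputs and targets not being shifted by one position.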