My GAN generates images but they look washed out.
Many samples look almost identical.
Training loss looks stable.
But the visual quality never improves.
In this situation, the generator stops exploring new variations and keeps reusing similar patterns. This is known as mode collapse, and it is one of the most common failure modes in GAN training. Blurriness also appears when the model is averaging over many possible outputs instead of committing to sharp details.
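One cheap way to confirm the mode-collapse diagnosis is to track how different the generated samples actually are from each other over training. Below is a minimal sketch of such a check; `sample_diversity` is a hypothetical helper name, and the batches are stand-ins for real generator output:

```python
import numpy as np

def sample_diversity(batch):
    """Mean pairwise L2 distance between flattened samples.
    A value trending toward zero over training suggests mode collapse."""
    flat = batch.reshape(len(batch), -1)
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    n = len(batch)
    # Average over distinct ordered pairs only (diagonal is zero).
    return dists.sum() / (n * (n - 1))

rng = np.random.default_rng(0)
# A collapsed generator: every sample in the batch is identical.
collapsed = np.tile(rng.normal(size=(1, 3, 8, 8)), (16, 1, 1, 1))
# A healthy generator: samples vary.
healthy = rng.normal(size=(16, 3, 8, 8))

print(sample_diversity(collapsed))  # ~0.0
print(sample_diversity(healthy))    # clearly positive
```

Logging this number every few hundred steps gives an early warning that is independent of the loss curves, which, as you noticed, can look perfectly stable while collapse happens.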
To fix this, rebalance the generator and discriminator. Strengthening the discriminator with techniques such as the Wasserstein loss (WGAN), a gradient penalty, or spectral normalization gives it more stable gradients to pass back. Diversity-promoting methods such as minibatch discrimination or noise injection discourage the generator from reusing the same outputs. In many setups, simply tuning the learning rates so the discriminator learns slightly faster than the generator already makes a big difference.
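To make the gradient-penalty and spectral-normalization suggestions concrete, here is a minimal PyTorch sketch. The tiny critic and the penalty weight are illustrative assumptions, not a full training loop; the gradient penalty itself follows the standard WGAN-GP recipe (penalize the critic's gradient norm on interpolated samples):

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: push the critic's gradient norm toward 1
    on random interpolations between real and fake samples."""
    eps = torch.rand(real.size(0), 1, 1, 1)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(
        outputs=scores, inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# Toy critic with spectral normalization on each layer (illustrative sizes).
critic = nn.Sequential(
    nn.utils.spectral_norm(nn.Conv2d(3, 8, 3, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.utils.spectral_norm(nn.Linear(8 * 8 * 8, 1)),
)

real = torch.randn(4, 3, 8, 8)
fake = torch.randn(4, 3, 8, 8)
gp = gradient_penalty(critic, real, fake)
# In the training loop this would be added to the critic loss,
# commonly with a weight around 10.

# Two-timescale trick: give the critic a slightly faster learning rate
# than the generator (e.g. 4e-4 vs. 1e-4).
opt_d = torch.optim.Adam(critic.parameters(), lr=4e-4, betas=(0.0, 0.9))
```

In practice you would pick either the gradient penalty or spectral normalization first rather than stacking everything at once, then add diversity tricks only if collapse persists.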