prompt failure
Lost your password? Please enter your email address. You will receive a link and will create a new password via email.
Please briefly explain why you feel this question should be reported.
Please briefly explain why you feel this answer should be reported.
Please briefly explain why you feel this user should be reported.
Prompt changes can unintentionally alter task framing, leading to valid but incorrect outputs.
LLMs are highly sensitive to instruction wording, ordering, and context length. A prompt that works during testing may fail once additional system messages or user inputs are added.
To prevent this, version-control prompts and test them with adversarial and edge-case inputs. Keep instructions explicit and avoid mixing multiple objectives in a single prompt.
If outputs suddenly degrade, diff the prompt text before blaming the model.
Common mistakes:
Relying on implicit instructions
Appending user input without separators
Assuming prompts are stable across model versions
Treat prompts as code, not static text.