Fine-tuned model inconsistency
This happens when fine-tuning introduces noise or bias that overwrites useful pretrained knowledge, a form of catastrophic forgetting.
The most frequent cause is low-quality or inconsistent fine-tuning data. If your dataset is small, poorly labeled, or stylistically narrow, the model may over-specialize and lose general reasoning ability.
Another common issue is using an aggressive learning rate. Large updates can destroy pretrained representations in just a few steps.
To fix this, reduce the learning rate significantly and limit the number of trainable parameters using techniques like LoRA or partial layer freezing. Always evaluate against a held-out baseline prompt set to detect regression early.
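Here is a minimal sketch of what that can look like in practice, assuming a Hugging Face causal LM and the peft library; the model name, LoRA rank, and learning rate are illustrative placeholders, not tuned values:

```python
# Sketch: conservative fine-tuning with LoRA adapters and a small learning rate.
# Assumes transformers + peft are installed; "gpt2" is a placeholder model.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Restrict updates to small low-rank adapters so the pretrained weights stay frozen.
lora_config = LoraConfig(
    r=8,                        # low rank keeps adapter capacity modest
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection for GPT-2; varies by model
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # confirm only a small fraction is trainable

# Conservative training settings: low learning rate, few epochs.
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-5,         # far lower than typical from-scratch rates
    num_train_epochs=1,
    per_device_train_batch_size=4,
)
```

The key design choice is that the base weights are never updated directly, so even a noisy dataset cannot erase pretrained representations outright.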
Common mistakes:
Fine-tuning on too little data (fewer than a few thousand high-quality samples)
Not validating against base model outputs (see the regression-check sketch after this list)
Training for too many epochs
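To make the validation point concrete, here is a minimal sketch of a baseline regression check, assuming both checkpoints load with Transformers and share a tokenizer; the prompts and the fine-tuned model path are hypothetical placeholders:

```python
# Sketch: compare base vs. fine-tuned outputs on held-out general prompts.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
tuned = AutoModelForCausalLM.from_pretrained("path/to/finetuned")  # hypothetical path

baseline_prompts = [
    "Explain what a binary search does.",
    "Summarize the water cycle in one sentence.",
]

def generate(model, prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Large divergences on general-knowledge prompts are an early sign that
# fine-tuning has overwritten core behavior rather than nudged it.
for prompt in baseline_prompts:
    print("PROMPT:", prompt)
    print("BASE  :", generate(base, prompt))
    print("TUNED :", generate(tuned, prompt))
    print("-" * 40)
```

Running this after each training run (or each epoch) catches regressions before they compound.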
Fine-tuning should nudge behavior, not replace core knowledge.