This happens because the model is overfitting and catastrophically forgetting pretrained knowledge.
When fine-tuning on small datasets, the Transformer’s weights drift away from what they originally learned. Use a lower learning rate and freeze early layers:

```python
for param in model.base_model.parameters():
    param.requires_grad = False
```
Also use weight decay and early stopping.
Common mistakes:
- Learning rate too high
- Training all layers on tiny datasets
- No regularization
The practical takeaway is that pretrained models need gentle fine-tuning, not aggressive retraining.
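Early stopping needs no framework support at all: track the best validation loss and stop once it fails to improve for a few epochs. A minimal, framework-agnostic sketch (the `EarlyStopper` name, patience value, and loss numbers are illustrative):

```python
class EarlyStopper:
    """Stop fine-tuning once validation loss stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


stopper = EarlyStopper(patience=2)
losses = [0.90, 0.72, 0.70, 0.71, 0.73]  # validation loss plateaus, then rises
stopped_at = next(i for i, loss in enumerate(losses) if stopper.should_stop(loss))
# training halts at epoch index 4, keeping the epoch-2 checkpoint as best
```

In a real loop you would also save the model checkpoint whenever `best_loss` updates, so stopping restores the least-overfit weights.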
Why do Lightning Web Components fail silently in production but not sandbox?
Production environments usually have stricter security settings, larger datasets, and more complex sharing rules. LWCs run entirely in user context, so differences in field-level security or record access can cause data retrieval to fail silently if error handling isn’t implemented correctly.
Another common cause is unhandled promise rejections in JavaScript. In sandbox, test users often have broad permissions, masking issues that only appear when real users with limited access load the component.
The most reliable fix is adding robust error handling in both Apex and JavaScript, logging meaningful errors, and testing LWCs using realistic user profiles.
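The underlying failure mode is language-agnostic: an operation that swallows its own errors looks identical to "no data" from the caller's side. A Python sketch of the difference between silent and robust handling (all function names here are hypothetical):

```python
import logging

logger = logging.getLogger("component")


def load_records_silently(fetch):
    # Anti-pattern: swallowing the error makes a permissions failure
    # indistinguishable from "query returned no records".
    try:
        return fetch()
    except Exception:
        return []


def load_records(fetch):
    # Better: log a meaningful error, then re-raise so the caller
    # (here, a UI layer) can surface it to the user.
    try:
        return fetch()
    except Exception as exc:
        logger.error("Record load failed: %s", exc)
        raise


def denied_fetch():
    # Simulates a field-level-security or record-access failure
    # that only occurs for restricted production users.
    raise PermissionError("insufficient field-level security")
```

The silent version returns an empty list for the denied fetch, which is exactly the "works in sandbox, blank in production" symptom.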
Takeaway: LWCs rarely “break randomly”; they expose hidden permission and error-handling gaps.
Why does Apex logic behave unpredictably when multiple triggers exist?
Salesforce does not guarantee execution order between multiple triggers on different objects. When one trigger updates another object, it can cause that object’s triggers and automation to fire, sometimes recursively. This creates execution paths that are difficult to reason about just by reading code.
The unpredictability increases when triggers perform updates without guarding against recursion or checking whether changes are actually required.
Most mature orgs solve this by using trigger handler frameworks, enforcing single-trigger-per-object patterns, and minimizing cross-object updates in synchronous transactions.
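The single-trigger-per-object pattern usually pairs with a recursion guard that remembers which handlers are already running in the current transaction. A Python sketch of the idea (the structure mirrors a typical Apex trigger-handler framework, but every name here is illustrative):

```python
class TriggerGuard:
    """Tracks which object handlers are already running in this transaction."""
    _running = set()

    @classmethod
    def run_once(cls, obj_name, handler):
        if obj_name in cls._running:
            return False          # re-entrant call: skip to break the cycle
        cls._running.add(obj_name)
        try:
            handler()
            return True
        finally:
            cls._running.discard(obj_name)


calls = []


def on_account_update():
    calls.append("Account")
    # Cross-object update fires Contact's trigger...
    TriggerGuard.run_once("Contact", on_contact_update)


def on_contact_update():
    calls.append("Contact")
    # ...which updates Account again; the guard stops the loop here.
    TriggerGuard.run_once("Account", on_account_update)


TriggerGuard.run_once("Account", on_account_update)
# each handler runs exactly once instead of recursing forever
```

Without the guard, the two handlers would call each other indefinitely; with it, the cross-object chain terminates deterministically.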
Takeaway: Trigger behavior becomes unstable when execution order is assumed rather than controlled.
Why do sharing rules become harder to reason about over time?
Sharing rules accumulate silently. Each exception adds another layer, and Salesforce evaluates them together at runtime. Manual shares, implicit sharing, and role hierarchy effects make outcomes non-obvious.
Mature orgs periodically audit and simplify sharing models instead of layering fixes indefinitely.
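One reason outcomes become non-obvious is that record access is effectively the union of every grant layer: any single layer granting access is enough, so revoking access means finding and removing the grant from all of them. A toy Python model of that union (the layer names are simplified labels, not Salesforce's actual evaluation engine):

```python
def effective_access(user, record, layers):
    """Return the layers that grant access. Access is allowed if ANY layer
    grants it, which is why each added exception makes auditing harder."""
    return {
        name: True
        for name, grants in layers.items()
        if grants(user, record)
    }


layers = {
    "owd": lambda u, r: False,                          # org-wide default: private
    "role_hierarchy": lambda u, r: u["role"] == "manager",
    "sharing_rule": lambda u, r: r["region"] == u["region"],
    "manual_share": lambda u, r: u["id"] in r["manual_shares"],
}

rep = {"id": "u1", "role": "rep", "region": "EMEA"}
record = {"region": "EMEA", "manual_shares": []}
granting = effective_access(rep, record, layers)
# the rep sees the record via the region sharing rule alone
```

Auditing then amounts to asking, for each record a user can see, *which* layer is responsible, and whether that layer still reflects an intentional decision.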
Takeaway: Sharing models need refactoring just like code.
Why do Salesforce integrations work initially but become unstable over time?
Most integrations are built and tested with small volumes and ideal conditions. As real usage grows, API limits, retry storms, data quality issues, and unhandled edge cases start surfacing. Salesforce is especially sensitive to inefficient request patterns and excessive synchronous processing.
Stable integrations usually rely on batching, idempotent design, proper error handling, and asynchronous processing. Monitoring and backoff strategies are just as important as the initial implementation.
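Two of those ingredients, capped exponential backoff and idempotent delivery, fit in a few lines. A hedged Python sketch (the function names, the idempotency-key scheme, and the in-memory `sent` store are all illustrative; production code would persist the keys and add jitter):

```python
def backoff_delays(attempts, base=1.0, cap=60.0):
    """Exponential backoff schedule with an upper cap, so retry storms
    back off instead of hammering a rate-limited API."""
    return [min(cap, base * 2 ** n) for n in range(attempts)]


def send_idempotent(payload, key, sent, transport):
    """Skip the call if this idempotency key was already delivered,
    so a retried request never creates duplicate records."""
    if key in sent:
        return sent[key]
    result = transport(payload)
    sent[key] = result
    return result


sent = {}
first = send_idempotent({"Name": "Acme"}, "order-42", sent, lambda p: "created")
retry = send_idempotent({"Name": "Acme"}, "order-42", sent, lambda p: "duplicate!")
delays = backoff_delays(5, base=1.0, cap=8.0)  # [1.0, 2.0, 4.0, 8.0, 8.0]
```

The retry returns the original result instead of invoking the transport again, which is what makes retrying safe under timeouts and partial failures.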
Takeaway: Integration stability depends more on architecture than on initial correctness.
Why do Salesforce Flows become hard to maintain as automation grows?
Flows become hard to maintain because they scale visually, not structurally. Each new requirement adds branches, decisions, and record updates, but there’s no strong modularity like you’d have in Apex. Over time, logic that should be reusable or isolated ends up duplicated across paths, making changes risky.
Teams usually handle this by splitting responsibilities: keeping Flows focused on orchestration and moving complex logic into Apex, subflows, or reusable components. Clear naming, documentation, and strict ownership rules also help slow down entropy.
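The "orchestrate in Flow, compute elsewhere" split can be pictured as keeping the branching layer thin and pushing every rule into one named, reusable unit. This is a Python analogy for the architecture, not Flow syntax, and all names are invented for illustration:

```python
# Reusable "subflow" logic: each business rule lives in exactly one place.
def is_high_value(opportunity):
    return opportunity["amount"] >= 100_000


def assign_owner(opportunity):
    return "strategic_team" if is_high_value(opportunity) else "round_robin"


# The orchestrator only sequences steps; it duplicates no rules,
# so changing the high-value threshold touches a single function.
def on_opportunity_created(opportunity):
    return {
        "owner": assign_owner(opportunity),
        "needs_review": is_high_value(opportunity),
    }


result = on_opportunity_created({"amount": 250_000})
```

The anti-pattern the answer describes is the opposite: the threshold check copy-pasted into several decision branches, each of which must be found and updated when the rule changes.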
Takeaway: Flows work best when they stay simple and delegate complexity elsewhere.
Why does my cloud firewall allow traffic I expected to be blocked?
Most cloud firewalls evaluate rules in a defined order, and earlier allow rules can override later deny rules. Direction also matters—outbound rules are evaluated separately from inbound ones.
It’s common to focus on the presence of a rule without checking how it’s evaluated in context. Overlapping rules, defaults, or inherited policies can all affect the outcome.
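Ordered evaluation can be simulated in a few lines. This sketch assumes first-match-wins semantics with a default deny; real platforms vary (some use explicit priority numbers rather than list order), so treat it as a mental model rather than any vendor's engine:

```python
def evaluate(rules, packet):
    """First matching rule wins; if nothing matches, fall back to the
    default action (deny here, but platforms differ)."""
    for action, predicate in rules:
        if predicate(packet):
            return action
    return "deny"


rules = [
    ("allow", lambda p: p["port"] == 443),        # broad early allow...
    ("deny",  lambda p: p["src"] == "10.0.0.5"),  # ...shadows this later deny
]

verdict = evaluate(rules, {"src": "10.0.0.5", "port": 443})
# "allow": the deny rule exists but is never reached for HTTPS traffic
```

The deny rule is present and correct in isolation, yet the packet is allowed, which is exactly the "rule exists but traffic still flows" surprise.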
Takeaway: Firewall behavior depends on evaluation order, not just rule intent.
Why does my application authenticate users correctly but still expose sensitive data?
This usually means authentication is working, but authorization checks are either missing or inconsistently applied. Logging a user in confirms who they are, but it doesn’t automatically restrict what they can access once inside the system.
In many applications, authorization logic exists at the UI or controller layer but is missing in deeper layers such as business logic or database queries. That makes it possible for users to bypass restrictions by calling APIs directly or manipulating parameters.
A reliable fix involves enforcing authorization at every sensitive operation, ideally close to where data is accessed rather than only at entry points.
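Enforcing the check at the data-access layer means every caller, whether UI, API, or background job, passes through the same gate. A minimal Python sketch under assumed names (`fetch_invoice`, the `billing_admin` role, and the dict-backed `db` are all illustrative):

```python
class AuthorizationError(Exception):
    pass


def fetch_invoice(user, invoice_id, db):
    """Authorize at the point of data access, not only at the entry point,
    so direct API calls and parameter tampering hit the same check."""
    invoice = db[invoice_id]
    if invoice["owner"] != user["id"] and "billing_admin" not in user["roles"]:
        raise AuthorizationError("not permitted to read this invoice")
    return invoice


db = {"inv-1": {"owner": "alice", "amount": 120}}
alice = {"id": "alice", "roles": []}
mallory = {"id": "mallory", "roles": []}  # authenticated, but not authorized
```

Both users are fully authenticated; only the ownership/role check at the query decides who actually receives the data.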
Takeaway: Authentication opens the door, but authorization decides which rooms stay locked.