Somewhere in your Salesforce org there is a field called Account_Status__c. Its help text — if it has any — says something like "Status of the account." That sentence took someone zero seconds to write. It also communicates zero information to the rep staring at it on a Friday afternoon wondering whether to set it to "At-Risk" or "Churned."
We have generated 2.4 million field descriptions across 300+ live Salesforce orgs over the past fourteen months. The patterns in what gets approved, what gets rejected, and what gets ignored have become very clear. This post is a direct account of those patterns — not recommendations derived from first principles, but observations from watching real Salesforce admins interact with AI-generated content at scale.
The three failure modes of AI help text
Before describing what good looks like, it's worth being precise about what bad looks like. We track admin approval rates per generated description. Descriptions that fall below a 40% first-pass approval rate share one of three failure signatures.
Failure 1: The circular definition
The model restates the field name in plain English without adding context. "Account Status" becomes "The status of the account." "Annual Revenue" becomes "The annual revenue of the account in USD." These are technically accurate — which is what makes them maddening. The admin reading the description already knows what the field label says. They want to know what the field means in this org.
// Circular — approval rate: 18%
"Account Status: The current status of the account."
// Contextual — approval rate: 81%
"Account Status: Drives renewal routing and dashboard filtering.
Active = in contract. At-Risk = flag for QBR. Churned = suppress from
campaigns. Do not use Inactive — deprecated in Q1 2025."
Failure 2: The hedge cascade
The model is uncertain about the field's purpose (because the field has no existing description and a generic name), so it lists every possible interpretation. "This field may be used to track the account's status for reporting, filtering, workflows, or other business purposes." Nobody approves this. It reads like a legal disclaimer and communicates even less than the circular definition.
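Side by side, in the same format as the earlier example, here is the cascade next to a hypothetical committed rewrite (the flow name and behavior are invented for illustration):

// Hedge cascade: rejected on sight
"This field may be used to track the account's status for reporting,
filtering, workflows, or other business purposes."
// Committed (hypothetical rewrite)
"Set weekly by the Churn Risk flow. Read-only for reps. Blank = never
scored; check the flow run history before escalating."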
Failure 3: The wrong register
The model writes for a different audience than the one reading the field. A Service Cloud field used exclusively by Tier 2 support agents shouldn't receive help text written for a marketing ops director. Yet without additional context, the model defaults to a general Salesforce audience. Specificity requires knowing who reads this page — which the model can infer from object relationships and profile access, but only if the system prompt asks it to.
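A minimal sketch of what that inference could look like, assuming the pipeline has per-profile field visibility available (the function name and thresholds are illustrative, not our production logic):

// Hypothetical: turn profile-level field visibility into an audience
// hint that gets injected into the generation prompt.
function audienceHint(access: Array<{ profile: string; visible: boolean }>): string {
  const readers = access.filter(a => a.visible).map(a => a.profile);
  if (readers.length === 0) return 'Hidden from all profiles: likely integration-only.';
  if (readers.length <= 2) return `Write for: ${readers.join(', ')}.`;
  return 'Broad visibility: write for a general Salesforce audience.';
}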
What good actually looks like
Across the descriptions with the highest admin approval rates — above 85% first-pass — we see five consistent properties:
- Operational specificity. The description explains what happens in the business when the field changes, not what the field represents abstractly. "Setting this to Closed-Won triggers the commission calculation flow and updates the forecast category on the parent Opportunity" is useful. "Indicates the stage of the opportunity" is not.
- Deprecation warnings. The highest-value thing a field description can contain, by admin feedback, is a note that a picklist value or workflow dependency has been deprecated. Admins open help text precisely because they distrust the data — tell them exactly which parts not to trust.
- Conditional logic pointers. "If this field is blank, the Lead routing rule falls through to the round-robin queue." This is information the admin could derive by reading the validation rules, but help text saves them 15 minutes.
- Format constraints stated explicitly. "MM/DD/YYYY format. Do not use ISO format — the legacy reporting pipeline parses this as a string." Admins are tired of learning format conventions the hard way.
- The audience is one person. Write for the person who will read this on a Tuesday with no documentation context. If that person works in billing, say "billing." If they're a new admin, say "new admin." (A combined example follows this list.)
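Put together, a description that hits several of these properties at once reads something like this hypothetical example (the flow name, values, and dates are invented):

// Hypothetical: operational specificity + deprecation + conditional logic
"Set nightly by the Renewal Health flow; do not edit manually. Billing
reads this for invoicing holds. Blank = never scored. 'Legacy' value
deprecated Q3 2024; use 'Churned'."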
"The descriptions that get rejected immediately are the ones that could have been written by someone who had never logged into the org. The ones that get approved feel like they were written by the previous admin."
Using the org as the context window
The single largest quality lever is how much of the org's metadata you feed into the model before asking it to generate a description for a single field. We ran controlled experiments on this.
- Condition A: the model receives field name + field type + object name. Approval rate: 34%.
- Condition B: everything in A, plus the field's picklist values (if any) and the object's relationship map. Approval rate: 58%.
- Condition C: everything in B, plus the validation rules that reference the field and the page layouts it appears on. Approval rate: 72%.
- Condition D: everything in C, plus the five most similar already-approved descriptions in the same org. Approval rate: 84%.
Condition D is the one we ship. "Most similar already-approved descriptions" acts as a few-shot style guide that is automatically specific to the org's own documentation culture. If the org writes formal, procedural help text, the model continues in that register. If the org writes terse, imperative help text, so does the model.
// Simplified context assembly (actual pipeline uses typed interfaces)
const context = {
  field: { name, type, picklistValues, object },                 // Condition A (picklists from B)
  relationships: objectRelationshipMap(field.object),            // Condition B
  validationRules: rulesReferencingField(field),                 // Condition C
  pageLayouts: layoutsContaining(field),                         // Condition C
  styleExamples: mostSimilarApproved(field, orgDescriptions, 5)  // Condition D
};
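The interesting call in that snippet is mostSimilarApproved. One plausible implementation, sketched here under the assumption that approved descriptions carry precomputed embeddings (and taking an embedding for the field rather than the field object itself), is a cosine-similarity top-k:

// Hypothetical sketch of mostSimilarApproved: cosine similarity over
// precomputed embeddings, returning the k nearest approved examples.
interface ApprovedExample { text: string; embedding: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function mostSimilarApproved(fieldEmbedding: number[], approved: ApprovedExample[], k: number): ApprovedExample[] {
  return [...approved]
    .sort((x, y) => cosine(fieldEmbedding, y.embedding) - cosine(fieldEmbedding, x.embedding))
    .slice(0, k);
}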
The cost of this richer context is roughly 2× the token budget per description. Our pricing model absorbs this because the alternative — admins spending time re-generating or hand-editing — costs more in admin hours than the token increase.
Designing the review queue for admin velocity
The review queue is the product. The model generates candidates; admins decide. If the queue is wrong, the whole system collapses into either rubber-stamping (approving without reading) or rejection fatigue (giving up on the tool).
We track time-to-decision per description. The median is 6 seconds for approvals, 22 seconds for edits, and 4 seconds for rejections. That tells you something important: admins know within seconds whether a description is right or wrong. The ones that take 22 seconds are the ones that need editing — which means they were close but not right. That's the quality tier we're optimizing for.
Our queue is sorted by a composite risk score: fields that appear on the most page layouts, are referenced in the most automation, and carry the most validation rules sort to the top. A new admin working their way through a 20,000-field backlog should be addressing the most consequential fields first. Starting with the long tail of rarely-used fields is how you burn out an admin before they see the value.
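A minimal sketch of such a composite score, with weights and field names that are illustrative assumptions rather than our shipped formula:

// Illustrative composite risk score: the weights are assumptions, not
// production values. Layout exposure is weighted highest because it is
// what admins and reps actually see.
interface FieldUsage {
  apiName: string;
  pageLayoutCount: number;     // layouts the field appears on
  automationRefCount: number;  // flows, triggers, processes referencing it
  validationRuleCount: number;
}

const riskScore = (f: FieldUsage): number =>
  3 * f.pageLayoutCount + 2 * f.automationRefCount + f.validationRuleCount;

// Sort descending so the most consequential fields surface first.
const sortQueue = (queue: FieldUsage[]): FieldUsage[] =>
  [...queue].sort((a, b) => riskScore(b) - riskScore(a));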
Batch review patterns
Admins who achieve the highest throughput use batch review: filter to a single object, review all descriptions for that object in one session, then move to the next. This works because the admin builds context about the object's data model while working through it — each review informs the next. Single-field review (jumping around the org) performs significantly worse on both speed and quality.
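In the product this shows up as a simple grouping step, something like the following sketch (the queue item shape is assumed):

// Hypothetical: group pending descriptions by object so an admin can
// review one object per session, as in the batch pattern above.
function groupByObject<T extends { objectName: string }>(queue: T[]): Map<string, T[]> {
  const groups = new Map<string, T[]>();
  for (const item of queue) {
    const bucket = groups.get(item.objectName) ?? [];
    bucket.push(item);
    groups.set(item.objectName, bucket);
  }
  return groups;
}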
The length problem: 255 characters is a UX constraint
Salesforce enforces a 255-character limit on field-level help text. This is simultaneously the most important fact about writing field descriptions and the most ignored one.
Our model generates at a target of 200–240 characters — leaving a buffer for editing. We do not permit descriptions that run up against the hard limit; our validation rejects anything over the 240-character target before it reaches the review queue. The reason: a description written to fill all 255 characters was written to fill space, not to inform. The discipline of the 255-character ceiling is a feature, not a bug.
// Validation before queue insertion
type ValidationResult = { ok: true } | { ok: false; reason: string };

function validateDescription(text: string): ValidationResult {
  // 255 is the hard platform limit; 240 preserves the editing buffer.
  if (text.length > 240) return { ok: false, reason: 'exceeds_target_length' };
  // Too short to be actionable ("See documentation" tells nobody anything).
  if (text.length < 40) return { ok: false, reason: 'too_short' };
  if (isCircularDefinition(text)) return { ok: false, reason: 'circular' };
  if (containsHedgeCascade(text)) return { ok: false, reason: 'hedge_cascade' };
  return { ok: true };
}
The 40-character minimum is equally important. "See documentation" is 18 characters and tells the admin nothing useful. "Update via integration only — do not edit manually." is 50 characters and prevents a data corruption incident.
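The two content checks in validateDescription are classifiers we won't reproduce here, but rough heuristics convey the idea. These are illustrative sketches only; note the circularity check also assumes access to the field label, which the snippet above doesn't pass:

// Illustrative heuristics, not the production classifiers.
const HEDGE_MARKERS = ['may be used', 'could be used', 'or other', 'various purposes'];

function containsHedgeCascade(text: string): boolean {
  const lower = text.toLowerCase();
  // One hedge is survivable; a cascade stacks several in one description.
  return HEDGE_MARKERS.filter(m => lower.includes(m)).length >= 2;
}

function isCircularDefinition(text: string, fieldLabel: string): boolean {
  // Remove the label's own words; if almost nothing substantive remains,
  // the description merely restates the field name.
  const labelWords = new Set(fieldLabel.toLowerCase().split(/\W+/));
  const substantive = text.toLowerCase().split(/\W+/)
    .filter(w => w.length > 3 && !labelWords.has(w));
  return substantive.length < 3;
}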
What you can do right now, before buying anything
If you run a Salesforce org and want better help text but aren't ready to evaluate a tool, here's a practical starting point.
First, export every field in your org that has no help text using a SOQL query against FieldDefinition. Then rank the export by the number of page layouts each field appears on; layout usage isn't available in the same query, so do that ranking in a spreadsheet or script after export (a sketch follows the query below). This is your triage list. Start from the top.
SELECT QualifiedApiName, Label, DataType
FROM FieldDefinition
WHERE EntityDefinition.QualifiedApiName = 'Opportunity'
AND InlineHelpText = NULL
ORDER BY LastModifiedDate DESC
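Ranking the export could be as simple as the following sketch, assuming you've built a layout-count map separately (for example, from a Metadata API export or a field-usage tool):

// Hypothetical post-export ranking: join the SOQL export with layout
// counts gathered separately, then sort descending into a triage list.
interface ExportedField { qualifiedApiName: string; label: string; dataType: string }

function triageList(fields: ExportedField[], layoutCounts: Map<string, number>): ExportedField[] {
  return [...fields].sort((a, b) =>
    (layoutCounts.get(b.qualifiedApiName) ?? 0) - (layoutCounts.get(a.qualifiedApiName) ?? 0)
  );
}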
Second, for each field you choose to document, write one sentence that a new admin could act on. If you can't write that sentence, the field's purpose is ambiguous — which is the real problem. The help text is downstream of org clarity, not upstream of it.
Third, set a style convention and write it down. "All help text is imperative. All help text mentions who uses this field. All help text notes related automation." Once the convention exists, maintaining it becomes mechanical — for humans or models.
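And once it's written down, the convention can be enforced mechanically. A sketch that encodes two of the three rules quoted above (the imperative-mood rule needs more than a regex, and these regexes are illustrative):

// Hypothetical convention lint: encode the written style guide as checks.
const conventions: Array<{ name: string; check: (t: string) => boolean }> = [
  { name: 'mentions_who_uses_it', check: t => /\b(billing|sales|support|service|admin)s?\b/i.test(t) },
  { name: 'notes_related_automation', check: t => /\b(flow|trigger|process|workflow|validation)\b/i.test(t) },
];

const conventionViolations = (text: string): string[] =>
  conventions.filter(c => !c.check(text)).map(c => c.name);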
If you want to see what OrgLens generates for your org's highest-priority undocumented fields, start a free audit below. We'll show you 25 examples before you commit to anything.