You've been handed the keys to a Salesforce org. Maybe it's post-acquisition. Maybe the previous admin left. Maybe you're a consultant and this is Tuesday. Either way, you need to know what you're dealing with before you touch anything — because orgs that have been in production for 8+ years tend to contain surprises that aren't in the documentation, which often doesn't exist.
This is the checklist I run on every inherited org. It takes 2–4 hours to complete manually, or about 20 minutes if you use OrgLens's audit scan. Either way, do it before deploying anything.
1. Field coverage: how much is documented?
Pull the full field inventory from the Metadata API and count what percentage of fields have non-empty help text. This single number tells you more about the org's governance health than anything else.
SELECT COUNT()
FROM FieldDefinition
WHERE EntityDefinition.QualifiedApiName IN ('Account','Contact','Opportunity','Lead')
AND InlineHelpText != null
Run the same query without the InlineHelpText filter to get the total field count, then divide to get the coverage percentage.
Anything above 60% field coverage is decent. 30–60% is typical. Below 30%, treat the org as undocumented and plan accordingly. The specific objects to check first: Account, Contact, Opportunity, Lead, and any custom objects that appear in more than 3 page layouts.
What it means when coverage is low: admins are operating on institutional memory that exists only in the heads of people who may no longer work there. Any process that depends on that knowledge is a single departure away from failure.
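If you would rather compute the percentage offline, a minimal sketch over an exported field inventory (the objects, field names, and help text below are invented for illustration):

```python
# Minimal sketch: help-text coverage over an exported field inventory.
# These rows stand in for data pulled via the Metadata API; real exports
# carry many more columns.
fields = [
    {"object": "Account", "field": "Tier__c", "help_text": "Account tier set by sales ops."},
    {"object": "Account", "field": "Region__c", "help_text": ""},
    {"object": "Opportunity", "field": "Source__c", "help_text": None},
]

def help_text_coverage(rows):
    """Fraction of fields whose help text is non-empty."""
    if not rows:
        return 0.0
    documented = sum(1 for r in rows if (r["help_text"] or "").strip())
    return documented / len(rows)

coverage = help_text_coverage(fields)
print(f"{coverage:.0%} of sampled fields documented")
```

Treat empty strings and nulls the same way, as above; exports are inconsistent about which one they emit.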
2. Deprecated picklist values in active use
Every org accumulates picklist values that were supposed to be retired but weren't, because retiring a picklist value means updating every record that still uses it. The result: picklist values that appear on page layouts, drive automation, and get selected by reps, but are supposed to mean something different now, or nothing at all.
Check for picklist values that contain any of these strings: "DO NOT USE", "OLD", "DEPRECATED", "LEGACY", "TEMP", "TEST", "DELETE". You will find them in production orgs with thousands of active records using those values.
What it means when you find them: reports and dashboards that filter on these values are counting incorrectly. Automation that triggers on these values may be running unexpected paths. Fix the underlying records before removing the value.
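The marker scan is easy to script once you have a picklist export. A sketch with illustrative values; note that plain substring matching will false-positive on words like "Oldham" or "Contest", so treat hits as a review list, not a verdict:

```python
# Sketch: flag picklist values carrying "retired" markers.
# Values are invented; in practice they come from a Metadata API export.
MARKERS = ("DO NOT USE", "OLD", "DEPRECATED", "LEGACY", "TEMP", "TEST", "DELETE")

def flag_deprecated(values):
    # Case-insensitive substring match; expect some false positives.
    return [v for v in values if any(m in v.upper() for m in MARKERS)]

picklist = ["Prospect", "Customer - OLD", "zz_DO NOT USE_Partner", "Churned"]
flagged = flag_deprecated(picklist)
print(flagged)
```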
3. Zombie automation
Workflow rules, Process Builder processes, and flows that are active but haven't triggered in 90+ days. These aren't safe to delete (they might be seasonal or fire only on edge cases), but they need to be audited. The bigger danger is the inverse: automation that should be triggering but isn't, because a field name changed or a criteria condition is no longer ever met.
Check: active Process Builder processes on objects where the last trigger date (from the Process Analytics page) is more than 90 days ago. Check: active record-triggered flows where the triggering field's picklist values have changed since the flow was last modified.
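Last-run dates for older automation are often scattered across whatever logs the org keeps, and frequently incomplete. Once you've assembled them, though, the 90-day filter is trivial. A sketch with invented names, where a missing date is treated as stale:

```python
from datetime import date, timedelta

# Sketch: surface automation idle for 90+ days, given (name, last_triggered)
# pairs assembled from run logs. Names and dates are illustrative.
def zombies(automations, today, idle_days=90):
    cutoff = today - timedelta(days=idle_days)
    return [a["name"] for a in automations
            if a["last_triggered"] is None or a["last_triggered"] < cutoff]

auto = [
    {"name": "Renewal reminder", "last_triggered": date(2024, 1, 5)},
    {"name": "Lead routing", "last_triggered": date(2024, 5, 30)},
]
stale = zombies(auto, today=date(2024, 6, 10))
```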
4. Profile and permission set bloat
Count the number of profiles and permission sets. More than 20 profiles is a management problem. More than 100 permission sets without clear naming conventions is an audit problem. The specific risk: profiles that grant modify-all on sensitive objects, created during setup and never restricted.
SELECT Profile.Name, COUNT(Id) UserCount
FROM User
WHERE IsActive = true
GROUP BY Profile.Name
ORDER BY COUNT(Id) DESC
Any profile with modify-all data access and fewer than 3 active users should be reviewed immediately. Often these are legacy admin profiles from the initial implementation.
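Once you have user counts from the query above plus a list of profiles holding Modify All Data (pulled from profile metadata), the cross-reference is a one-liner. A sketch with made-up profile names:

```python
# Sketch: flag profiles that hold Modify All Data but have few active users,
# a common signature of leftover implementation-era admin profiles.
def risky_profiles(user_counts, modify_all, max_users=3):
    """Profiles with Modify All Data and fewer than max_users active users."""
    return sorted(p for p in modify_all if user_counts.get(p, 0) < max_users)

counts = {"System Administrator": 4, "Legacy Admin": 1, "Sales User": 120}
modify_all_profiles = {"System Administrator", "Legacy Admin"}
review = risky_profiles(counts, modify_all_profiles)
```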
"We found a profile called 'Integration User' with modify-all on every object and 47 active users assigned to it. It had been used as a catch-all for onboarding new hires. Nobody had noticed because the profile name sounded technical."
5. Shadow fields
Fields that exist on an object but don't appear on any page layout, aren't referenced in any automation or validation rule, and have zero non-null values in the last 12 months. These are truly orphaned — they might be safe to delete, or they might be used by an integration that queries them directly without going through a page layout.
Before deleting any shadow field, check whether it appears in any Apex class, Visualforce page, or Connected App query log. Fields that are queried by integrations don't appear on page layouts but are very much in use.
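The Apex-source part of that check is a grep with word boundaries. A sketch over in-memory strings, assuming you've retrieved the classes locally; class and field names are invented:

```python
import re

# Sketch: before deleting a candidate shadow field, search retrieved Apex
# source for its API name. Word boundaries avoid matching longer names.
def references(field_api_name, sources):
    pattern = re.compile(rf"\b{re.escape(field_api_name)}\b", re.IGNORECASE)
    return [name for name, body in sources.items() if pattern.search(body)]

apex = {
    "AccountSync.cls": "acct.Legacy_Score__c = payload.score;",
    "LeadRouter.cls": "// routing logic only",
}
hits = references("Legacy_Score__c", apex)
```

Run the same search over Visualforce pages and any integration query logs you can export before concluding a field is truly orphaned.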
6. Undocumented validation rules
Validation rules that have an error message of "Invalid." or "Error." or the field API name itself. These block saves without telling the user why. In inherited orgs, they're often the source of tickets that say "Salesforce won't let me save the record" with no further context.
Every validation rule must have an error message that explains the constraint in plain language. "Discount percentage cannot exceed 40% without VP approval. Contact your manager to request an exception." is useful. "DISCOUNT_TOO_HIGH" is not.
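This check scripts well too. The junk-message list and minimum word count below are assumptions to tune, not a standard:

```python
# Sketch: flag validation rules whose error message is too terse to help the
# user. Rule names and messages are illustrative.
JUNK = {"invalid.", "error.", "required."}

def weak_messages(rules, min_words=5):
    flagged = []
    for r in rules:
        msg = r["error_message"].strip()
        if msg.lower() in JUNK or len(msg.split()) < min_words:
            flagged.append(r["name"])
    return flagged

rules = [
    {"name": "Discount_Cap",
     "error_message": "Discount percentage cannot exceed 40% without VP approval."},
    {"name": "Opp_Stage_Check", "error_message": "Invalid."},
]
bad = weak_messages(rules)
```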
7. Formula field chain depth
A formula field that references another formula field that references another formula field — and so on. Deep chains degrade performance on record load and can cause timeout errors on reports. Salesforce allows up to 10 levels of cross-object formula references, but performance starts degrading visibly at 4–5 levels.
Any formula field chain deeper than 3 levels should be documented and reviewed. Not necessarily flattened — sometimes the chain represents a legitimate business logic hierarchy — but understood.
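Depth is a simple recursion once you've built a dependency map of which formula fields reference which other formula fields. A sketch with invented field names (the cycle guard is defensive; the platform itself rejects circular formula references):

```python
# Sketch: formula chain depth over a dependency map you've assembled from
# formula definitions. Field names are illustrative.
def chain_depth(field, deps, seen=frozenset()):
    """1 for a formula referencing no other formulas; +1 per hop."""
    if field in seen:                      # defensive cycle guard
        return float("inf")
    children = deps.get(field, [])
    if not children:
        return 1
    return 1 + max(chain_depth(c, deps, seen | {field}) for c in children)

deps = {
    "Margin__c": ["Net_Revenue__c"],
    "Net_Revenue__c": ["Gross_Revenue__c"],
    "Gross_Revenue__c": [],
}
depth = chain_depth("Margin__c", deps)  # a 3-level chain
```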
8. API usage ceiling proximity
Check your current API usage against your daily limit in System Overview. Orgs running at above 70% of their daily API limit have no headroom for integration failures, data loads, or tooling like OrgLens. If you're close to the ceiling, identify the top consumers before adding any new integrations.
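If you'd rather script this than eyeball System Overview, the REST limits resource (GET /services/data/vXX.0/limits) reports DailyApiRequests with Max and Remaining values. A sketch over a mocked payload:

```python
# Sketch: usage ratio against the daily API cap, computed from the JSON the
# REST /limits resource returns. The payload here is a mocked example.
limits = {"DailyApiRequests": {"Max": 15000, "Remaining": 4000}}

def api_usage_ratio(payload):
    daily = payload["DailyApiRequests"]
    return (daily["Max"] - daily["Remaining"]) / daily["Max"]

ratio = api_usage_ratio(limits)
print(f"{ratio:.0%} of daily API limit used")  # 11000 of 15000
```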
9. Governor limit exposure in Apex
Any Apex trigger or class that performs SOQL queries inside a loop is a future governor limit exception. Search for SOQL patterns inside for loops:
// Anti-pattern to search for (simplified)
for (Account acct : accountList) {
    Contact[] cs = [SELECT Id FROM Contact WHERE AccountId = :acct.Id];
    // one query per iteration: a governor limit violation waiting to happen
}
These won't fail in testing with 5 records. They'll fail in production at 201 records during a data load. They always appear at the worst possible time.
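A real check needs an Apex parser, but a crude brace-counting heuristic is enough to build a review list. A sketch; expect false positives (it will also flag SOQL in a for-each header, which is actually the safe bulkified idiom, and it ignores single-statement loop bodies), so treat hits as candidates, not verdicts:

```python
import re

# Crude heuristic sketch: flag bracketed SOQL that appears inside a
# brace-delimited for-loop body. Not a parser.
def soql_in_loop(apex_source):
    loop_depths = []   # brace depths at which a for-loop began
    depth = 0
    hits = []
    for lineno, line in enumerate(apex_source.splitlines(), 1):
        if re.search(r"\bfor\s*\(", line):
            loop_depths.append(depth)
        depth += line.count("{") - line.count("}")
        while loop_depths and depth <= loop_depths[-1]:
            loop_depths.pop()              # loop body closed
        if loop_depths and re.search(r"\[\s*SELECT\b", line, re.IGNORECASE):
            hits.append(lineno)
    return hits

src = """for (Account a : accts) {
    Contact c = [SELECT Id FROM Contact WHERE AccountId = :a.Id LIMIT 1];
}
List<Lead> safe = [SELECT Id FROM Lead LIMIT 10];"""
flagged = soql_in_loop(src)  # flags the query inside the loop only
```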
10. Data quality benchmarks
Run a sample count of required fields with null values on your top objects. Required fields can't be null in new records, but in any org that's been migrated or bulk-loaded, they often are in legacy records. These nulls cause validation rule failures, automation errors, and reporting gaps that are blamed on "Salesforce being unreliable."
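A quick way to run the sample, assuming you've pulled records into dicts; the object and field below are invented:

```python
# Sketch: null rate for a schema-required field across sampled records.
# Counts empty strings as null, since bulk loads often write them.
def null_rate(records, field):
    if not records:
        return 0.0
    return sum(1 for r in records if r.get(field) in (None, "")) / len(records)

accounts = [
    {"Name": "Acme", "Region__c": "EMEA"},
    {"Name": "Globex", "Region__c": None},
    {"Name": "Initech", "Region__c": ""},
]
rate = null_rate(accounts, "Region__c")  # 2 of 3 records are effectively null
```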
11. Installed package inventory
List every installed managed package, its version, and when it was last updated. Packages that haven't been updated in 2+ years are accumulating unpatched vulnerabilities and may be using deprecated APIs. Packages from vendors that no longer exist need to be migrated.
Compare your installed package list against the AppExchange retirement announcements. More vendors than you'd expect stop supporting packages without explicitly telling their customers.
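The staleness filter, given an inventory you've assembled from the Installed Packages page; package names and dates are illustrative:

```python
from datetime import date

# Sketch: flag installed packages whose last update is roughly 2+ years old.
def stale_packages(packages, today, max_age_days=730):
    return [p["name"] for p in packages
            if (today - p["last_updated"]).days >= max_age_days]

installed = [
    {"name": "QuoteGen", "last_updated": date(2024, 2, 1)},
    {"name": "LegacySurveyTool", "last_updated": date(2020, 7, 15)},
]
stale = stale_packages(installed, today=date(2024, 6, 10))
```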
12. User adoption signals
Query the login history for the past 90 days. What percentage of active users have logged in at least once? Below 60% active login rate means users have found workarounds — spreadsheets, email threads, other tools. Those workarounds contain data that isn't in Salesforce, which means your Salesforce data is incomplete by definition.
SELECT COUNT_DISTINCT(UserId) FROM LoginHistory
WHERE LoginTime = LAST_N_DAYS:90
Divide by your active user count. The result is your 90-day active rate. Anything below 0.6 warrants a user interview to understand what people are doing instead of using Salesforce.
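Scripted, with illustrative numbers:

```python
# Sketch: 90-day active rate from the distinct-login count and the active
# user count. The figures below are invented.
def active_rate(distinct_logins_90d, active_users):
    return distinct_logins_90d / active_users if active_users else 0.0

rate = active_rate(distinct_logins_90d=84, active_users=140)
print(f"{rate:.0%} 90-day active rate")  # 84 of 140
```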
If you run through all 12 of these checks manually, set aside a full working day. The analysis isn't hard, but the data collection across Metadata API, SOQL, and Apex source is repetitive.
OrgLens runs checks 1, 2, 5, 6, 7, and 9 automatically at scan time. Checks 3, 4, 8, 10, 11, and 12 are available as report exports on the Enterprise plan. Start with a free scan to see what surfaces in the first 12 minutes.