TOOL-008 — Product Adoption Pattern Recognizer
Tier 3 Specialist Tool · Stateless · Recognizes feature-by-feature adoption patterns for an account · A different shape from TOOL-004 — feature engagement, not consumption volume
Tier 3 · Tool
Specced · v31
Post-sales · Activation
Haiku
Purpose
Reads per-account feature engagement telemetry (which product features are used, by how many users, how often, in what sequence) and classifies the account's adoption pattern: deeply integrated / surface-only / siloed-by-team / declining / activating. Different from TOOL-004 (consumption volume forecasting) — this tool is about which features are used, not how much. Used by AGT-501 customer health (as activation dimension augment) and AGT-902 Account Brain for "is this account actually getting value?" diagnostic queries.
Closes the post-sales gap on feature-level adoption that the v26 evaluation flagged as missing. Today AGT-501 has a seat utilization dimension (how many seats are active) but no feature-engagement dimension (which features those seats are using). That distinction matters for predicting churn/expansion much earlier than seat metrics alone.
Input schema
{
  "account_id": "uuid",
  "feature_engagement_telemetry": {
    "trailing_window_days": 90,
    "feature_usage": [
      {
        "feature_id": "string",
        "feature_category": "core" | "advanced" | "integration" | "admin" | "experimental",
        "first_use_at": "ISO 8601 | null",
        "last_use_at": "ISO 8601 | null",
        "use_event_count_in_window": 0,
        "distinct_users_in_window": 0,
        "users_pct_of_active": 0.0  // % of account's active users using this feature
      }
    ]
  },
  "account_context": {
    "contract_start_date": "ISO 8601",
    "active_seats": 0,
    "licensed_seats": 0,
    "primary_use_case": "string",
    "ideal_feature_set_for_use_case": ["feature_id"]  // optional — expected features per ICP segment
  },
  "comparison_baseline": {  // optional — expected pattern per cohort
    "cohort_definition": "string",
    "cohort_typical_feature_breadth": 0,  // # features used by median similar account
    "cohort_typical_users_per_feature_pct": 0.0
  }
}
Output schema
{
  "tool_call_id": "uuid",
  "adoption_pattern": {
    "primary_pattern": "deeply_integrated" | "surface_only" | "siloed_by_team" | "declining" | "activating",
    "secondary_pattern": "string | null",  // when account fits two patterns
    "pattern_confidence": "high" | "medium" | "low"
  },
  "adoption_metrics": {
    "feature_breadth": 0,  // # distinct features used in window
    "feature_breadth_vs_cohort": 0.0,  // ratio vs baseline
    "core_feature_adoption_pct": 0.0,  // % of core features used
    "advanced_feature_adoption_pct": 0.0,  // % of advanced features used
    "integration_feature_count": 0,
    "feature_concentration_index": 0.0,  // 0-1, low = broad use, high = concentrated
    "newly_adopted_features_in_window": 0,
    "abandoned_features_in_window": 0  // features that lost users
  },
  "key_observations": [
    {
      "observation": "string",
      "supporting_metric": "string",
      "interpretation": "string"
    }
  ],
  "implications_for_caller": {
    "expansion_signal": "strong" | "moderate" | "none",
    "churn_signal": "strong" | "moderate" | "none",
    "intervention_recommendation": "string | null"
  },
  "ungrounded_assumptions": ["string"],
  "data_completeness": "high" | "medium" | "low",
  "tool_metadata": {
    "model": "claude-haiku-4-5",
    "input_tokens": 0,
    "output_tokens": 0,
    "cost_usd_estimate": 0.0,
    "latency_ms": 0
  }
}
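A minimal sketch of how the deterministic adoption_metrics fields above could be derived from the feature_usage input. The helper name and the normalized-Herfindahl definition of feature_concentration_index are assumptions for illustration, not specced values; field names mirror the schemas.

```python
from typing import Dict, List

def adoption_metrics(feature_usage: List[Dict]) -> Dict:
    # Field names mirror the input/output schemas above; the concentration
    # formula (normalized Herfindahl index over event shares) is assumed.
    used = [f for f in feature_usage if f["use_event_count_in_window"] > 0]
    breadth = len(used)

    def category_pct(cat: str) -> float:
        in_cat = [f for f in feature_usage if f["feature_category"] == cat]
        if not in_cat:
            return 0.0
        adopted = sum(1 for f in in_cat if f["use_event_count_in_window"] > 0)
        return 100.0 * adopted / len(in_cat)

    total_events = sum(f["use_event_count_in_window"] for f in used) or 1
    shares = [f["use_event_count_in_window"] / total_events for f in used]
    hhi = sum(s * s for s in shares)
    if breadth > 1:
        # Rescale so perfectly even use -> 0.0 and single-feature use -> 1.0.
        concentration = (hhi - 1 / breadth) / (1 - 1 / breadth)
    else:
        concentration = 1.0 if breadth == 1 else 0.0

    return {
        "feature_breadth": breadth,
        "core_feature_adoption_pct": category_pct("core"),
        "advanced_feature_adoption_pct": category_pct("advanced"),
        "integration_feature_count": sum(
            1 for f in used if f["feature_category"] == "integration"
        ),
        "feature_concentration_index": round(concentration, 3),
    }
```

newly_adopted_features_in_window and abandoned_features_in_window would come from comparing two windows, which this single-window sketch omits.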
Adoption pattern taxonomy
| Pattern | What it looks like | Implication |
|---|---|---|
| deeply_integrated | Broad feature breadth + high users-per-feature + multiple integration features active. Account uses the product as system-of-record for the use case. | Strong renewal signal. Strong expansion candidate. Low churn risk. |
| surface_only | Narrow breadth + only core features + low users-per-feature. Account uses the product but hasn't gone deep. | At-risk on renewal — competitor swap-out is easier. Activation play candidate — can grow into deeply_integrated. |
| siloed_by_team | Decent breadth but concentrated — one team uses many features, other teams use few or none. High feature_concentration_index. | Cross-team expansion signal. Risky on champion loss — if the active team's champion leaves, account drops to surface_only fast. |
| declining | Abandoned features > newly adopted features in window. Users-per-feature trending down. | Strong churn signal. Intervention candidate. AGT-503 expansion plays should be suppressed. |
| activating | Newly-onboarded account with rapidly expanding feature breadth + new users adopting. Trajectory positive. | Onboarding success signal. Activation investment paying off. AGT-501 health should reflect upward trend. |
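The taxonomy above can be read as an ordered decision list. A hedged heuristic sketch follows; every threshold is illustrative rather than specced, and in the real tool the LLM, not a rule table, produces the label.

```python
def classify_pattern(m: dict, days_since_contract_start: int) -> str:
    # Heuristic sketch of the taxonomy; all thresholds are assumptions.
    # Onboarding-aware: a new account is never labeled surface_only.
    if days_since_contract_start <= 60 and m["newly_adopted_features_in_window"] > 0:
        return "activating"
    if m["abandoned_features_in_window"] > m["newly_adopted_features_in_window"]:
        return "declining"
    # Broad relative to cohort but concentrated in one team.
    if m["feature_breadth_vs_cohort"] >= 1.0 and m["feature_concentration_index"] >= 0.6:
        return "siloed_by_team"
    # Broad, evenly used, with multiple integrations active.
    if m["feature_breadth_vs_cohort"] >= 1.0 and m["integration_feature_count"] >= 2:
        return "deeply_integrated"
    return "surface_only"
```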
Called by
| Caller | Invocation context |
|---|---|
| AGT-501 Customer Health Monitor | Daily batch on per-account basis. Output augments AGT-501's behavioral dimensions with a "feature engagement" dimension (separate from seat utilization). |
| AGT-902 Account Brain | "Is this account actually getting value?" queries. Brain integrates pattern + key_observations into BrainAnalysisLog narrative. |
| AGT-503 Expansion Trigger | Augments expansion qualification — deeply_integrated and siloed_by_team accounts get higher expansion-readiness signal; declining accounts get expansion suppressed. |
| AGT-601 Onboarding Orchestrator | For activating accounts — tracks onboarding success via feature breadth growth. |
Design principles
- Pattern recognition, not score generation. Output is a characterization, not a single number. AGT-501 owns score composition; this tool informs one dimension of that score.
- Cohort-relative when possible. Feature breadth means little in absolute terms ("they use 8 features") — meaningful only relative to expected breadth for similar accounts. The comparison_baseline input enables this; tool degrades gracefully when baseline absent.
- Numerical work in code, characterization in LLM. Same pattern as TOOL-004 and TOOL-005. Concentration index, breadth ratios, abandonment counts — all computed deterministically. LLM produces the pattern label + observations + interpretation.
- Distinguish "siloed_by_team" from "concentrated usage." A small account where 3 users use many features is not siloed — it's appropriately scoped. Siloed_by_team requires multi-team account context (active_seats > some threshold). Tool is honest when account size doesn't permit the distinction.
- Onboarding-aware. An account 30 days into contract with low feature breadth is not "surface_only" — it's "activating" or pre-pattern. Tool factors contract_start_date.
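One way to honor the siloed-vs-concentrated distinction above is a post-classification size gate. The function name, the 15-seat threshold, and the fallback behavior are all assumptions for illustration; the caveat surfaces through the ungrounded_assumptions output field.

```python
from typing import List, Tuple

def apply_size_gate(
    pattern: str,
    confidence: str,
    active_seats: int,
    min_seats_for_silo: int = 15,  # assumed threshold, not a specced value
) -> Tuple[str, str, List[str]]:
    # Below the seat threshold the tool cannot distinguish siloed_by_team
    # from ordinary concentrated usage, so it keeps the label but lowers
    # confidence and states the caveat for ungrounded_assumptions.
    if pattern == "siloed_by_team" and active_seats < min_seats_for_silo:
        caveat = ("account too small to distinguish siloed_by_team "
                  "from concentrated usage")
        return pattern, "low", [caveat]
    return pattern, confidence, []
```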
Cost ceiling
| Constraint | Value |
|---|---|
| Per-call input budget | 10K tokens (feature telemetry can be sparse for small accounts, dense for large) |
| Per-call output budget | 1.5K tokens |
| Default model | Haiku (numerical + classification) |
| Per-call cost estimate | ~$0.01 |
| Monthly cap | $300/mo (~15,000 calls) |
| Frequency | Daily batch from AGT-501 across active accounts. Higher volume than other tools but Haiku keeps cost contained. |
Eval criteria
| Criterion | Pass threshold |
|---|---|
| Schema compliance | 100% (hard) |
| Pattern classification accuracy | 20 retrospective accounts with known adoption profiles. ≥ 75% pattern match. |
| Churn signal calibration | 10 accounts that retrospectively churned. Share where TOOL-008 flagged churn_signal as moderate or strong at least 12 months before churn: ≥ 60%. |
| Onboarding-aware classification | 5 accounts in first 60 days of contract. % correctly labeled "activating" rather than "surface_only" — 100% (hard, eval-enforced). |
| Cohort-baseline degradation | 5 cases without baseline data. Tool produces output with data_completeness = 'medium' or 'low'; does not fabricate cohort comparisons. 100% (hard). |
| P95 latency | ≤ 2s |