TOOL-009 — Activation / Time-to-Value Analyzer

Tier 3 Specialist Tool · Stateless · Measures time-to-value milestones longitudinally per account · Distinct from TOOL-008 (current-state snapshot): this is timing analysis
Tier 3 · Tool Specced · v32 Post-sales · Activation Haiku
Purpose

Measures longitudinal time-to-value (TTV) milestones for an account against expected timing benchmarks. Where TOOL-008 classifies the current adoption pattern (a snapshot), TOOL-009 measures timing: did this account hit first value within X days, did integration light up within Y days, did multi-team adoption emerge within Z days? Output is a per-milestone delta vs. benchmark plus projected achievement of the remaining milestones based on trajectory.

Closes the timing side of the post-sales activation gap. TOOL-008 answers "what does this account's adoption look like today?" TOOL-009 answers "is this account on the activation trajectory we expected, and if not, where is it slipping?"
Input schema
{
  "account_id": "uuid",
  "contract_start_date": "ISO 8601",
  "milestone_definitions": [              // configured per ICP segment
    {
      "milestone_id": "string",
      "milestone_name": "string",         // "first_api_call" / "first_workflow_complete" /
                                          // "5_users_active" / "integration_live"
      "expected_days_to_achieve": 0,      // benchmark for this segment
      "achievement_signal": "string"      // which event/state indicates achievement
    }
  ],
  "achievement_history": [                // observed achievements
    {
      "milestone_id": "string",
      "achieved_at": "ISO 8601 | null",   // null if not yet achieved
      "elapsed_days_at_achievement": 0
    }
  ],
  "activity_signals_remaining_milestones": {
    "trailing_30d_activity_score": 0.0,   // 0-1
    "feature_adoption_velocity": 0.0,     // features added per week, trailing 30d
    "user_growth_rate": 0.0
  },
  "comparison_cohort": {                  // similar accounts at similar contract age
    "cohort_definition": "string",
    "cohort_p50_days_per_milestone": [
      { "milestone_id": "string", "p50_days": 0 }
    ]
  }
}
Output schema
{
  "tool_call_id": "uuid",
  "milestone_status": [
    {
      "milestone_id": "string",
      "status": "achieved_on_time" | "achieved_late" | "in_progress_on_track" |
                "in_progress_at_risk" | "in_progress_off_track" | "stalled",
      "delta_days_vs_benchmark": 0,       // negative = early; positive = late
      "delta_pct_vs_cohort_p50": 0.0
    }
  ],
  "overall_ttv_pattern": "ahead" | "on_track" | "lagging" | "stalled",
  "projected_remaining_milestones": [
    {
      "milestone_id": "string",
      "projected_days_to_achieve": 0,
      "projected_vs_benchmark": "ahead" | "on_target" | "late" | "very_late",
      "projection_confidence": "high" | "medium" | "low"
    }
  ],
  "key_observations": [
    {
      "observation": "string",
      "supporting_metric": "string",
      "implication": "string"
    }
  ],
  "intervention_recommendations": [
    {
      "trigger": "string",                // what observation triggered this
      "intervention_type": "csm_outreach" | "ie_check_in" |
                           "workflow_acceleration" | "no_action_needed",
      "urgency": "high" | "medium" | "low"
    }
  ],
  "ungrounded_assumptions": ["string"],
  "data_completeness": "high" | "medium" | "low",
  "tool_metadata": {
    "model": "claude-haiku-4-5",
    "input_tokens": 0,
    "output_tokens": 0,
    "cost_usd_estimate": 0.0,
    "latency_ms": 0
  }
}
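The per-milestone status could be derived deterministically along these lines. This is a hedged sketch: the function name and the 1.0×/1.5× benchmark thresholds are illustrative, not part of the spec, and "stalled" detection (which would draw on the activity signals) is omitted.

```python
def classify_milestone(expected_days, achieved_elapsed_days, elapsed_days_now,
                       at_risk_ratio=1.0, off_track_ratio=1.5):
    """Return (status, delta_days_vs_benchmark or None).

    achieved_elapsed_days: days from contract start to achievement, or None
    elapsed_days_now: current contract age in days
    """
    if achieved_elapsed_days is not None:
        delta = achieved_elapsed_days - expected_days  # negative = early
        status = "achieved_on_time" if delta <= 0 else "achieved_late"
        return status, delta
    # Not yet achieved: compare contract age against benchmark thresholds.
    # ("stalled" would additionally require weak/zero activity signals.)
    if elapsed_days_now <= expected_days * at_risk_ratio:
        return "in_progress_on_track", None
    if elapsed_days_now <= expected_days * off_track_ratio:
        return "in_progress_at_risk", None
    return "in_progress_off_track", None
```

The LLM layer would consume these computed statuses and deltas when producing observations and recommendations; it would not recompute the arithmetic.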
Called by
Caller | Invocation context
AGT-601 Onboarding Orchestrator | Weekly batch on accounts in the onboarding window. Output drives onboarding interventions.
AGT-501 Customer Health | For accounts past onboarding but still ramping (months 3–9 post-contract). Output augments the activation dimension, distinct from TOOL-008.
AGT-902 Account Brain | For "is this account activating on time?" queries during meeting prep or hand-off.
AGT-704 Business Review Orchestrator | Aggregates cohort-level TTV trends for the QBR retention-health section.
Design principles
  1. Milestones are configured, not invented. The tool reads milestone definitions from input — it does not decide what TTV milestones matter for the segment. RevOps + Product define the milestones; tool measures against them.
  2. Numerical comparison + LLM characterization. Same hybrid pattern as TOOL-004 / TOOL-005 / TOOL-008. Time-delta math runs deterministically; LLM produces the pattern label, observations, and intervention recommendations.
  3. Cohort-relative when baseline available. Absolute timing thresholds matter less than relative-to-cohort timing. A 30-day delay from benchmark is fine if the cohort p50 is at 35; same 30-day delay is concerning if cohort p50 is at 14.
  4. Honest about projection confidence. Projecting future milestone achievement from current activity signals is inherently uncertain. Tool emits projections only when activity signals are strong enough; otherwise marks confidence as low or omits the projection.
  5. Distinct from TOOL-008. TOOL-008 is descriptive (current state of feature adoption); TOOL-009 is longitudinal (timing of milestone achievement). Both can be called for the same account; they answer different questions.
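Principles 2 and 4 above can be sketched together: a deterministic trajectory projection that refuses to project when activity signals are too weak. The scaling rule and all thresholds here are hypothetical illustrations, not spec requirements.

```python
def project_remaining(expected_days, elapsed_days_now, activity_score):
    """Return (projected_days_to_achieve, projection_confidence).

    activity_score: trailing_30d_activity_score in [0, 1]
    """
    if activity_score < 0.2:
        return None, "low"  # too weak to project; emit low confidence or omit
    remaining = max(expected_days - elapsed_days_now, 0)
    # Scale the remaining benchmark window by inverse activity: a half-speed
    # account is assumed to take twice as long over what remains.
    projected = elapsed_days_now + remaining / activity_score
    confidence = "high" if activity_score >= 0.7 else "medium"
    return round(projected), confidence
```

An account 10 days into a 30-day benchmark with an activity score of 0.5 would project to day 50 at medium confidence; the same account with a score of 0.1 would get no projection at all, per principle 4.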
Cost ceiling
Constraint | Value
Per-call input budget | 8K tokens
Per-call output budget | 1.5K tokens
Default model | Haiku
Per-call cost | ~$0.01
Monthly cap | $200/mo (~10,000 calls)
Frequency | Moderate: AGT-601 weekly batch on onboarding accounts; AGT-501 monthly batch on ramping accounts.
Eval criteria
Criterion | Pass threshold
Schema compliance | 100% (hard)
Milestone status classification accuracy | 15 retrospective accounts; ≥ 80% of milestone statuses match retrospective ground truth
Projection calibration | For projected milestones with known retrospective achievement, ≥ 70% of actual achievements fall within the projected range
Low-confidence honesty | For 5 cases with deliberately weak activity signals, tool returns projection_confidence = 'low' (not 'high'). 100% (hard).
Cohort-baseline degradation | For 5 cases without a cohort baseline, tool returns absolute deltas only and marks data_completeness = 'medium' or 'low'. 100% (hard).
P95 latency | ≤ 2s
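The hard "low-confidence honesty" gate is simple to mechanize in an eval harness. A minimal sketch, assuming the output schema above (the function name and the shape of `case_outputs` are hypothetical):

```python
def low_confidence_honesty_pass(case_outputs):
    """case_outputs: tool outputs for the 5 deliberately weak-signal cases.

    Passes only if every emitted projection carries confidence 'low'.
    """
    return all(
        proj["projection_confidence"] == "low"
        for out in case_outputs
        for proj in out["projected_remaining_milestones"]
    )
```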