Discovery & Research
Baseline inputs, stakeholder mapping, interview plan, and key assumptions
Before designing the copilot, we need to understand what exists, who cares, and what we're betting on. This section covers Grab's existing infrastructure, the 11 stakeholder groups whose buy-in determines success, a structured interview plan, and the 6 assumptions that must hold for the selective friction model to work.
Baseline Research
What Grab already has — public sources and engineering disclosures that anchor Workstream 3 design.
Workforce Routing
Grab Engineering Blog, 2020. 8 countries, skill-based + priority routing with safety bumping. Dynamic queue management.
In-House Chat Platform
Grab Engineering Blog, 2020. Persistent sessions, unified customer context, integrated CRM — agents see full history.
Automated FAQ Tier
Grab Engineering Blog, 2020. AI-powered first line handles repetitive inquiries (balance, status checks).
AI Merchant Assistant & Driver Companion
Anthropic + Grab, 2025. 25% negative sentiment reduction, +5.7ppt resolution rate for merchants. 250k+ drivers supported.
AI-Led Profit Target
The Star / Reuters, Feb 2026. Grab targets tripling profit by 2028 via AI and new services.
Policy & Control Landscape
Grab public policy pages. Cancellation policies, refund timelines (GrabFood/Mart/Pay), safety zero-tolerance.
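The routing pattern in the first card (skill-based matching with priority tiers and safety bumping) can be sketched as a priority queue. The tier names, priority values, and skill tags below are illustrative assumptions, not Grab's actual configuration.

```python
import heapq
import itertools
from dataclasses import dataclass

# Illustrative priority tiers -- the actual tiers are not public.
PRIORITY = {"safety": 0, "payment": 1, "general": 2}

@dataclass
class Ticket:
    id: str
    category: str   # e.g. "safety", "payment", "general"
    skills: set     # skills required to handle the ticket

class RoutingQueue:
    """Skill-based queue with priority ordering: safety tickets
    'bump' ahead of everything else (lower number = served first)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO within a tier

    def enqueue(self, ticket: Ticket):
        prio = PRIORITY.get(ticket.category, PRIORITY["general"])
        heapq.heappush(self._heap, (prio, next(self._seq), ticket))

    def next_for(self, agent_skills: set):
        """Pop the highest-priority ticket this agent is qualified for."""
        skipped, result = [], None
        while self._heap:
            prio, seq, t = heapq.heappop(self._heap)
            if t.skills <= agent_skills:
                result = t
                break
            skipped.append((prio, seq, t))
        for item in skipped:  # put back what this agent can't handle
            heapq.heappush(self._heap, item)
        return result

q = RoutingQueue()
q.enqueue(Ticket("T1", "general", {"grabfood"}))
q.enqueue(Ticket("T2", "safety", {"grabfood"}))
print(q.next_for({"grabfood"}).id)  # safety bumps ahead of the earlier ticket -> T2
```

The tuple `(priority, sequence, ticket)` gives strict priority ordering with FIFO inside each tier, which is the "dynamic queue with safety bumping" behavior the card describes.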
Key Design Implications
Stakeholder Analysis
11 stakeholder groups whose alignment determines whether the copilot works.
| Stakeholder group | Role |
|---|---|
| Support agents | Primary user |
| Supervisors | Queue reviewer |
| Customers | Indirect beneficiary |
| Policy / Compliance | Rule authority |
| Legal / Privacy | Data governance |
| Ops Managers | Market-level owner |
| QA / Training | Quality feedback |
| Finance | Budget authority |
| Engineering / ML | Build team |
| Workstream 1 & 2 | Cross-workstream |
| Merchants / Drivers | Indirect |

Interview Plan
Structured discovery across 7 stakeholder groups, 26–35 interviews total.
Markets: SG (pilot) + one expansion market (MY or ID)
Format: 30–45 min semi-structured; recorded with consent; notes shared within 48h
| Group | Interviews | Selection Criteria | Questions |
|---|---|---|---|
| Support agents | 12–15 | Mix of tenured (>6m) + newer; high-override + low-override; SG + expansion market | 12 |
| Supervisors | 4–6 | Currently handling escalation queues; mix of SG and expansion market | 10 |
| Ops managers | 3–4 | SG + expansion; at least one with cross-market visibility | 9 |
| Policy / compliance | 2–3 | Regional compliance lead + market-level policy owner | 9 |
| QA / training | 2–3 | QA leads covering pilot product lanes | 8 |
| Legal / privacy | 2 | Grab Legal (data privacy counsel) + market-level contact | 7 |
| Finance | 1–2 | Support cost owner or FP&A covering support operations | 5 |
Sample Discovery Questions
“When you override a suggestion, how do you decide what to do instead? What information do you wish you had?”
“If we asked you to tag overrides with a reason (<5 seconds), would that feel reasonable or like surveillance?”
“What signals tell you an override is genuinely risky vs. a reasonable judgment call?”
“If we flag 5% of overrides as high-risk for supervisor review, does that feel like too many, too few, or about right?”
“Does Grab's existing support data processing consent cover logging agent deviation data?”
“If the copilot reduces cost per case by X%, how would Finance want that attributed?”
Interview Output Artifacts
Assumptions Tracker
6 assumptions that must hold for the selective friction model to work. P0 is existential.
1. Better contextual suggestions raise agent adoption above 70%
2. (P0) Most policy-risk can be captured by <20 high-signal rules (>80% coverage). Existential: if this fails, the selective-friction model shifts to heavier ML classification.
3. Selective gates won't hurt resolution time (P50 regression <5%)
4. Post-resolution audits detect quality-risk with QA agreement >80%
5. Cost-aware routing keeps LLM cost <$0.01/case at 1M tickets
6. Policy packs maintain <48h freshness SLA
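The shape of assumption #2 can be sketched as a small registry of high-signal rules plus a coverage check against labeled overrides. The rule names, thresholds, and case fields below are hypothetical stand-ins, not Grab policy.

```python
# Hypothetical high-signal rules -- illustrating the <20-rule shape of
# assumption #2, not Grab's actual policy logic.
RULES = [
    ("refund_over_cap", lambda c: c.get("refund_amount", 0) > 50),
    ("safety_category", lambda c: c.get("category") == "safety"),
    ("repeat_override", lambda c: c.get("agent_overrides_7d", 0) >= 5),
]

def gate(case: dict) -> list:
    """Return the names of rules that fired. Any hit routes the override
    to supervisor review; an empty list lets it pass without friction."""
    return [name for name, pred in RULES if pred(case)]

def coverage(labeled_cases: list) -> float:
    """Share of known policy-risk overrides caught by at least one rule.
    Assumption #2 holds if this stays above 0.8 on pilot data."""
    risky = [c for c in labeled_cases if c["is_policy_risk"]]
    if not risky:
        return 1.0
    return sum(1 for c in risky if gate(c)) / len(risky)

print(gate({"refund_amount": 80, "category": "food"}))  # ['refund_over_cap']
print(gate({"refund_amount": 10}))                      # [] -> no friction
```

Validating the assumption then reduces to running `coverage` over a labeled sample of historical overrides and checking it clears the 80% bar.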
Dependency notes
- Assumption #2 is existential — validate first in Spike 2 (weeks 5–8)
- Assumptions #1 and #3 validated together in pilot — shared cohort and timeline
- Assumption #5 depends on Spike 1 (policy retrieval) and model routing implementation
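The cost-aware routing behind assumption #5 can be sketched as a two-tier model picker with a blended cost estimate. The prices, token counts, categories, and traffic mix below are placeholder assumptions; real figures would come from the model vendor and pilot traffic.

```python
# Hypothetical per-1k-token prices -- placeholders, not real vendor pricing.
MODELS = {
    "small": {"usd_per_1k_tokens": 0.0005},
    "large": {"usd_per_1k_tokens": 0.0150},
}

def pick_model(case: dict) -> str:
    """Route repetitive FAQ-style cases to the small model; keep the
    large model for ambiguous or policy-sensitive ones."""
    if case["category"] in {"balance", "status"} and not case["policy_sensitive"]:
        return "small"
    return "large"

def blended_cost_per_case(cases: list, avg_tokens: int = 1500) -> float:
    """Average LLM cost per case across a traffic mix, assuming a flat
    average token count per case."""
    total = 0.0
    for c in cases:
        price = MODELS[pick_model(c)]["usd_per_1k_tokens"]
        total += price * avg_tokens / 1000
    return total / len(cases)

# Illustrative 90% simple / 10% policy-sensitive traffic mix:
mix = [{"category": "balance", "policy_sensitive": False}] * 90 \
    + [{"category": "refund", "policy_sensitive": True}] * 10
print(blended_cost_per_case(mix))  # stays under the $0.01/case target
```

The point of the sketch: the $0.01/case target is only reachable if the routing split holds, so the pilot needs to measure what fraction of real traffic actually qualifies for the small model.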