Research Operations Platform

Research that compounds.

Most AI starts with a blank prompt. Qori starts with your stakeholder interviews, desk research, and discovery synthesis. Every document builds on the last. Nothing invented. Everything traced.

Start a study →
See how it works
Research Brief
VA Mobile App Claims
Navigation Study
4.2 ★ · Generated Feb 18, 2025 · 7 sources

Research Objectives

This study investigates how Veterans navigate claims status information in the VA Health and Benefits mobile app, focusing on three user journeys identified through convergent evidence from desk research and stakeholder interviews.

Background

Analysis of 487 help desk tickets from Q3 2024 revealed that 38% of all support contacts relate to claims navigation. App analytics show 45% abandonment on the claims status page, with average time-to-find exceeding 47 seconds for a task that benchmarks at 12 seconds.

“Veterans are calling us because they can't find their claim. It's right there in the app, but the way we've structured the navigation — it's hidden.”

— Product Manager, Claims and Appeals

Hypotheses

Veterans cannot locate claims status because the hamburger menu hides critical services behind 2+ taps that don't match their mental model of service organization. We will validate this through task-based observation across 6 participants.

Coaching
Quality
↳ Research Objectives
Strong connection between questions and evidence. Each objective traces back to desk research or stakeholder data.
NNg — Research Planning
Methodology
↳ Hypotheses
All three hypotheses are testable and specific — they name who, what, and why. This is what good hypotheses look like.
Portigal — Interviewing Users
One Finding, Five Transformations

Watch evidence flow through the chain

A barrier from desk research becomes a probe, an observation, an issue, and finally a ticket in your backlog.

Step 1 of 5 · Discovery: Barrier
Hamburger menu hides critical services
Found in 3 of 4 desk research sources

Generic AI generates text. Qori builds evidence chains.

How Qori Works

Six steps. One evidence chain. Every document feeds the next.

Qori doesn't generate documents from thin air. It builds them from YOUR evidence — each step extracting variables that compound into the next document. Nothing is invented.

Step 1
Kickoff
Four questions about your study. What you're investigating, who needs it, what you already know, and how you want to start.
~60 seconds
Step 2
Discovery
Upload past research, help desk data, analytics exports. Add stakeholders — Qori generates interview guides from your evidence. Capture notes. Synthesize everything into one document.
Days to weeks — your pace
Step 3
Study Design
Qori surfaces journeys, barriers, and hypotheses from your discovery — pre-filled, not blank. You select, edit, and refine. A coaching layer explains why each recommendation was made.
~10 minutes
Step 4
Planning
Research brief, discussion guide, screener, and research plan — generated from everything upstream. Methodology-compliant. Every claim evidence-traced.
~4 minutes to generate all four
Step 5
Fieldwork
Capture session notes. Tag observations to barriers. Qori extracts atomic findings from every session automatically.
Step 6
Analysis & Action
Usability issues with severity ratings. Journey maps. The research readout writes itself. Push findings to GitHub as dev-ready issues with full evidence chains.
The Difference

What changes when research compounds.

Without Qori

Copy the last study's guide

Find-and-replace product names. Hope the methodology still applies. Miss the nuance.

With Qori

Guide from YOUR barriers

Discussion guide generated from the specific barriers your discovery identified. Every probe traces to evidence.

Without Qori

Synthesis in a slide deck

Three weeks of stakeholder interviews reduced to bullet points. Context lost. Nuance flattened.

With Qori

Synthesis that remembers

Every finding linked to its source. Conflicting viewpoints reconciled. Patterns identified across all evidence.

Without Qori

Findings in a PDF

Research report emailed to stakeholders. Read once. Filed. Development continues unchanged.

With Qori

Findings in the backlog

Each usability issue becomes a GitHub issue with severity, evidence, affected journey, and recommended fix. Research reaches the code.
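
As a sketch of what a dev-ready issue could look like: the code below shapes a finding into a GitHub REST API issue payload. The field names (`severity`, `journey`, `evidence`, `fix`) are our illustrative assumptions, not Qori's actual schema.

```python
# Hypothetical sketch: turning a severity-rated usability finding into a
# GitHub issue payload. Field names are illustrative, not Qori's schema.

def to_github_issue(finding: dict) -> dict:
    """Render a usability finding as a GitHub "create issue" payload."""
    body = "\n".join([
        f"**Severity:** {finding['severity']}",
        f"**Affected journey:** {finding['journey']}",
        f"**Evidence chain:** {' -> '.join(finding['evidence'])}",
        f"**Recommended fix:** {finding['fix']}",
    ])
    return {
        "title": f"[{finding['severity']}] {finding['summary']}",
        "body": body,
        "labels": ["research", f"severity:{finding['severity'].lower()}"],
    }

issue = to_github_issue({
    "summary": "Claims status hidden behind hamburger menu",
    "severity": "High",
    "journey": "J-1 Claims status",
    "evidence": ["B-1 barrier", "P-3 probe", "O-7 observation"],
    "fix": "Promote claims status to the tab bar",
})
```

The payload keys (`title`, `body`, `labels`) match what GitHub's issues endpoint accepts, so a dict like this could be POSTed as-is.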

The Honest Comparison

Same prompt. Different results.

We asked both tools to generate a discussion guide for a VA Mobile App navigation study. One had evidence. One had a prompt.

🤖
Generic AI
From a prompt
Discussion Guide Probe:

“Can you walk me through how you typically navigate the app? What features do you use most often?”

Generic question that applies to any app
No connection to known barriers
Won't reveal specific navigation issues
No traceability to research objectives
Recommended sample size:

“5-8 participants is generally recommended for usability testing.”

Generic advice, not calculated for your study
Qori
From your evidence
Discussion Guide Probe:

“Show me how you'd check the status of your disability claim. Talk me through what you're looking for as you go.”

↳ Targets: Barrier B-1 (hamburger menu), Journey J-1 (claims status), Hypothesis H-1 (mental model mismatch)
Specific to YOUR identified barrier
Task-based, not opinion-based
Traceable to desk research findings
Will validate or invalidate hypothesis
Calculated sample size:

“6 participants: a 60-min session ÷ 15 min per journey covers all 3 journeys per participant; 6 participants yields 6 observations per journey, sufficient coverage for saturation.”

Derived from your session format and journey count
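
One way to read that calculation, as a hedged sketch: the function and variable names, and the saturation target of 6 observations per journey, are our assumptions, not Qori's documented formula.

```python
import math

# Illustrative reading of the sample-size arithmetic above. The
# observations-per-journey saturation target is an assumption.

def sample_size(journeys: int, session_min: int, min_per_journey: int,
                obs_per_journey: int = 6) -> int:
    journeys_per_session = session_min // min_per_journey  # 60 // 15 = 4
    covered = min(journeys, journeys_per_session)          # all 3 fit per session
    # participants needed so each journey is seen obs_per_journey times
    return math.ceil(obs_per_journey * journeys / covered)

print(sample_size(journeys=3, session_min=60, min_per_journey=15))  # -> 6
```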

Generic AI generates plausible research documents. Qori generates evidence-based research documents. The difference shows up in the quality of your findings.

The Evidence Chain

Every document feeds the next. Nothing starts blank.

Qori extracts variables from each document and transforms them for the next. A barrier in your synthesis becomes a probe in your discussion guide, an observation tag in fieldwork, and a severity-rated issue in your readout. One chain. Full traceability.

📄 Desk Research → 🗣 Stakeholder Guides → Synthesis → 📋 Brief → 💬 Discussion Guide → 🎯 Sessions → 📊 Readout → GitHub Issues
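
A minimal sketch of how upstream links can give full traceability: each artifact keeps a pointer to its source, so the final issue can walk back to the original barrier. The `Artifact` structure, stage names, and IDs are illustrative, not Qori's data model.

```python
from dataclasses import dataclass

# Hypothetical evidence-chain record: each artifact links upstream so the
# final issue traces back to the desk-research barrier it came from.

@dataclass
class Artifact:
    id: str
    stage: str                       # barrier, probe, observation, issue
    text: str
    source: "Artifact | None" = None

    def chain(self) -> list[str]:
        """Walk upstream links to reconstruct the full evidence chain."""
        node, ids = self, []
        while node:
            ids.append(node.id)
            node = node.source
        return list(reversed(ids))

barrier = Artifact("B-1", "barrier", "Hamburger menu hides critical services")
probe = Artifact("P-3", "probe", "Show me how you'd check your claim status", barrier)
obs = Artifact("O-7", "observation", "P2 searched the menu for 40 seconds", probe)
found = Artifact("I-2", "issue", "Claims status not discoverable", obs)
print(found.chain())  # ['B-1', 'P-3', 'O-7', 'I-2']
```
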
The Coaching Layer

Qori teaches while you work.

Every recommendation cites methodology sources. Every pre-fill explains its reasoning. The coaching layer makes junior researchers stronger and saves senior researchers from explaining the basics.

4 of 6 — Study Design
Methodology
Your barriers are all observable behaviors — things you need to watch people do. That points to usability testing over interviews or surveys. Remote lets you see their real devices and real context.
NNg — When to Use Which Method
Moderated Usability Testing · Unmoderated Testing · User Interviews · Contextual Inquiry
Remote · In-person | 60 min · 45 min · 90 min
Built for Government

Compliance isn't an afterthought.

Qori was built for VA researchers. Every feature accounts for federal requirements — from PII handling to records management.

🔒

PII Auto-Redaction

Names, SSNs, addresses detected and redacted before any AI processing. Your participants' privacy is protected by default.
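
A rule-based pass like the one described can be sketched as follows. The patterns and placeholder tokens are illustrative, and a production redactor would also use named-entity recognition for names and addresses.

```python
import re

# Minimal sketch of rule-based PII redaction, assuming US-style SSNs,
# phone numbers, and emails. Tokens and patterns are illustrative.

PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched PII spans before the text reaches any model."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Participant (SSN 123-45-6789, jdoe@example.com) called 555-010-2030."
print(redact(note))
```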

📋

NARA Records Compliance

Full audit trail. Every generation logged with timestamp, inputs, model version. Meets federal records management requirements.

WCAG AA Accessible

Every interface meets Section 508. All text passes 4.5:1 contrast. Full keyboard navigation. Screen reader compatible.
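
The 4.5:1 figure is the WCAG AA threshold for normal-size text. Below is a minimal sketch of the standard WCAG contrast-ratio computation; the hex-color inputs and function names are ours.

```python
# Sketch of the WCAG 2.x contrast-ratio check behind the 4.5:1 claim,
# using the spec's relative-luminance formula.

def luminance(hex_color: str) -> float:
    def channel(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast("000000", "ffffff"), 1))  # black on white -> 21.0
```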

☁️

GovCloud Ready

Runs on AWS Bedrock in GovCloud and Azure OpenAI Government. Your data stays within authorized boundaries.

Build your evidence chain.

Start with desk research or stakeholder interviews. Qori builds the chain from there — every finding traceable to its source.

Start a study →