Product teams often carry quiet assumptions:
“Users will think like us.”
“This is basic, everyone knows this.”
“We can explain it if needed.”
These assumptions feel reasonable because they are grounded in our own context.
But user research repeatedly shows how fragile they are.
Dan Russell, a long-time search UX researcher at Google, once shared an example that illustrates this gap clearly.
While interviewing a bus driver preparing for a certification test, he observed her scrolling through a 100-page web document line by line, looking for a specific rule.
When he asked why she did not use the browser’s search shortcut, her answer was simple.
She did not know it existed.
To product teams, this sounds surprising. In real usage, it is common.
What feels “basic” is often not universal knowledge, but familiarity shaped by tech-savvy product teams’ tools, habits, and mental models.
That familiarity is context.
Table of Contents
- 1. What “Context of Use” Really Means in Product Management
- 2. Contextual Inquiry: How to Understand Real User Behavior
- 3. Contextual Design: Turning Research Into Shared Understanding
- 4. From Context to Decisions
- 5. Common Mistakes When Context Is Ignored
- 6. In-Depth Interview: When Contextual Inquiry Isn’t Fully Possible
1. What “Context of Use” Really Means in Product Management
A product has no meaning on its own.
This sounds abstract, but it is one of the most practical ideas a product manager can internalize.
- A feature is not useful by default.
- A UI is not usable by default.
- A workflow is not intuitive by default.
They only become meaningful inside a specific context of use.
1) Products Only Make Sense Inside Real Situations
Inside product teams, we often evaluate things in isolation.
- Is this feature clear?
- Is this flow simple?
- Is this interaction efficient?
But users never encounter products in isolation.
They encounter them:
- while trying to complete a task, (= task and goal)
- with limited time and attention, (= cognitive load)
- using imperfect tools, (= equipment limitations)
- inside noisy physical and social environments. (= environmental constraints)
A product that feels obvious in a design review can feel confusing, irrelevant, or even stressful in real life.
2) Why Asking Users What They Want Often Fails
“Let’s just ask users what they want.”
This sentence might sound user-centric. In practice, it can derail good discovery if it leads to solution shopping.
Surveys, interviews, and focus groups are filled with well-intentioned questions that produce confident answers and misleading conclusions.
The problem is not that users lie.
The problem is that people’s stated preferences often diverge from what they do in real situations.
(1) People Don’t Know What They Want
When users answer “what do you want?” questions, they usually do one of three things:
- Describe an ideal version of the world
- Describe a feature they saw somewhere else
- Rationalize their current behavior after the fact
None of these reflect how they actually behave in real situations.
This is not a user flaw. It is how human cognition works.
Most behavior is habitual, contextual, and reactive.
(2) Surveys and Focus Groups: The Limits of Opinion-Based User Research
Surveys and focus groups have built-in constraints.
- They remove users from their real environment
- They force reflection instead of action
- They reward confident articulation, not accuracy
As a result, they capture opinions, not reality.
This is why teams sometimes ship features that sound good in opinion-based research but underperform after launch.
(3) Goals vs Solutions: Distinguishing Goals from Proposed Solutions
One of the most common discovery mistakes is confusing goals with solutions.
When users say:
- “I want a faster dashboard”
- “I need more filters”
- “I wish this was automated”
They are offering solutions. Solutions are shaped by what users already know.
They are limited by existing tools and mental models.
The real value lies underneath.
- Why do they need this?
- What are they trying to achieve?
- What breaks in their current workflow?
(4) What Behavior in Context Reveals Instead
Instead of asking what users want, PMs should focus on understanding behavior.
There are four questions that matter far more:
- What are users trying to accomplish?
- How are they doing it today?
- Where do they struggle or slow down?
- What workarounds have they created?
These questions reveal constraints, not preferences.
Constraints are where product opportunities live.
3) The Four Elements of Context of Use
The context of use describes the full situation in which a product is used.
It is not just about the user. It is about the system around the user.
A useful way to break it down is into four elements.
| Element | What It Covers | Why It Matters |
|---|---|---|
| Users | Who is using the product, including experience level, mental models, prior knowledge, and expectations. | Users with similar demographics can behave very differently depending on what they know and assume. |
| Tasks | What the user is actually trying to accomplish, beyond what the feature or UI suggests. | The product is often only a small step within a larger goal, which shapes how it is used. |
| Equipment | Devices, software, input methods, and workarounds involved in completing the task. | Tool constraints strongly influence behavior and often explain unexpected user actions. |
| Environment | Physical and social conditions such as noise, interruptions, pressure, or hierarchy. | Even simple workflows can fail when the surrounding environment is not considered. |
(1) Users
Who is using the product?
- Their experience level
- Their mental models
- What they already know
- What they assume the product will do
Two users can look identical in demographics and behave completely differently because their mental models differ.
(2) Tasks
What is the user actually trying to do?
Not what we think they are doing. Not what the feature name suggests.
But the real goal behind their behavior.
Often, the product is only a small step inside a much larger task.
(3) Equipment
What tools are involved?
- Device type
- Software environment
- Input methods
- Workarounds and shortcuts
Constraints here shape behavior more than we expect.
(4) Environment
Where is the product used?
- Physical environment (noise, movement, interruptions)
- Social environment (pressure, hierarchy, collaboration)
A “simple” workflow can break down completely in the wrong environment.
This leads to a core principle:
Usability depends on the context of use.
Without context, usability is a theoretical concept.
4) When Context Is Missing vs. When Context Is Clear
| Decision Area | When Context Is Missing | When Context Is Clear |
|---|---|---|
| Product Discussions | Feature-based arguments such as “Users need this” or “Competitors have it” | Discussions grounded in observed workflows and real situations |
| Prioritization | Everything feels important and trade-offs feel arbitrary | Context narrows options and clarifies why something matters now |
| Decision Dynamics | Seniority and intuition dominate decisions | Observed reality becomes the shared reference point |
| Execution Quality | Overbuilding and slow execution due to unclear constraints | Focused execution shaped by real constraints |
| PRDs and Requirements | Feature lists without situational grounding | Requirements framed as constraints and desired outcomes |
| Team Trust | Repeated justification and debate over decisions | Shared understanding and trust in judgment |
| Language Used | “I think” or “This feels confusing” | “We observed” or “This breaks the workflow” |
When context is missing, teams rely on opinions, instincts, and past experience to fill the gap. Decisions still happen, but they often feel heavy, fragile, and difficult to defend.
When context is clear and shared, the nature of decision-making changes.
Teams stop debating what they believe and start reasoning from what they have observed.
Context does not remove disagreement. It changes what the disagreement is about.
Instead of arguing over preferences, teams discuss interpretations of the same reality.
That shift alone is often enough to unblock decisions, speed up execution, and build durable trust across the team.
Context as a Shared Language
One underrated benefit of context is that it becomes a common reference point.
Instead of saying:
- “I think this is confusing”
Teams say:
- “This breaks the workflow we observed”
- “This adds friction at a critical moment”
This changes the tone of collaboration.
2. Contextual Inquiry: How to Understand Real User Behavior
Most product research is built on conversations.
Contextual inquiry is built on reality. Instead of asking users to remember, explain, or imagine, it places the PM directly inside the user’s real working context.
1) What Is Contextual Inquiry?
Contextual inquiry is a research approach where you:
- Observe users in their real environment
- Watch them perform real tasks
- Ask questions in the moment, not after the fact
The goal is not to validate ideas. The goal is to understand how work actually happens.
Not how users describe it. Not how teams assume it works.
Traditional interviews or surveys rely on memory and articulation. Contextual inquiry relies on behavior, environment, interruptions, workarounds.
This is why it consistently reveals things no survey ever will.
People rarely mention:
- steps they take for granted,
- tools they hate but depend on,
- shortcuts they built over time.
You only see those when you watch.
2) The Four Principles of Contextual Inquiry
Contextual inquiry is not “just shadowing.” It’s commonly taught with four core principles.
| Principle | What It Focuses On | Why It Matters |
|---|---|---|
| Context | Understanding how work happens in the user’s real environment, including surroundings, sequences, and situational constraints. | Without the real environment, critical constraints and workarounds disappear, leading to incomplete understanding. |
| Partnership | Treating the user as the expert and the researcher as the learner through observation and in-the-moment questioning. | Real context emerges during real work, which increases the reliability of qualitative data. |
| Interpretation | Assigning meaning to observed behavior by forming hypotheses and validating them with users. | Prevents assumptions and overconfident conclusions that are detached from actual user intent. |
| Focus | Defining a clear task, workflow, or problem space before observation begins. | Without focus, observations turn into noise and insights lose decision-making value. |
These four principles define how contextual inquiry is conducted, not what it produces.
Together, they ensure that observations remain grounded in real work, interpreted carefully, and collected with a clear purpose.
3) Why These Principles Matter
| What Contextual Inquiry Reveals | Description |
|---|---|
| Workflows | The real sequence of actions users take, including detours and repetitions. |
| Tools & Artifacts | Devices, software, notes, and workarounds users rely on to get work done. |
| Environment | Physical and social conditions that shape behavior. |
| Patterns | Recurring behaviors and breakdowns identified across multiple users through synthesis methods such as affinity mapping. |
By following these principles across multiple sessions, contextual inquiry moves beyond individual anecdotes and reveals structural patterns in how work is actually done.
4) Observation vs Interpretation in User Research
| Dimension | Observation | Interpretation |
|---|---|---|
| Definition | What you directly see or hear during research | The meaning or explanation you assign to what you observed |
| Nature | Factual, concrete, verifiable | Hypothetical, assumptive, needs validation |
| Example | “The user copies data into a spreadsheet before submitting.” | “The user does not trust the system.” |
| Risk | Low | High, if treated as fact |
The same observation can support multiple interpretations. Without validation, interpretations easily turn into assumptions.
Example:
Observation
- The user copies data into a spreadsheet before submitting.
Possible interpretations
- Compliance requires an external record
- Their manager asks for a backup
- The system times out
- A past error made them cautious
Until these interpretations are confirmed, they remain guesses.
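This discipline can be sketched as a tiny data model: each observation carries several candidate interpretations, and every interpretation stays marked as unvalidated until a user confirms it. The class and field names below are hypothetical illustrations, not part of any established research tool.

```python
# Sketch: one observation with multiple candidate interpretations,
# each unvalidated until confirmed with the user. The observation and
# hypotheses mirror the example in the text; all names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Interpretation:
    hypothesis: str
    validated: bool = False  # stays False until the user confirms it

@dataclass
class Observation:
    behavior: str
    interpretations: list = field(default_factory=list)

    def confirmed(self):
        # Only validated interpretations should feed product decisions.
        return [i for i in self.interpretations if i.validated]

obs = Observation(
    behavior="The user copies data into a spreadsheet before submitting.",
    interpretations=[
        Interpretation("Compliance requires an external record"),
        Interpretation("Their manager asks for a backup"),
        Interpretation("The system times out"),
        Interpretation("A past error made them cautious"),
    ],
)

# Until something is confirmed, everything remains a guess.
print(len(obs.confirmed()))  # 0 confirmed so far

# A follow-up question with the user confirms one hypothesis:
obs.interpretations[2].validated = True
print(len(obs.confirmed()))  # now 1
```

The point is structural: the record forces the team to keep guesses visibly separate from confirmed findings instead of letting an interpretation quietly harden into a fact.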
5) Why Real User Behavior Feels Messy (and Why That’s Good)
Researchers often describe contextual research as uncomfortable.
- The workflows are inconsistent
- The behavior is inefficient
- The reasons are emotional or political
This is the “complex and dirty truth” of real work.
But this mess is not a problem. It is the raw material of good product decisions. Clean narratives usually come later.
Reality comes first.
3. Contextual Design: Turning Research Into Shared Understanding
Contextual inquiry produces raw, messy data. On its own, that data is not very useful.
What makes it valuable is how it gets translated into shared artifacts that teams can reason about together. This is where contextual design comes in.
Without contextual design, user understanding stays implicit.
- In someone’s notes
- In a researcher’s head
- In scattered anecdotes
1) From Contextual Inquiry to Contextual Design
Contextual design is not a research method. It is a way to structure and externalize what was learned from contextual inquiry.
Its role is simple:
- Turn observations into visible models
- Turn individual insights into shared understanding
- Turn messy reality into something teams can work with
The output is not “a design.” The output is clarity.
Contextual design makes that understanding explicit through artifacts such as workflow models, affinity diagrams, and storyboards.
Teams can see:
- how work actually flows,
- where breakdowns occur,
- which constraints repeat across users.
This is the point where context becomes discussable.
2) Design Artifacts That Make Context Visible
Once context is externalized, conversations change.
Teams stop debating preferences. They start debating interpretations of the same reality.
Instead of:
- “This feels simpler”
- “I don’t think users would do that”
Discussions sound like:
- “This step causes repeated handoffs”
- “This workaround appears in multiple sessions”
The argument is no longer about taste. It is about meaning.
3) Contextual Design as a Team Decision-Making Tool
Seen this way, contextual design is not owned by UX.
It sits between:
- research,
- product,
- design,
- engineering.
It allows teams to reason from the same evidence,
even when they disagree on solutions.
4. From Context to Decisions
Context is only valuable if it changes decisions.
Watching users work is not the goal. Shipping better products is.
The bridge between the two is a disciplined interpretation process.
| Transition | What Happens | Key Question |
|---|---|---|
| Observation → Insight | Concrete behaviors are captured without assigning meaning. Initial hypotheses begin to form. | Why does this behavior exist? |
| Insight → Pattern | Repeated insights across sessions reveal structural similarities. | Is this recurring or situational? |
| Pattern → Problem Definition | Patterns are framed as context-specific problems without suggesting solutions. | What is breaking, and in what context? |
| Problem Definition → Solution Framing | Possible solutions are evaluated against observed reality and constraints. | Does this solve the observed breakdown? |
1) Observation → Insight
Everything starts with observation.
An observation is something you can point to.
- “The user switches tools three times during one task.”
- “They pause before clicking submit.”
- “They write notes outside the system.”
No meaning yet.
Just reality.
Insights emerge when you ask why this behavior exists.
- Why the tool switching?
- Why the hesitation?
- Why the external notes?
This is where hypotheses form, but they are still fragile.
2) Insight → Pattern
One insight is interesting. Repeated insights form patterns.
Patterns answer questions like:
- Is this common or rare?
- Is this situational or structural?
- Does it appear across different users?
This is why contextual inquiry usually involves multiple sessions.
Patterns turn anecdotes into evidence.
3) Pattern → Problem Definition
Patterns should not jump directly to solutions. They should be framed as problems.
Good problem definitions:
- describe a breakdown,
- specify a context,
- avoid proposing a fix.
Example:
“Users lose confidence before submitting because they cannot verify data without leaving the workflow.”
This is actionable without prescribing a feature.
4) Problem Definition → Solution Framing
Only after the problem is clear should solutions enter the room.
At this stage, solutions are evaluated against context:
- Does this reduce the observed breakdown?
- Does it fit the real workflow?
- Does it introduce new friction?
Context becomes the filter, not taste.
Making Patterns Visible: Affinity Mapping
One practical technique PMs can use is affinity mapping, also known as the KJ technique.
- Write observations individually
- Group them by similarity
- Label emerging themes
This externalizes thinking, prevents premature conclusions, and invites team participation.
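The three steps above can be sketched in miniature. In a real session the grouping is done by people moving notes around a wall; the shared-keyword lookup below is only a stand-in for that judgment, and every observation string and theme label is a hypothetical example, not data from a real study.

```python
# Minimal affinity-mapping sketch: observations are written individually,
# grouped by similarity (here, a naive shared-keyword heuristic standing in
# for the team's judgment), and each group is labeled with an emerging theme.
# All observations, keywords, and theme names are hypothetical examples.

from collections import defaultdict

observations = [
    "Copies data into a spreadsheet before submitting",
    "Keeps a personal spreadsheet of past submissions",
    "Pauses before clicking submit",
    "Re-reads the form before clicking submit",
    "Switches tools three times during one task",
]

theme_keywords = {
    "External records": ["spreadsheet"],
    "Submission hesitation": ["submit"],
    "Tool fragmentation": ["switches tools"],
}

def group_by_theme(notes, themes):
    """Assign each note to the first theme whose keyword it contains."""
    groups = defaultdict(list)
    for note in notes:
        for theme, keywords in themes.items():
            if any(k in note.lower() for k in keywords):
                groups[theme].append(note)
                break  # each note lands in exactly one group
    return dict(groups)

for theme, notes in group_by_theme(observations, theme_keywords).items():
    print(f"{theme}: {len(notes)} observation(s)")
```

Even this toy version shows the value of the method: once notes are clustered and labeled, the team debates the themes rather than individual anecdotes.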
5. Common Mistakes When Context Is Ignored
Most people agree that context matters. Many still ignore it in practice.
Not because they are careless, but because delivery pressure quietly reshapes behavior.
Mistake 1: Meeting Room Research
This happens when research is detached from reality.
- Interviews over Zoom only
- Hypothetical questions
- Slide-based personas replacing real users
The environment disappears.
So do interruptions, workarounds, and constraints.
The result looks clean and professional, but it is incomplete.
Context does not survive conference rooms.
Mistake 2: Starting With Summaries Instead of Observation
PMs often love synthesis too much. Jumping straight to:
- insights,
- themes,
- recommendations
skips the most important step: shared observation.
When teams never see raw behavior, they trust conclusions less and debate more.
Summaries should come last, not first.
Mistake 3: Jumping to Solutions Too Early
This is the most common failure mode.
The moment a problem sounds familiar, PMs rush to fixes.
- “We just need a shortcut.”
- “Let’s automate this.”
- “We can add a toggle.”
But premature solutions flatten context. They lock thinking before the problem is fully understood.
Mistake 4: The Myth of the “Representative User”
Personas can help, but creating a single “average user” often erases reality.
Real users:
- contradict each other,
- behave inconsistently,
- adapt to constraints differently.
Contextual research reveals variation.
Over-simplification hides it.
Over-simplified personas offer faster synthesis, clearer narratives, and easier alignment, but speed without context creates false confidence.
6. In-Depth Interview: When Contextual Inquiry Isn’t Fully Possible
In practice, fully immersive contextual inquiry is not always feasible.
- Access can be limited.
- Environments can be sensitive.
- Time and resources are often constrained.
In many product teams, especially in B2B or internal tools, simply “watching users work” is not an option.
This does not mean context must be abandoned.
It means it must be approximated carefully.
How In-Depth Interviews Can Still Capture Context
In-depth interviews can be useful when they are designed to reconstruct real situations, not to collect opinions.
The key difference is how the interview is conducted.
Helpful interviews focus on:
- specific recent events,
- concrete actions,
- tools that were actually used,
- moments where things broke or slowed down.
Questions shift from:
- “What do you want?”
- “What do you think about this feature?”
to:
- “Can you walk me through the last time this happened?”
- “What was open on your screen at that moment?”
- “What did you do right after this step failed?”
The closer the conversation stays to real behavior, the closer it gets to context.
Wrapping Up: Why Context Explains Product Success and Failure
Products do not exist on their own.
They are used:
- by specific users,
- to complete specific tasks,
- with specific tools,
- inside specific environments.
When these conditions are ignored or oversimplified, usability becomes theoretical.
What looks clear in a roadmap or design review often breaks down in real situations. This is why products that are “correct” can still feel irrelevant.
Want to Learn How to Uncover Customer Context Through Interviews?
Understanding context is only half the work.
The next step is learning how to surface that context through the right interview questions.
If you want to learn how to run customer interviews that reveal real behavior, constraints, and decision-making context, read the next article here:
👉 https://productwithmustache.com/customer-interview/

