PROCESS · METHODOLOGY

Good execution of the wrong thing is still failure.

Solve the Right Problem is a discipline I apply before any design work starts: a set of habits for making sure the team is pointed at the actual problem, not the one written in the brief three months ago. It comes from watching teams ship polished, well-crafted features that nobody used, because nobody stopped to ask whether the brief was right in the first place.

I started doing this after noticing a pattern. The projects that failed weren't the ones with bad design. They were the ones where the team executed well against unclear requirements, undefined success metrics, or assumptions nobody had tested. The design was fine. The problem was wrong.

WHY IT EXISTS

The most expensive mistake in product design is a beautiful answer to the wrong question.

I've worked on teams that had everything: good designers, strong engineers, clear timelines, executive buy-in. And the feature still flopped. The problem statement was wrong from the start. Someone wrote a brief based on assumptions. The team executed against it. Nobody paused to check whether the assumptions held up.

This is a framing failure. The brief said "build X." Everyone built X. But X was what someone upstream thought the user needed, filtered through three layers of interpretation, not what the user actually needed.

PROBLEM 1
Requirements drift from reality
Briefs are written weeks or months before design begins. By the time the team picks them up, the context has shifted. User needs have changed, competitors have moved, or the original assumption was never validated. Teams that don't re-examine the brief end up solving a stale version of the problem.
PROBLEM 2
Success is undefined until it's too late
Ask five people on the team what success looks like for this feature, and you'll get five different answers. When success isn't defined upfront, design reviews turn into opinion battles. Everyone's optimising for a different outcome, and nobody notices until launch.
PROBLEM 3
Disciplines work in sequence, not in parallel
Design hands off to engineering. Engineering discovers constraints that change the scope. Product adjusts the brief. Design reworks. That's a relay race where the baton keeps getting dropped. The root cause is usually the same: nobody sat in the same room and agreed on what they were building, and why, before the work started.

THE PRACTICE

Five questions I ask before I open Figma.

This isn't a workshop or a framework you can download. It's a set of questions I bring into every kickoff, every brief review, every time someone says "we need to build this." The point is to make sure speed is pointed in the right direction.

01
What problem are we actually solving?
What is the user struggling with, in their own words? Not what the brief says. Not what the stakeholder requested. If the team can't answer this without reading the brief, we're not ready to design.
02
How do we know this is the right problem?
What evidence do we have? User research, support tickets, analytics, direct observation? If the answer is "the PM told us" or "it came from leadership," that's an assumption that needs checking, not evidence.
03
What does success look like, and for whom?
Success for the business and success for the user aren't always the same thing. If we haven't defined both, we'll optimise for whichever one shouts loudest in the review. Define the metric. Define the outcome. Write it down before the first design review.
04
What are we NOT building?
Scope creep starts when the boundaries aren't drawn. Naming what's out of scope is just as important as naming what's in. It protects the team from solving adjacent problems that dilute the work.
05
Who needs to be in the room for this to work?
If engineering isn't involved early, you'll design something that can't be built on time. If product isn't aligned, you'll get conflicting feedback in review. And if the user's voice isn't represented through research, data, or direct contact, you're guessing. Getting the right people in the room early is the single cheapest way to avoid rework later.
THE KEY INSIGHT

Pushing back on a brief is doing the job.

Early in my career, I treated briefs as instructions. Someone wrote it; I executed it. If the result didn't land, I assumed I'd designed it wrong. It took a few failures to realise that sometimes the brief is wrong. The most valuable thing a designer can do is say so, with evidence.

A good brief survives scrutiny. A bad one falls apart when you ask "how do we know this is what the user needs?" You want that to happen before the sprint, not after launch.

WHY DESIGNERS SHOULD PUSH BACK

Designers sit at the intersection of user needs, business goals, and technical constraints. That position gives you a perspective nobody else on the team has. When something in the brief doesn't add up (the user need feels assumed, the success metric is missing, the scope has quietly doubled) you're often the first person to notice. Staying quiet about it lets the team walk into a wall.

IN PRACTICE

Three times the brief was wrong, and asking questions changed what got built.

Every project has a moment when the stated problem turns out to be different from the actual problem. The discipline's job is to surface that in a kickoff, not in QA. Here are three times pushing back on the brief changed the outcome.

THE BRIEF
Redesign the Back Office homepage to improve navigation. Sellers were struggling to find the tools they needed. The ask was a cleaner layout, better visual hierarchy, and a more organised menu structure.
WHAT I QUESTIONED
Where were sellers actually coming from when they arrived at the homepage? Analytics showed the majority were landing directly on tool pages via notifications — not navigating from the homepage at all. The homepage wasn't a navigation hub. It was a dead end people passed through on the way to somewhere else.
WHAT ACTUALLY CHANGED
The brief was asking us to optimise a journey that wasn't happening. Instead of reorganising navigation, we redesigned the homepage around the question: what does a seller need to see the moment they log in? The output was a task-oriented dashboard — surfacing the actions that needed attention, not a menu of every tool that existed. A better navigation structure would have solved nothing.

THE BRIEF
Build a deal reporting view for sellers — show them their progress against bonus targets, make the reward visible, and drive more sellers to opt in to the deals programme.
WHAT I QUESTIONED
Why weren't sellers opting in? The brief assumed the problem was visibility of the reward. But talking to sellers surfaced two separate blockers: some couldn't see their current performance clearly enough to act on it, and others couldn't fit deal participation into how they already planned their sourcing cycles. These were different problems. The brief had collapsed them into one.
WHAT ACTUALLY CHANGED
One reporting feature became two distinct UI treatments — a performance tracker showing live progress against the bonus threshold, and a forward-planning view mapping upcoming sourcing decisions to deal targets. Treating them as a single problem would have produced a view that half-solved both. Separating them meant each one could actually do its job.

THE BRIEF
The clinical summary screen in SoteriaMe needed more data. Clinicians were missing context when reviewing patients — the ask was to surface additional fields and expand the amount of information available at a glance.
WHAT I QUESTIONED
Who was reading this screen, and when? It turned out the summary screen was being used primarily during handover — a two-minute window where one clinician hands a patient's care to another. In that context, more data made the screen harder to use, not easier. The problem wasn't missing information. It was that the existing information wasn't scannable under time pressure.
WHAT ACTUALLY CHANGED
The design brief flipped from "add more fields" to "make the critical fields impossible to miss." We reduced visible information, introduced a clear visual hierarchy for urgent flags, and structured the layout around the handover workflow. Clinicians got what they actually needed — not more data, but the right data in the right order at the right moment.

WHY IT WORKS

Teams that skip problem definition don't save time. They borrow it.

The rework, the pivots, the features that get quietly sunsetted three months after launch: that's the cost of not asking whether the brief was right.

🎯
Design effort lands on the right problem
When the team has agreed on what they're solving and why, every design decision has a reference point. Reviews get faster because there's less time debating opinions and more time improving the work.
🔄
Rework drops because alignment is real
Most rework happens because someone's version of the goal didn't match someone else's. Defining success upfront means fewer late-stage surprises and fewer "that's not what I meant" moments in review.
🤝
Engineers and PMs trust the design direction
When you can show that the design is solving a validated problem with a defined success metric, engineering builds with confidence and product defends the work in stakeholder reviews.

The best version of this discipline is the habit of asking "are we solving the right problem?" often enough that the team stops being surprised by the question, and starts expecting it.

SEE IT APPLIED

Narrative-First Design

The companion methodology: how I align cross-functional teams on the user story before any design work begins.

Read the methodology →