Explanation
What it is
User-Centred Design vs Business KPIs describes the enduring tension between designing for human needs and optimising for organisational performance.
While user-centred design prioritises empathy, accessibility, and value-in-use, business KPIs focus on measurable performance: conversion, retention, or revenue.
The clash arises when what benefits the user in the long term doesn’t map neatly to what the business can quantify in the short term.
When to use it
- When product or UX teams struggle to justify user research or design quality against revenue targets.
- When leadership questions the ROI of human-centred initiatives.
- When teams need to reframe KPIs to reflect user-defined success rather than only business-defined performance.
Why it matters
Balancing empathy with evidence is central to sustainable product design.
Misaligned KPIs can reduce design to decoration, rewarding shallow metrics at the expense of trust and loyalty.
When the two logics align, metrics reinforce meaning — guiding teams to create value that is both measured and felt.
Reference
Definitions
User-Centred Design (UCD)
A design philosophy and process that prioritises the needs, goals, and context of end users throughout the product lifecycle. It emphasises empathy, iteration, and usability over purely aesthetic or technical concerns.
Key Performance Indicators (KPI)
A quantifiable measure that tracks performance against strategic objectives. In business, KPIs translate goals into metrics, but can become distortive when they reduce complex outcomes into narrow indicators.
Outputs vs Outcomes
Outputs measure what was produced (e.g., number of features shipped); outcomes measure what changed as a result (e.g., user adoption, satisfaction). Effective design governance balances both.
UX Maturity
The extent to which an organisation embeds user experience practices into strategy, process, and culture — ranging from ad-hoc usability testing to systemic human-centred decision-making.
Objectives & Key Results (OKRs)
A goal-setting framework that pairs qualitative objectives (“what we want to achieve”) with measurable key results (“how we know we’re succeeding”). When misapplied, OKRs can mirror KPI bias, rewarding delivery over impact.
North Star Metric
A single guiding measure that reflects the core value delivered to users. It unifies teams across functions, but risks oversimplification if not revisited as the product evolves.
Proxy Metric
A secondary indicator used to approximate an outcome that’s difficult to measure directly. Proxy metrics are convenient but prone to Goodhart’s Law — once targeted, they lose reliability.
Design Debt
The accumulation of UX compromises made for speed or convenience, often visible when KPI pressure deprioritises usability or accessibility improvements.
Canonical Sources
- ISO 9241-210:2019. Ergonomics of human-system interaction, Part 210: Human-centred design for interactive systems.
- Norman, D. (2013). The Design of Everyday Things. Basic Books.
- Nielsen Norman Group (2021). The 6 Levels of UX Maturity.
- Doerr, J. (2018). Measure What Matters: OKRs, the Simple Idea that Drives 10x Growth. Penguin.
- Goodhart, C. (1975). Problems of Monetary Management: The U.K. Experience. Papers in Monetary Economics, Reserve Bank of Australia.
Notes & Caveats
- The conflict between UCD and KPIs is not binary: tension can be productive when metrics evolve to measure user-centred outcomes.
- Organisations with low UX maturity often treat UCD as a cost centre rather than a value driver, limiting its influence.
- Metrics are not inherently bad — misalignment, not measurement, is the root of dysfunction.
- Maturity grows when teams co-own metrics across disciplines: design informs measurement; measurement validates design.
- OKRs and KPIs serve different purposes (a sketch contrasting the two follows this list):
  - KPIs monitor performance, while OKRs motivate progress.
  - KPIs stabilise operations through consistency; OKRs stretch ambition through experimentation.
  - Confusing the two erodes both accountability and innovation.
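To make the distinction concrete, here is a minimal sketch of how the two constructs might be modelled. The class names, fields, and scoring convention are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """A steady-state monitor: healthy while the metric stays above a stable threshold."""
    name: str
    current: float
    target: float

    def is_healthy(self) -> bool:
        # KPIs check ongoing performance against a fixed bar.
        return self.current >= self.target

@dataclass
class KeyResult:
    """A stretch outcome: scored by progress from a baseline toward an ambitious target."""
    name: str
    baseline: float
    target: float
    current: float

    def score(self) -> float:
        # Common OKR convention: 0.0 = no progress, 1.0 = target fully met.
        progress = (self.current - self.baseline) / (self.target - self.baseline)
        return max(0.0, min(1.0, progress))

@dataclass
class OKR:
    objective: str
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        # An OKR's score is the mean of its key results.
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

uptime = KPI("Checkout uptime (%)", current=99.7, target=99.5)
trust = OKR("Users feel safe completing onboarding",
            [KeyResult("Verification completion (%)", baseline=60, target=85, current=72)])
print(uptime.is_healthy())      # True: operations are stable (monitoring)
print(round(trust.score(), 2))  # 0.48: partial progress toward a stretch goal (motivating)
```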
How-To
Objective
To reconcile user-centred design principles with performance measurement frameworks so that KPIs become evidence of value, not obstacles to it.
Steps
- Audit existing KPIs
  Identify which metrics currently drive behaviour. Note which are output-based (e.g., feature count, conversion) versus outcome-based (e.g., satisfaction, retention). A minimal audit sketch follows this list.
- Map user outcomes
  Translate key user needs into measurable signals (e.g., task success, perceived control, emotional resonance). Align these to business goals.
- Reframe or retire misaligned metrics
  For each KPI that rewards speed or volume, propose an accompanying quality or empathy measure that captures user experience impact.
- Embed shared ownership
  Ensure design, product, and business leads co-author measurement frameworks and review results together. This prevents the “design vs. metrics” dichotomy.
- Institutionalise learning loops
  Use retrospectives and experimentation to refine both metrics and methods. Each iteration should ask: What did this KPI teach us about the user?
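As a sketch of the audit step, the snippet below tags each metric in a register as output- or outcome-based and reports the balance. The metric names and the manual tags are assumptions for illustration; in practice the tagging is a team judgement, not something a script can infer.

```python
# Minimal KPI audit: tag each metric as output- or outcome-based,
# then surface the balance. Names and tags are illustrative.
from collections import Counter

KPI_REGISTER = {
    "features_shipped_per_quarter": "output",
    "signup_conversion_rate": "output",
    "page_views": "output",
    "7_day_retention": "outcome",
    "task_success_rate": "outcome",
    "csat_score": "outcome",
}

def audit(register: dict[str, str]) -> None:
    counts = Counter(register.values())
    total = sum(counts.values())
    for kind in ("output", "outcome"):
        print(f"{kind:>8}: {counts[kind]} metrics ({counts[kind] / total:.0%})")
    if counts["outcome"] == 0:
        print("Warning: no outcome measures; these KPIs reward activity, not impact.")

audit(KPI_REGISTER)
#   output: 3 metrics (50%)
#  outcome: 3 metrics (50%)
```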
Tips
- Combine quantitative analytics with qualitative feedback — numbers describe what happened, but stories explain why.
- Anchor at least one KPI per initiative directly to user-defined success (e.g., “I achieved my goal quickly and easily”).
- Maintain a single North Star Metric that reflects user value creation; use supporting KPIs to explain how it’s achieved.
- Build a metrics dashboard that visualises outcomes alongside outputs to reinforce balanced decision-making; a sketch of one possible dashboard data model follows.
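One way to structure that balance is sketched below: a single North Star with supporting indicators tagged by kind, so neither view can crowd out the other. All metric names and figures are hypothetical.

```python
# Balanced dashboard model: one North Star plus supporting KPIs tagged
# as outputs or outcomes. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    kind: str       # "output" or "outcome"
    value: float
    unit: str

north_star = Metric("Weekly users completing their primary task", "outcome", 12400, "users")
supporting = [
    Metric("Signups per day", "output", 310, "signups"),
    Metric("Median time on task", "output", 94, "seconds"),
    Metric("Task success rate", "outcome", 0.83, "ratio"),
    Metric("CSAT", "outcome", 4.2, "score /5"),
]

print(f"North Star: {north_star.name} = {north_star.value:g} {north_star.unit}\n")
for kind in ("outcome", "output"):
    print(kind.upper())
    for m in supporting:
        if m.kind == kind:
            print(f"  {m.name}: {m.value:g} {m.unit}")
```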
Pitfalls
Treating design quality as “unmeasurable”
Use proxy signals like error rate, completion time, or NPS to quantify experience (a worked NPS example follows this list).
Isolating design metrics from business review
Present UX and KPI data side-by-side in the same governance rituals.
Allowing KPIs to ossify
Review metrics quarterly; retire those that no longer reflect user reality.
Overloading with too many indicators
Prioritise a handful of metrics that truly capture value creation and trust.
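For instance, NPS compresses a 0-10 “how likely are you to recommend us?” survey into a single score: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). The sketch below applies that standard formula to a hypothetical batch of responses.

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
# on a 0-10 likelihood-to-recommend scale. Responses are hypothetical.
def nps(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

scores = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(scores))  # 30.0: 5 promoters, 3 passives, 2 detractors -> (5 - 2) / 10 * 100
```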
Acceptance criteria
- KPI framework includes at least one user outcome measure.
- Design and product teams share ownership of metric review.
- Performance dashboards integrate qualitative and quantitative insight.
- Retrospectives document how metric shifts inform subsequent design decisions.
Tutorial
Scenario
A UX lead at a fintech startup is asked to justify a proposed redesign of the onboarding flow.
The redesign promises higher user trust and reduced abandonment, but leadership questions its business impact.
The company’s KPIs track conversion speed and sign-ups per day — metrics that reward frictionless acquisition, not long-term engagement.
The UX lead must bridge empathy and evidence.
Steps
- Audit existing KPIs
  The UX lead maps current metrics: time to first transaction, account creation rate, churn after 7 days. She realises none measure trust, the central driver of long-term retention in finance.
- Map user outcomes
  Through interviews, she identifies the key user question: “Do I feel safe giving you my data?” This becomes the design’s outcome anchor. She links it to measurable indicators such as verification completion, first-deposit rate, and voluntary data sharing.
- Reframe or retire misaligned metrics
  In collaboration with Product and Growth, she reframes conversion speed as conversion confidence, measuring success by informed completion rather than haste. A short post-onboarding survey captures user sentiment.
- Embed shared ownership
  The UX lead schedules a recurring “Metrics Roundtable” with PMs and analysts. Together, they review KPI dashboards alongside qualitative findings. The shared lens reframes design as an investment in trust, not an indulgence in aesthetics.
- Institutionalise learning loops
  After two months, the new flow shows slightly slower onboarding but a 30% increase in verified deposits. Leadership adopts “trust yield” as a new KPI, blending user-centred insight with measurable business performance; one possible formulation is sketched below.
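“Trust yield” is not a standard industry metric; the formulation below is an assumption for illustration, treating it as the share of users who start onboarding and go on to both verify and make a first deposit. The figures are hypothetical but consistent with the 30% lift described above.

```python
# Hypothetical "trust yield": of all users who start onboarding, what share
# completes verification AND makes a first deposit? Figures are illustrative.
def trust_yield(started: int, verified_and_deposited: int) -> float:
    return verified_and_deposited / started

before = trust_yield(started=1000, verified_and_deposited=180)  # old flow
after = trust_yield(started=1000, verified_and_deposited=234)   # redesigned flow
print(f"before: {before:.1%}  after: {after:.1%}")  # 18.0% -> 23.4% (a 30% lift)
```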
Result
- Before
  KPIs rewarded speed over security; user confidence was unmeasured.
- After
  KPIs reflect user trust as a strategic asset, linking usability to financial retention.
- Artefact Snapshot
  “Onboarding KPI Dashboard 2.0” lives in the company’s product analytics suite, with columns for both user outcomes and business performance indicators.
- Delta
  Decision-making time improved, debates reduced, and stakeholder alignment strengthened; measurable trust became the shared north star.
Variations
- If operating at enterprise scale, introduce dual-level metrics: team-level UX outcomes (e.g., task success rate) feeding into organisational KPIs (e.g., NPS, retention); a roll-up sketch follows this list.
- If the team lacks analytics capability, start with qualitative proxies — support tickets, verbatims, and usability logs — until quantification is feasible.
- If leadership resists non-financial metrics, pair each user measure with a commercial correlate (e.g., trust → deposit volume, satisfaction → repeat purchase).
- If using OKRs instead of KPIs, ensure Key Results express observable behaviour change, not mere feature delivery.
- If multiple products share KPIs, run a cross-journey audit to uncover where metrics conflict or cannibalise each other.
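As a sketch of the dual-level idea above, team-level task success rates can be rolled up into a single organisational figure, weighted by how many users each journey serves. The teams, figures, and weighting choice are all hypothetical.

```python
# Dual-level roll-up: team-level UX outcomes aggregate into one
# organisation-level figure. Teams and numbers are hypothetical.
team_outcomes = {
    "payments":   {"task_success": 0.91, "users": 5200},
    "onboarding": {"task_success": 0.78, "users": 3100},
    "support":    {"task_success": 0.84, "users": 1700},
}

# User-weighted mean, so high-traffic journeys count proportionally more.
total_users = sum(t["users"] for t in team_outcomes.values())
org_task_success = sum(
    t["task_success"] * t["users"] for t in team_outcomes.values()
) / total_users
print(f"Org-level task success: {org_task_success:.1%}")  # 85.8%
```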