
Building AI into the tools professional sellers rely on

A programme to bring AI into Back Market's seller platform, starting with FAQ automation and building up to ML-driven pricing tools. The one time we skipped the trust sequence, it showed.

Role: Senior Product Designer
Timeline: 2024 – 2025
Platform: Seller Back Office (Web)
Scope: 5 AI features shipped or in build
At a Glance
The Problem

1,700+ professional sellers pricing blind. No visibility into competitor activity, demand signals, or sales velocity. 65% reported struggling to find the right price.

My Role

Led the design of five AI features from low-risk FAQ automation through to ML-driven pricing tools.

The Approach

Deliberate escalation. Each feature earned the trust required for the next. The one feature that skipped the sequence caused the programme's most significant trust failure.

Programme timeline
Q1 2024
AI Support Assistant
Level 1
Lowest-risk entry point. FAQ lookup via an in-product chat panel. Proved AI could be useful in the Back Office without touching seller money.
Q2 2024
Seller pricing research
Structured interviews across all GMV segments. 65% of sellers reported struggling to price competitively. Automation Spectrum framework drafted.
Q3 2024
SMP V1
Level 2
First pricing AI. Single-product recommendation with model confidence scores. Learned quickly that sellers didn't understand "73% confident."
Q4 2024
SMP V2–V3 + BackPricer
Levels 2–3
Outcome-framed recommendations replaced confidence scores. BackPricer launched alongside, giving sellers floor and ceiling bounds for automated repricing. Trust built from SMP made this step possible.
Q2 2025
Pricing Opportunities
Levels 2–3
Catalogue-level intelligence surfacing underpriced listings and missed opportunities. Built on the trust BackPricer had earned.
Q4 2025
Dynamic Pricing Up
Level 4
Full automation without consent. Launched before the programme had earned the trust for this level. The fallout became the programme's clearest lesson.
The Problem

1,700+ sellers pricing blind on a platform that had the data to help them

Back Market is Europe's largest marketplace for refurbished electronics, powered by 1,700+ professional resellers across Europe and North America. These aren't casual listers — they're businesses managing thousands of listings, repricing daily, and processing high volumes of orders. Their primary tool is the Seller Back Office: a web platform for pricing, inventory, orders, logistics, and performance analytics. It's open every working day.

How the marketplace works

1,700+ professional sellers — refurbishers managing thousands of listings, repricing daily — set prices in the Back Office. The Back Market platform sits between them and buyers: the BackBox algorithm, AI pricing (SMP, DPU, Deals), and a take rate plus adjustment fees determine the customer-pays price shown on product pages. Around this flow sit a support layer (SSMs working alongside sellers) and an intelligence layer (Data Science / ML feeding the AI systems).
The core tension: Back Market's business depends on sellers pricing competitively. The platform holds the data sellers need — sales velocity, competitor pricing, demand signals — but until this programme, almost none of it was reaching them in a form they could act on.

Why the problem required AI

Sellers price using a mental formula: sourcing cost + operating costs + return risk + target margin. The problem is that several variables determining whether that price is competitive were either unavailable or required guesswork:

The seller's mental pricing model:

Sourcing cost (known) + cost of business (commission + CCBM + fees) + return risk (estimated) + target margin (known) = selling price. Whether that price is competitive: unknown.

Four variables sellers couldn't answer: price elasticity (no data), BackBox position (not surfaced), sales forecast (guesswork), net margin (manual calculation).

Price elasticity of demand

If I lower my price by €10, how much more market share can I gain? Sellers had no way to answer this — they experimented manually and observed what happened.

Competitor landscape

How many other sellers am I competing against on this specific product? Are they better priced than me? This data existed inside Back Market — it just wasn't surfaced to sellers.

Speed-of-sales forecasts

Will this stock sell in a week or sit for two months? Without a forecast, sellers couldn't connect pricing decisions to cash flow — especially for stock they'd already bought.

Net earnings calculations

Sellers think in absolute margin per unit (€ in, € out) — not in commission percentages. Converting a Back Market commission rate into actual earnings required manual calculation for every product.

Back Market already had the data to answer these questions. The problem wasn't a data gap. It was a product gap: the intelligence existed but none of it was reaching sellers in a form they could act on.
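The manual conversion described above — turning a percentage commission into the absolute "€ in, € out" number sellers reason in — can be sketched in a few lines. Everything here is illustrative: the rate, fees, and function name are hypothetical, not Back Market's actual fee schedule.

```python
def net_earnings(selling_price: float, commission_rate: float, fixed_fees: float) -> float:
    """Convert a percentage commission into absolute margin per unit.

    All inputs are hypothetical examples; a real calculation would also
    need category-specific rates and any per-order adjustments.
    """
    commission = selling_price * commission_rate
    return selling_price - commission - fixed_fees

# A seller listing at €280 with a 10% commission and €5 in fixed fees
# nets €247 — the absolute number they actually think in.
earnings = net_earnings(280.0, 0.10, 5.0)
```

Trivial arithmetic, but sellers had to repeat it by hand for every product — which is exactly the kind of friction the net earnings feature was meant to remove.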

Research finding: The gap was consistent across every GMV segment, from the smallest resellers to the largest. ML was justified because the patterns that help sellers (elasticity, seasonality, competitive positioning) are too complex and too dynamic for static rules.
The Automation Spectrum

The single design question that shaped every feature

I mapped the decisions sellers make daily, the information each required, and what Back Market could credibly provide. For each opportunity, the same question emerged: should we show the intelligence and let sellers act, or should the system act on their behalf? It turned out to be the most consequential design decision in the programme.

The stakes were unusually high. Back Market charges the highest commission rates in the market, and sellers assume the platform defaults to the customer. Any AI acting without explicit consent would be interpreted as the platform acting in its own interest.

What sellers expected: set €280 in the Back Office; the customer sees €280 on the product page. Price matches expectation.

What DPU actually did: the seller sets €280; DPU adds a €22 adjustment fee; the customer sees €302. The seller discovers the difference via the orders page.
Sellers were not told the adjustment fee existed until after it was running. Trust damage was immediate.
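The mechanics of the mismatch are two lines of arithmetic (the €22 fee is the example figure from above; the variable names are illustrative):

```python
seller_price = 280       # what the seller set in the Back Office
dpu_adjustment = 22      # adjustment fee DPU added, unknown to the seller

expected_customer_price = seller_price                  # seller's mental model
actual_customer_price = seller_price + dpu_adjustment   # what buyers saw

# The gap only surfaced on the orders page, after sales had occurred.
surprise = actual_customer_price - expected_customer_price
```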

The Automation Spectrum

To navigate this, I proposed a four-level spectrum — from passive data surfacing to full automation — and mapped every AI feature onto it. The spectrum became the governing framework for the entire programme: a feature can only occupy a level of automation proportionate to the trust the programme has earned with the audience it's serving.

Level 1: Surface data
Make the invisible visible. No recommendation. No action. Just information the seller didn't have before — competitor pricing, sales velocity, demand trends.

Level 2: Recommend an action
AI suggests, seller decides. No automation. Anchored to a concrete outcome forecast — not "our model suggests €X" but "at €X, we estimate 12–18 sales in 7 days."

Level 3: Automate with consent
Seller configures the rules explicitly — floor, ceiling, window. System acts within those bounds. The automation is sanctioned, which is why it works.

Level 4: Automate fully
System acts without per-feature seller configuration. Only appropriate with a long, proven track record — and explicit opt-in, never opt-out.

The governing principle: Build for the level of trust you've earned, not the level of automation you want. Dynamic Pricing Up jumped to Level 4 before the programme had earned it. BackPricer succeeded at Level 3 because sellers had already experienced the underlying intelligence as a recommendation at Level 2.
What We Shipped

Each feature answered the same design question at a different level of automation

How a recommendation evolves

SMP went through three major versions. Each iteration moved the framing further from the algorithm and closer to the seller's mental model.

Design iteration
How the SMP recommendation evolved across three versions
V1 — Launch
Model-centric framing
Showed recommended price + model confidence score. Sellers didn't understand confidence percentages — "73% confident" felt uncertain rather than reassuring.
Problem: Low engagement, high confusion
V2 — Outcome framing
Sales forecast anchoring
Replaced confidence scores with outcome forecasts: "at €X, we estimate 12–18 sales in 7 days." Engagement improved, but sellers wanted to compare against their current price.
Progress: Better comprehension, still passive
V3 — Current
Comparative + one-click apply
Added current-price comparison, delta indicator, and a single "Apply price" button. Framing shifted from "here's a suggestion" to "here's what changes if you act."
Result: Highest engagement version
Key learning: Sellers respond to consequence framing ("what changes") over model framing ("how confident"). Each iteration moved further from the algorithm and closer to the seller's mental model.
SMP recommendation card evolution — V1 model-centric, V2 outcome framing, V3 comparative with one-click apply
SMP price recommendation shown inline on a seller's product management page
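The V1→V2 shift amounts to rendering the same model output through a different frame. A sketch of the two framings (function and field names are hypothetical illustrations):

```python
def model_framing(price: float, confidence: float) -> str:
    # V1: exposes the model's internals. "73% confident" read as
    # uncertainty to sellers, not reassurance.
    return f"Recommended price €{price:.0f} (model {confidence:.0%} confident)"

def outcome_framing(price: float, sales_low: int, sales_high: int, days: int) -> str:
    # V2: anchors the same recommendation to a concrete consequence.
    return f"At €{price:.0f}, we estimate {sales_low}–{sales_high} sales in {days} days"
```

Same prediction underneath; only the copy changes. V3 then added the comparison against the seller's current price and a one-click apply on top of this framing.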

Where AI lives in the product

AI currently surfaces across five distinct locations in the Back Office. Each was designed independently, but the programme is moving toward a coherent architecture where intelligence is present throughout the seller workflow.

Where AI surfaces in the Seller Back Office

Back Office navigation: Home · Insights · Customer Care · Listings ★ · Orders · Opportunities ★ · Money · Options · Seller Support (★ = pages with primary AI touchpoints)

① Listings page · SMP recommended price: inline price suggestion + sales forecast per BackBox
② Listings page · BackPricer toggle: automated repricing within seller-configured bounds
③ Opportunities page · Pricing Opportunities: Quick Wins · Listings in Trouble · BackBoxes to Boost
④ Orders page · DPU adjustment visible: customer-pays price shown — where sellers discover DPU
⑤ AI Assistant · floating chat, available on any page
AI Support Assistant chat drawer floating over the Seller Back Office listings page
Pricing Opportunities interface showing Underpriced, Overpriced, and Volume categories with bulk actions

Working with the ML team

Every feature required close collaboration with Back Market's data science team. The hardest negotiation was the cold-start problem: I pushed for a minimum 90-day accuracy window before surfacing track record data in the UI. The ML team resisted because it delayed a key engagement metric. We compromised on surfacing provisional accuracy earlier with explicit "early data" labelling. It was the right call. Showing unqualified numbers early would have undermined the trust the programme was built on.
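The compromise we landed on can be expressed as a simple gating rule. The 90-day threshold and "early data" label come from the negotiation described above; the code itself is an illustrative sketch, not the shipped implementation.

```python
from datetime import date

ACCURACY_WINDOW_DAYS = 90  # minimum track record before unqualified display

def accuracy_label(first_prediction: date, today: date, accuracy: float) -> str:
    """Surface accuracy immediately, but qualify it until 90 days of data exist."""
    track_record = (today - first_prediction).days
    if track_record < ACCURACY_WINDOW_DAYS:
        return f"{accuracy:.0%} accurate (early data, {track_record} days)"
    return f"{accuracy:.0%} accurate over {track_record} days"
```

The UI shows the same number either way; the label is what keeps an immature figure from reading as an established track record.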

Designing for Trust

Four requirements for trust

Research surfaced four trust requirements. Each one shaped specific design decisions across the product.

What sellers require to trust AI pricing
Seller control · 51.8%
Ability to accept or reject every recommendation
Transparent explanation · 47.3%
Understanding the reasoning behind a suggestion
Proven accuracy · 41.1%
Demonstrated track record over time
Source: Structured interviews across all GMV segments. "What would you need to trust an AI pricing recommendation?"
Consent before any automated action

Not a preference — a prerequisite. Sellers who discovered automation through its consequences (orders at prices they didn't set) stopped trusting every AI feature on the platform. DPU proved that no amount of good design recovers from a consent violation.

The research quote that became our design brief: "Sellers need the tools to make well-informed decisions — they don't trust Back Market to act in their best interest."
Reflection

What it actually takes to get AI right

Every AI initiative in 2024 had pressure behind it — pressure to ship features with "AI" in the name, show progress to the board, keep pace with competitors. We pushed back on that. The question was never how do we add AI to the Back Office — it was where does AI actually make a seller's job easier, and how do we earn the right to put it there? That framing shaped every decision in this programme. The data below shows where it worked and where we still fell short.

67%
of SMP price recommendation clicks come from sellers already winning their category
The feature was designed to help losing sellers. It's being used predominantly by winning sellers — as a confirmation signal, or a tool to identify when they can go further.
915
CSV data downloads vs 0.2% in-product action rate on the same data
Sellers took the data and modelled decisions externally rather than trusting the in-product action. "I want your data. I'll do my own analysis."
€15
Average gap between AI-recommended price and what sellers actually set
This is the 2026 North Star metric — the gap we're designing to close. Success isn't measured by how many sellers click a recommendation, but whether their actual prices move toward the AI's suggestion.
The gap we're designing to close: the AI recommends €X; sellers actually set €X+15, an average gap of €15. Success isn't measured by clicks on a recommendation — it's whether actual prices move toward the AI's suggestion.
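The North Star can be computed directly: the average signed gap between what the model recommended and what the seller actually set. A sketch with made-up sample numbers; a real measurement would weight by listing volume.

```python
def average_gap(pairs: list[tuple[float, float]]) -> float:
    """Mean of (actual seller price − AI-recommended price) across listings."""
    return sum(actual - recommended for recommended, actual in pairs) / len(pairs)

# Hypothetical sample: each pair is (recommended, seller-set), in €.
sample = [(250, 268), (99, 112), (310, 322), (180, 197)]
gap = average_gap(sample)  # positive → sellers price above the recommendation
```

Keeping the sign matters: a positive gap says sellers systematically price above the recommendation, which is a different design problem than random scatter around it.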

The consistent finding: sellers express a coherent form of partial trust — they'll take the data, but not the recommendation. They're not disengaged; they're careful. The programme's job isn't to replace seller judgment — it's to give sellers the information to exercise it more confidently.

How sellers actually use AI recommendations
100% of sellers receive the SMP recommendation · 67% of clicks come from sellers already winning · 915 CSV downloads in 3 months · 0.2% in-product action rate
Sellers extract the intelligence, not the recommendation. "I want your data. I'll do my own analysis."

These findings pointed to a single conclusion: sellers didn't reject AI — they rejected AI that hadn't earned their trust yet. The order we introduced features mattered as much as the features themselves.

What we'd do differently

The accountability we wanted required getting the order right. Each feature either builds credibility or creates suspicion that every subsequent feature inherits. Shipping AI that works isn't a technology problem — it's a planning problem.

Ideal sequence
1. Surface data (2–3 mo)
2. Recommend (3–6 mo)
3. Automate with consent
4. Catalogue intelligence
What actually happened
1. SMP launched (limited data phase)
!  DPU launched at Level 4 — no consent
3. BackPricer succeeded (seller-configured)
4. Pricing Opportunities (inherited DPU scepticism)

If we were starting again — with the luxury of sequencing the programme around trust instead of around shipping deadlines — we'd move through four stages:

Trust builds sequentially — each stage earns the right to the next
1. Surface data (2–3 months): prove data accuracy
2. Recommend (3–6 months): build an accuracy track record
3. Automate with consent: seller-configured bounds
4. Catalogue AI (6–12 months): full catalogue intelligence
Based on seller research: 51.8% require control to accept/reject · 47.3% require transparent explanation · 41.1% require proven accuracy over time

The actual programme deviated under shipping pressure. BackPricer launched with a limited SMP track record. Dynamic Pricing Up went to Level 4 before the programme had earned it — exactly the kind of "ship AI because we can" decision we'd set out to avoid. The trust damage cast doubt on every subsequent AI feature.

Where this goes next

This is Phase 1 of a multi-year programme. The roadmap operates across three horizons:

Programme roadmap
2026 H1
Closing the €15 gap
Q1: Pause FR/ES Deals for DPU analysis
Q1: SMP copy + trust improvements
Q1–Q2: ML model for Dynamic Adjustment Fee
Q2: Deals relaunch with DPU learnings
2026 H2
System-level intelligence
Q3: Dynamic Pricing Down (BM-financed)
Dashboard anomaly detection
AI-assisted pricing at listing creation
Demand forecasting in inventory
2027+
Ambient AI
Pricing Agent (AI sets prices in seller bounds)
Catalog Agent (AI for listing creation)
Support chatbot → task execution
Transferable framework beyond BM
The lesson: AI is powerful — but only when the planning behind it is as rigorous as the technology itself. You can't design your way to trust with a user base that has structural reasons to be cautious, and you can't ship your way there by adding more features. What earns trust is a sequence of AI tools that did what they said, over months, across thousands of sellers — not because AI was the trend, but because each tool made a seller's job measurably easier. The design job is to structure that sequence accountably.