17 Experiments.
8 months.

Role

Lead UX designer

UX Research

Team

UX design 

Product owners

Dev team 

Editorial

Creative director 

Company

BBC

Project time

Ongoing

My Role

As Lead Product Designer on the BBC’s Growth team, I led the end-to-end design process—from strategy and ideation to implementation and live experimentation.

Partnering with a Senior UX Designer, we crafted experiences grounded in user research and data-driven insights, optimised specifically for rapid A/B testing and iterative improvement.

In a newly formed, cross-functional team, I collaborated with PMs, engineers, creative and editorial leads, as well as other BBC product teams, to integrate critical components for experimentation.

Our focus was designing contextual sign-in experiences to drive Weekly Active Accounts (WAA), enable personalised content, and scale high-velocity A/B testing across BBC Sport and News.

The Challenge

BBC content is mostly accessible without requiring users to sign in, which means many people browse anonymously. This limited the organisation’s ability to personalise experiences or understand user behaviour. To address this, the BBC formed a new Growth team focused on using experimentation and data to uncover user needs, increase sign-ins, and deliver more value to both users and the business.

Why was this project formed?

The Senior Leadership Team (SLT) was particularly interested in understanding which experiments had the biggest impact, what worked well and what didn’t, and in using that data to drive discussions around future priorities. As a result, we focused on designing contextual sign-in experiences to increase Weekly Active Accounts (WAA).

Goals

How might we increase user sign-ins without harming content consumption, while achieving a 3% uplift in key engagement metrics and supporting a higher volume of experiments across BBC products like article pages, BBC Live, Sport, and News?

Constraints

  • Run three experiments a month

  • Reuse sanctioned components

  • Scale delivery fast

  • Users didn’t see the value of signing in

Building empathy

Using insights from both quantitative and qualitative research, we identified key user behaviours and pain points to inform the redesign of the sign-in experience. From internal survey data and A/B test results, it became clear that users rarely signed in unless required; optional prompts saw minimal engagement. Feedback from our Capita platform revealed why: sign-in requests were perceived as disruptive, the benefits were unclear, and there was little trust in the value proposition.

These findings helped me better understand user motivations and shaped the priorities for improving the sign-in flow.


3-Day Workshop

To turn research insights into clear design directions, I co-led a three-day workshop with colleagues from Product, UX, and Accounts. Together, we aligned on user needs, business objectives, and practical solutions for improving the sign-in experience.

Day 1: Journey Mapping

We mapped the existing sign-in journey to uncover friction points, charting user motivations, behaviours, and drop-off stages. This helped us understand where and why users disengaged or abandoned the process.


Day 2: Theming & Prioritisation

We developed “How might we…” statements based on patterns identified in user feedback, then grouped insights into key opportunity areas:

  • Communicating the value of signing in

  • Feature-based incentives to encourage sign-in

  • Comparing modal versus embedded sign-in patterns

We used dot voting to prioritise these themes based on perceived user impact and strategic value.


Day 3: ICE Scoring & Experiment Design

We prioritised ideas from the workshop using a three-step process: first, dot voting to surface the most promising concepts; then an impact–effort matrix to balance feasibility with potential value; and finally, ICE scoring (Impact × Confidence × Ease) to rank the top ideas and refine them into clear use cases.
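
As a rough illustration of how the ICE scoring step ranks ideas, here is a minimal sketch; the idea names and scores below are hypothetical placeholders for illustration, not the actual workshop data.

```python
# Minimal ICE-scoring sketch. Idea names and scores are hypothetical
# examples, not the real workshop outputs.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # expected effect on sign-ins, rated 1-10
    confidence: int  # how sure we are the impact is real, rated 1-10
    ease: int        # how cheap it is to build and test, rated 1-10

    @property
    def ice(self) -> int:
        # ICE score = Impact x Confidence x Ease
        return self.impact * self.confidence * self.ease

ideas = [
    Idea("Explain cross-device sync benefit in the prompt", 8, 7, 9),
    Idea("Mandatory sign-in after N articles", 9, 6, 5),
    Idea("Dismissible sign-in banner", 4, 5, 9),
]

# Highest ICE score first: these become the candidate experiments.
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.ice:>4}  {idea.name}")
```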

One example hypothesis: “If sign-in prompts clearly communicate tangible benefits, like syncing progress across devices, more users will sign in voluntarily.”

These priorities guided our first prototypes, which we tested with users to validate assumptions early and ensure solutions were shaped by real user needs, not just internal preferences.

Outcome: Expectation vs Reality – Sign-In Experiments Roundup

Over nine months, we completed 17 experiments, 16 of them focused on improving sign-in. Some delivered significant wins; others revealed valuable “what not to do” lessons.

Mandatory sign-in (MSI) consistently outperformed softer approaches. In five separate tests, MSI generated the highest uplift in sign-ins compared to banners or dismissible prompts. Our strongest result came from the Sport Article Limit MSI (sign-in required after three articles per day), which achieved a +4.45% uplift in sign-ins with a 1.58% drop in page views.

Dismissible sign-in modals (DSI) were far less effective. Across four tests, 98% of users clicked “Maybe Later” and almost none signed in, a clear signal that this format wasn’t motivating action.

We also uncovered a significant abandonment rate: in MSI experiments, 30% of users began sign-in but didn’t complete it. This insight directly informed our redesign priorities, focusing on streamlining the sign-in flow to reduce drop-off.

What I learned 

Owning the full process - From ideation to live implementation, I led the design process end-to-end. This ownership enabled faster iteration, more focused decision-making, and a clear through-line from concept to measurable impact.

Experimentation is cross-disciplinary - Working closely with Product, Engineering, and Analytics strengthened our approach to experiment design, hypothesis setting, and interpreting results. This collaboration not only improved the quality of our tests but also accelerated delivery.

Collaboration fuels speed - Tight alignment across teams meant we could launch experiments quickly, gather reliable data, and act on insights without delay.

Retrospectives drive improvement - A dedicated, safe space for Product, Design, Engineering, Data, and Editorial to share friction points revealed process, communication, and tooling gaps — and gave us a shared plan to address them.

Fake-door tests need more depth  - Clicks alone can be misleading. Pairing fake-door tests with post-click surveys would have given us richer insights into user intent and expectations.

Recognising team burnout - During the project, I observed signs of burnout among team members due to the intense pace of running multiple experiments. While I wasn’t responsible for managing the workload, I raised this concern during our retrospective and suggested implementing clearer workload balance and regular check-ins. These suggestions helped the team create a more sustainable approach for future projects.

What happened next

Our findings were presented back to the Senior Leadership Team through a data-driven review session, where we highlighted which experiments had the greatest impact, what hadn’t worked, and where future optimisation efforts should focus. This evidence-based approach helped shape ongoing sign-in strategy and informed the next round of product roadmap decisions.

As next steps for Growth, reusable components were standardised and documentation was centralised to streamline future experimentation, enabling faster iteration and knowledge sharing across teams. Additionally, closer collaboration with the Accounts team was established to tackle sign-in drop-offs and improve conversion rates.
