Bringing organisation to chaos
Timeline:
2020-2022
Hats Worn:
UX Design
UX Research
Project Management
Team:
1 Lead Designer
2 Visual Designers
1 Product Owner
4 Developers
⚠️
Note: Some details have been adapted to respect confidentiality, but the case study reflects the real work and impact.
TL;DR / Quick summary
Proact began in the chaos of remote work, when scattered tools and lost knowledge made collaboration harder than ever. As lead designer, I framed the strategy and roadmap, mentored junior designers, partnered with PMs and developers, and kept us grounded in real user needs.
Through surveys, focus groups and interviews we uncovered the mindsets and themes that shaped every decision, then used them to cut through the noise and focus the MVP on discovery, visibility and trust.
Proact came alive as a dashboard, library, project views and reporting system. Testing proved its impact, with a 20% productivity boost and nine hours saved each week for practitioners, and a phased rollout gave us room to refine at every step.
But when engagement dipped, I introduced a bold idea that no one believed in at first, and it ended up transforming Proact’s retention and driving some of its biggest wins.
What started as a response to fragmented collaboration grew into a platform trusted across studios and later white-labelled for other organisations.
👉
Want to know what that crazy idea was and how it changed everything? Scroll down to read the full story.
In the studios, resources and tools were created together. Developers, analysts and managers worked side by side, mapping what was needed, refining it in the room, and carrying that knowledge across projects. A whiteboard session was how shared understanding took shape.
When the pandemic broke that rhythm, the centre of gravity shifted. The shared picture of what already existed began to fade.
Developers built new assets because they could not see their peers’ work.
Teams created similar tools in parallel.
Managers found it harder to see which resources were in play.
Leadership lost a clear view of adoption and impact.

People were still collaborating. The problem was that the collaboration no longer had a single place to live. The channels we now relied on were not set up for shared discovery, validation and stewardship of resources. The simple question, "What already exists?", became slow to answer.
This was a signal. We needed a remote-first way to make shared knowledge visible, measurable and easy to act on.
From Fragments to Focus
The sudden shift to remote work broke apart the way resources and tools were created, shared, and understood. What once came together through collective effort in the studio was now scattered across countless channels and repositories, each holding fragments of the bigger picture but never the whole.
The question quickly became clear:
🤨
How might we recreate that sense of shared ownership and visibility in a remote-first world?
We needed a way to bring practitioners, managers, and leadership into a single space where resources weren’t just stored, but could be discovered, validated, and built upon. A space that made collaboration feel as natural and seamless as it had around the whiteboard, but designed for the realities of remote work.
My Role
I was the lead designer on Proact, with two designers fresh out of university joining me on the project.
My job was to frame the strategy, define the roadmap, and guide the design direction as the platform took shape. I also had to think about how to keep people engaged over time, which later led to the gamification strategy.
Mentorship became a big part of my day-to-day. The designers I worked with were talented but still finding their footing, so I gave them direction while leaving space for them to experiment and grow. Looking back, helping them build confidence in their craft was just as important as getting the pixels right.
Beyond design, I was the bridge between people. I worked with developers to make sure what we imagined could actually ship. I partnered with the PM to align the roadmap with business priorities. And I conducted research and testing with practitioners and managers to keep us anchored in the reality of how people actually worked.
One of the more interesting challenges was thinking beyond immediate design solutions. It was clear that success would not only depend on building the right features but also on finding ways to sustain engagement once people started using the platform. That question stayed with me throughout the project and shaped some of the most impactful decisions later on.
Identifying the User Groups
From the start, it was clear we couldn’t design for “everyone.” The pandemic had affected roles differently, and we had to understand those nuances if Proact was going to work.
Our first assumption was that practitioners (the developers, analysts and BAs doing hands-on work) would feel the brunt of the problem. Without a central space, they were bound to spend too much time piecing together resources. That felt obvious, but we didn’t want to take it at face value.
We also suspected managers and SMEs (Subject Matter Experts) were struggling in less visible ways. We assumed they had limited oversight of the resources their teams were using and were making decisions with incomplete information. Leadership, we thought, would be concerned less with the daily grind and more with the big picture: adoption, efficiency and cost savings.



These were hypotheses, not facts. So we mapped out the three groups (practitioners, managers/SMEs, and leadership) to structure our research and to test whether these assumptions actually held true.
From Hunches to Evidence
Our strategy was to combine breadth with depth. We needed to capture both the scale of the issue and the lived experiences behind it.

Surveys gave us validation at scale. With 187 responses across studios, we could see how widespread certain challenges really were. The survey helped us move from anecdote to evidence.
Focus Groups (50 participants) helped us see collaboration in action. We assumed that knowledge-sharing was breaking down because of fragmented tools, but we needed to observe how people were actually working together. The group format revealed how frustrations compounded when multiple people tried to align, and it highlighted the social side of collaboration that a survey couldn’t capture.
One-to-One Interviews (15 participants) gave us nuance. We wanted to understand not just what people did, but why. These conversations grounded the research in personal stories.
The mix was intentional: surveys for validation, focus groups for dynamics, and interviews for depth. Together, they gave us a 360-degree view of the problem.
87% of participants in the Leadership cohort highlighted the need for greater visibility into projects in the studio.
Nearly 80% of interviewees shared that they had to go through multiple repositories or reach out to several collaborators before finding the right resource.
6 of 6 studios within Deloitte cited collaboration and visibility as one of their top challenges, calling the problem "very important."
96% of survey respondents indicated a need for easier discovery of the assets, tools and standards in use, to avoid duplication.
The Assumptions We Tested
Going into the research, we had a few big questions we needed to validate:
01.
Is lack of discovery really the biggest blocker for practitioners, or are there other pain points we hadn’t considered?
02.
Do managers struggle more with approving and tracking resources, or with communicating about them?
03.
Is leadership mainly concerned with adoption rates and cost savings, or do they also care about qualitative outcomes like collaboration and knowledge growth?
Validating these assumptions mattered, because each pointed to a different design direction. If discovery was the biggest issue, search and tagging would be critical. If leadership cared most about efficiency metrics, reporting tools would take priority.
How Might We Workshops
Armed with research insights, we turned our attention to possibility. We knew the danger of jumping straight into features, so we framed the challenge through a set of “How Might We” questions.
This step was about moving from pain points to opportunities. Instead of saying, “People can’t find resources,” we asked, “How might we make assets and tools easily discoverable across the Deloitte network?” Instead of “Leaders can’t measure adoption,” we asked, “How might we track the impact of resources on projects?”
We ran multiple sessions, bringing stakeholders into the conversation to co-create the framing. Some of the questions we landed on became the north star of the project:
🌟
How might we boost productivity for developers without adding more admin?
🌟
How might we motivate people to contribute to the shared pool of resources?
🌟
How might we give leaders visibility into adoption in a way that was meaningful, not just another dashboard?
These "How Might We" questions became the criteria we used to evaluate every idea, every flow, and eventually, the MVP.
The research showed us how people thought about their work. We mapped these behaviours into mindsets.
There were the:
Collaborators, who thrived on sharing but got stuck when channels were scattered
Self-learners, who wanted to figure things out on their own without chasing ten different people for links
Solution Seekers, focused on getting things done quickly and frustrated when the tools slowed them down
Aware, who wanted to understand the bigger picture around what others were using before making their own choices
Leaders, looking for visibility across projects and resources so they could make better decisions
Explorers, curious and open to new approaches but often lacking a clear starting point


These mindsets shaped the design direction. Proact had to be:
discoverable and searchable, so that resources felt easy to find rather than buried
transparent and engaging, giving people confidence in what they were looking at and reasons to interact with it
seamless and unobtrusive, blending into existing workflows instead of becoming another layer of admin
rewarding and complete, so people felt motivated to contribute and could trust that what they needed would always be in one place
We treated these like checkpoints. If a feature didn’t reflect a mindset or reinforce a theme, it didn’t make it into the build.
Prioritising What Mattered
The next challenge was deciding what to build first. On paper, everything looked valuable. Dashboards, endorsements, discussion boards, usage metrics. Each had its appeal, but chasing them all at once would have buried us in scope creep and delayed the launch.

We used MoSCoW prioritisation to force ourselves to make trade-offs.
The must-haves were the foundation: a central asset library, strong search with tagging and filters, and clear reporting for leadership. They directly answered the biggest pain points we uncovered. Without them, Proact wouldn’t solve the problems of discoverability or visibility.
The should-haves gave us a sense of direction. Features like comparison views, activity tracking and a notification centre weren’t essential for launch, but they showed how Proact could grow once the basics were in place.
The could-haves were ideas that sounded exciting in workshops but didn’t hold up against the research. Early on, I wanted to keep a few of them, the kinds of features that make a product feel richer. But the themes kept us grounded. If a feature didn’t clearly connect back to discoverability, transparency, seamlessness or reward, it didn’t make it in.

For me, it came down to discipline as much as design. Building for impact meant stripping things back and staying focused. It reminded me that a clear MVP will always hit harder than a product that tries to do everything at once.
Mapping the Flows
Once we knew what to build, the focus shifted to how people would actually move through it. The MVP had to be simple. Practitioners needed to jump in, find a resource, link it to a project and move on. Managers and leaders needed visibility without drowning in detail.
We mapped flows to test that thinking. Laying it all out made it clear where things felt smooth and where they broke.

What I noticed early was how easy it was to overcomplicate. Some of our first sketches had branches everywhere, trying to cover every possible edge case. The moment we put them in front of people, the reaction was clear: just give me the fastest path to what I need. That set the tone for the flows: keep it lean, keep it obvious.
Shaping the Hierarchy
Flows gave us the “how.” The hierarchy gave us the “what.” We structured it in Figma around three anchors:
the dashboard - tied users to their projects
the asset library - made discovery simple with search, filters and tags
the project views - pulled everything together, showing what was in play and what state it was in

Mapped out a clear information architecture
The takeaway for me was simple. A good hierarchy isn’t about showing everything upfront. It’s about hiding what doesn’t matter until it does.


As the MVP came together, four core screens anchored the platform:
Dashboard: The entry point, tied directly to a user’s projects. This gave instant context and made Proact feel connected to real work instead of just another repository.
Asset Library: The heart of discovery. Search, filters and tagging helped practitioners cut through the noise. Each resource carried context — creator, endorsements, feedback — so people knew they could trust what they were seeing.
Project View: The space where everything came together. Teams could see which resources were in play, add new ones and share feedback in real time. This was where scattered efforts finally felt consolidated.
Reporting: Built for managers and leadership, this view showed adoption, time saved and cost savings. It gave higher-level visibility that had been missing across the studios.
The MVP screens
Testing and Results
We rolled the MVP out in phases, starting with practitioners and gradually involving managers and leadership. Usability testing checked if people could complete core tasks without friction:
→ find a resource,
→ link it to a project,
→ check adoption.
Compared to pre-Proact tools, practitioners reported a 20% increase in productivity, largely because they no longer spent hours hunting across multiple repositories. On average, the MVP gave them back nine hours a week that they could now use on project work instead of admin.
“It’s so much easier to find resources and keep track of new ones being used in other projects. However, some more customisation can be included.”
Managers valued the clarity project views brought, while leadership finally had reporting that showed adoption in numbers rather than anecdotes. For them, Proact closed a gap that had been open since studios went remote.
“The function to link assets to my projects and have that shared with my team in real-time saves me countless minutes!”
Building on Top
The MVP wasn’t the final destination. It was the foundation. The phased rollout gave us the chance to refine constantly based on feedback: simplifying navigation, adjusting hierarchy, and tuning reporting to give leaders more actionable data. Each cycle made the platform stronger. By the time Proact launched across all studios, it had grown from a prototype into a system people trusted and wanted to use.
Proact was rolled out in stages, refining with every group of users and growing confidence as the platform scaled. Practitioners were the first to use it, quickly finding that the library and project views cut through the confusion of scattered repositories. Managers appreciated having a single view of what resources were in play, and leadership finally had reporting they could use to measure adoption and impact.
The rollout gave us proof that Proact worked. People were using it, projects were clearer, and leadership had the visibility it had been missing. But keeping engagement high was a different challenge. After the initial buzz, usage began to flatten.
Then I came up with a crazy idea. One that stakeholders weren’t impressed with at all. It sounded risky, maybe even unnecessary. But it ended up transforming Proact’s retention rates and driving some of the biggest wins of the project.
👉
Read the full story: How Gamification Transformed Proact
What happened after was just as exciting. Off the back of our success, Proact was white-labelled to be sold to other consulting organisations, turning an internal solution into a product with a life of its own.
The design system we built gave it scalability, and the phased rollout approach proved it could adapt across different contexts.
What This Journey Taught Me
Proact was more than a platform. It was a two-year journey that started with the sudden chaos of remote work and ended with a system that people could finally rely on.
Along the way, I went from sketching wireframes on a blank canvas to leading a team through strategy, design, testing and rollout. I mentored new designers who were just finding their footing, partnered with developers and PMs to turn concepts into builds, and kept pushing to keep us grounded in what users really needed.
The project had its ups and downs. There were moments of excitement when people first saw the library and dashboard in action, and moments of doubt when engagement started to slide. There were debates with stakeholders, tough trade-offs in prioritisation, and more feedback sessions than I can count. But every step built on the last, and slowly Proact grew from an experiment into something people trusted every day.
What stays with me most is how much I learnt about balance. Building features is important, but building adoption matters just as much. A clear MVP can change behaviour, but keeping people engaged requires creativity, persistence and sometimes a crazy idea that nobody believes in until it works.
Looking back, Proact shaped me as much as I shaped it. It taught me how to hold the big picture while sweating the details, how to lead with strategy but stay open to pivots, and how to design not just for usability, but for momentum. It’s one of the projects I’m proudest of — not just because of the results, but because of the journey it took to get there.
I received an award for my work on Proact 😎