Test Hub Next
Google, Stadia — September '22
My Role
Lead Designer — Feature Scoping, Research, Interaction Design, Visual Design, Prototyping
Team
Dean Whillier, UXE
Lisa Clift, UXR
Tony deCatanzaro, SWE
Ryan Bartely, PM
Timeline & Status
3 Months, Sunset
Overview
Test Hub was a companion web app originally built to be used by Stadia game developers to monitor, emulate, capture, and debug their games in the cloud.

I owned and led design strategy for the future of Test Hub in support of Stadia's B2B pivot — playing a critical role in scoping and prototyping a transformative feature set, while modernizing the product as Google's Material Design 3 guidelines were emerging.

Test Hub Next's vision was highly praised by key stakeholders but was, unfortunately, sunset alongside Stadia in September 2022.
HIGHLIGHTS
The future of playtesting in the cloud — adaptable to everyone and all the different types of interactive content they develop.
0.1 Review Timeline Interaction.
VIDEO LOOP
0.2 Supporting Pane Interaction.
VIDEO LOOP
0.3 Graph cards.
IMAGE
0.4 Graph customization modal.
IMAGE
0.5 Sample customization interface.
IMAGE
0.6 Review Navigation.
IMAGE
CONTEXT
A new direction for Stadia.
Immersive Stream.
In Q2 of 2022, Google announced that Stadia would be pivoting from a consumer platform to a B2B, enterprise-grade service called Immersive Stream (Figure 1.0).
1.0 Article from 9to5Google.
IMAGE
Test Hub Beta.
Prior to the pivot, an MVP beta version of Test Hub (Figure 1.1) was released to AAA partners and was well-received.

It gave a glimpse into what could be achieved with Stadia's powerful tech stack.
1.1 Test Hub Beta.
IMAGE
PROBLEM SPACE
Stadia's top-notch infrastructure was being held back.
Test Hub Beta was ultra-niche:
It was made exclusively for developers at partnered AAA studios with high cognitive load tolerance.
It was only used for porting existing games onto Stadia's cloud infrastructure.
2.0 Test Hub Beta's current scope.
IMAGE
Emerging opportunities:
What if Test Hub was more attractive to a wider range of developers across varying levels of technical expertise?
What if that meant that more types of content could be made — and not just ported?
THE CAVEAT
No one really had a concrete sense of direction as to how we could materialize and productize the vision of Immersive Stream.
RESEARCH SUMMARY
Clearing the fog — for all the ways we playtest.
Why I started with research:
To gain a deeper understanding of the playtesting problem space
To identify practical directions for building the North Star and seizing the opportunities
1. The playtesting/QA vertical?
It was critical to first understand where the playtesting vertical fit in amongst the other pillars of standardized development lifecycles (Figure 3.0).

Based on historical user interview data, it turned out to actually be a (very messy) horizontal.
3.0 Playtesting/QA as a vertical?
INTERACTIVE
2. More users making more things.
A research workshop informed us that in order for Immersive Stream's vision to succeed, we had to significantly broaden the scope of our user base beyond highly specialized AAA studios.

Naturally, this meant considering how the vision could support a myriad of new content categories, too (Figure 3.1).
3.1 Immersive Stream's scope.
IMAGE
3. What our current user base was saying.
Lastly, it was key to account for existing feedback from Test Hub's beta launch (Figure 3.2) — this way I had a foundation to continue building upon.
AAA developers highly value text logs.
Higher levels of technical expertise and workflow complexity are associated with a greater need for a customized workspace.
3.2 Test Hub Beta feedback, dramatized.
IMAGE
What it all leads to — the North Star principles:
01
Adaptable
Accommodate a new spectrum of users, their workflows, and the content they create.
02
Informative and glanceable
Make data easier to understand and surface insights to users while minimizing interaction.
03
Visionary and forward-looking
Have fun, generate excitement, and lead by design.
HIGH-LEVEL AUDIT
Starting from what we already had.
Test Hub Beta, but through a more critical lens.
The audit was conducted with the project's principles and updated Material Design 3 standards/guidelines in mind — helping surface notable usability and heuristic issues.
4.0 Audit exhibit A.
INTERACTIVE
Foreign component usage. Top navigation was uncharacteristic of Material Design.

Chrome-like tabs were an overcomplicated affordance for most use cases.
Inefficient information architecture. Metadata placement forced data below the fold.
4.1 Audit exhibit B.
INTERACTIVE
Low glanceability. Visual design treatment was limited to graph marks, hiding numeric data behind hover interactions.
Functional inconsistencies. Actions and controls were stored in the sidebar, but were also present elsewhere.
Law of common region & proximity.
The positive feedback received from Test Hub’s Beta launch signalled against any drastic layout changes.

The focus then was to adopt new Material Design 3 principles unobtrusively — leading to layout consolidation based solely on functionality (Figure 4.2).
4.2 Layout consolidation.
INTERACTIVE
THE MODULE MODEL
Every function in its own place.
A modular approach to data representation.
Test Hub’s defining feature was its ability to show users different types of data (audiovisual output, textual logs, and graphs). 

To make the data easier and faster to interpret, my approach was to treat each data type as its own distinct module — again following the law of common region (Figure 5.0).
5.0 Sample modules.
IMAGE
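As a rough illustration of the module model (not taken from Test Hub's actual codebase), the sketch below models each data category as its own module type in TypeScript; every name here is a hypothetical stand-in.

```typescript
// Hypothetical sketch: each data category becomes its own module type,
// so the workspace can lay them out and customize them independently.

interface BaseModule {
  id: string;
  title: string;
}

interface VideoPreviewModule extends BaseModule {
  kind: "video";
  streamUrl: string; // live audiovisual output from the cloud instance
}

interface StreamLogsModule extends BaseModule {
  kind: "logs";
  entries: { timestamp: number; level: "info" | "warn" | "error"; message: string }[];
}

interface PerformanceGraphsModule extends BaseModule {
  kind: "graphs";
  metrics: { name: string; unit: string; samples: { timestamp: number; value: number }[] }[];
}

// A workspace is simply an ordered collection of modules.
type DataModule = VideoPreviewModule | StreamLogsModule | PerformanceGraphsModule;
type Workspace = DataModule[];
```

Keeping each data type behind its own discriminated variant is what later makes per-module customization and layout changes straightforward.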
MODULE 1 — VIDEO PREVIEW
"A video is worth a million moments."
A more visually dynamic playtesting experience.
The existing video output feature from an alpha build of Test Hub was removed for the beta release because most AAA partners were already using multiple monitors.

However, this wasn’t the case for less technical developers. By bringing back audiovisual output (Figure 6.0), entire playtests could be done all in one window.
6.0 Video Preview module.
IMAGE
MODULE 2 — STREAM LOGS
The ol' bread & butter.
Before — It was missing the butter.
Initial research insights told us that textual logs were highly valued by developers, but a couple of usability issues contradicted this narrative (Figure 7.0).
7.0 Test Hub Beta logs.
IMAGE
Tabs for logs? Added unnecessary layers of sub-navigation and progressive disclosure.
Overlooking legibility & formatting. Lack of visual treatment to support longer entries and text wrapping.
After — Your logs, your way.
My design direction focused on enhancing legibility and glanceability through the addition of valuable elements like timestamps, iconography, and monospace type (Figure 7.1).

The introduction of a toggle filter to replace the previous tab pattern would enable users to fine-tune the complexity of their logs (Figure 7.2) through a one-time interaction rather than having to constantly tab between log sets.
7.1 New Stream Logs module.
IMAGE
7.2 Stream Logs module interactions.
VIDEO LOOP
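To make the toggle-filter idea concrete, here is a minimal TypeScript sketch, assuming hypothetical log categories and entry shapes rather than Test Hub's real data model.

```typescript
// Hypothetical sketch of the toggle-filter idea: instead of tabbing between
// separate log sets, the user enables or disables categories once and the
// single log view updates accordingly.

type LogCategory = "system" | "gameplay" | "network";

interface LogEntry {
  timestamp: number;    // rendered as a monospace timestamp in the UI
  category: LogCategory;
  message: string;
}

function filterLogs(entries: LogEntry[], enabled: Set<LogCategory>): LogEntry[] {
  return entries.filter((entry) => enabled.has(entry.category));
}

// Example: a user toggles "network" off and keeps everything else visible.
const visible = filterLogs(
  [
    { timestamp: 1021, category: "system", message: "Instance ready" },
    { timestamp: 1043, category: "network", message: "Packet loss spike" },
  ],
  new Set<LogCategory>(["system", "gameplay"]),
);
```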
MODULE 3 — DATA GRAPHS
The new bread & butter.
Before — Low engagement.
Simply put, current users were not getting much use out of the graphs because they surfaced limited insights (Figure 8.0).

However, I felt that with a few tweaks, they had the potential to better convey real-time information and attract users who prefer visual representation.
8.0 Test Hub Beta graph.
IMAGE
Ocular perception. The graph updated from the right, but labels were on the left.
Limited glanceability. A hover interaction was required to view specific readings, causing friction especially when testing in fullscreen.
After — Enhanced glanceability and scalability.
The addition of live numeric metrics improved the visual prominence of the graph cards and minimized the need for additional user interaction (Figure 8.1).

The new graph cards were componentized and documented in detail to make them as versatile as possible in preparation for implementation.
8.1 New graph card component.
INTERACTIVE
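As a hedged sketch of what such a graph card contract might look like, the live numeric reading is a first-class property; the prop names below are illustrative assumptions, not the documented component spec.

```typescript
// Hypothetical sketch of a graph card contract: the live numeric reading is
// part of the props so the current value is glanceable without hovering the plot.

interface GraphCardProps {
  label: string;                                    // e.g. "GPU usage"
  unit: string;                                     // e.g. "%"
  currentValue: number;                             // rendered as a large live metric
  samples: { timestamp: number; value: number }[];  // feeds the plot, newest on the right
  thresholds?: { warn?: number; critical?: number }; // optional visual emphasis
}

// Example instance for a GPU usage card.
const gpuCard: GraphCardProps = {
  label: "GPU usage",
  unit: "%",
  currentValue: 62,
  samples: [{ timestamp: 0, value: 58 }, { timestamp: 1000, value: 62 }],
  thresholds: { warn: 80, critical: 95 },
};
```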
Giving users more control over the graph cards and nesting them within a parent Performance Graph module (Figure 8.2) enabled powerful customization options down the road.
8.2 Performance Graphs module.
IMAGE
Time as a universal constant.
Grouping graphs together felt logical, but we knew more needed to be done to help developers generate deeper insights.

The aim was to find a way to surface relationships between seemingly unrelated data sets (Figure 8.3), assisting developers in identifying potential root causes of bugs.
8.3 Performance Graphs, insight.
IMAGE
KEY INSIGHT
Every piece of data that Test Hub recorded had an associated timestamp. What if we could transform that technical feature into a visual design cue?
I conceptualized a new Performance Timeline module (Figure 8.4) based on that very premise — essentially reformatting the graph cards to be stacked vertically.

This module was precisely structured, particularly with the use of monospace type, to ensure perfect alignment.
8.4 Performance Timeline module.
INTERACTIVE
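A minimal sketch of the timestamp-as-alignment premise, assuming a simplified reading shape: group every reading by its timestamp so the timeline can render perfectly aligned rows across otherwise unrelated metrics.

```typescript
// Hypothetical sketch of the timestamp-as-alignment idea: readings from
// unrelated metrics are bucketed by a shared timestamp so the timeline can
// render them as aligned rows.

interface Reading {
  metric: string;     // e.g. "GPU usage", "Frame time"
  timestamp: number;  // milliseconds since the start of the playtest
  value: number;
}

function groupByTimestamp(readings: Reading[]): Map<number, Reading[]> {
  const rows = new Map<number, Reading[]>();
  for (const reading of readings) {
    const row = rows.get(reading.timestamp) ?? [];
    row.push(reading);
    rows.set(reading.timestamp, row);
  }
  return rows;
}
```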
CUSTOMIZATION
The future of Test Hub is highly personal.
Elevations of customization.
The module model creates four elevations which offer developers more flexibility when customizing Test Hub (Figure 9.0).
9.0 Customization elevation model.
IMAGE
Data component level.
Exclusive to the Performance Graphs module, this is the most granular level of customization and helps support complex and very specific workflows, e.g. pinpointing when GPU usage drops below 35% (Figure 9.1).
9.1 Graph customization interaction.
VIDEO LOOP
Data module level.
Each module’s customization options vary based on the category of data it houses, e.g. configuring the card layout within the Performance Graphs module (Figure 9.2).
9.2 Module customization interaction.
VIDEO LOOP
Workspace level.
At the highest level, developers can focus on larger data sets by configuring entire modules themselves, e.g. deleting unwanted modules and resizing more contextually important ones (Figure 9.3).
9.3 Workspace customization interaction.
VIDEO LOOP
9.4 Customization drag & drop logic.
IMAGE
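One way to picture the drag & drop logic is to treat the workspace layout as plain data; the sketch below is an illustrative assumption, not the shipped implementation.

```typescript
// Hypothetical sketch of workspace-level customization as plain data: a layout
// describes which modules are present, their order, and how much space each gets,
// so deletion or drag-and-drop reordering is just a transformation of this structure.

interface ModulePlacement {
  moduleId: string;  // e.g. "video-preview", "stream-logs"
  span: 1 | 2 | 3;   // relative width in the workspace grid
}

type WorkspaceLayout = ModulePlacement[];

function removeModule(layout: WorkspaceLayout, moduleId: string): WorkspaceLayout {
  return layout.filter((placement) => placement.moduleId !== moduleId);
}

function moveModule(layout: WorkspaceLayout, from: number, to: number): WorkspaceLayout {
  const next = [...layout];
  const [moved] = next.splice(from, 1);
  next.splice(to, 0, moved);
  return next;
}
```

Because the layout is just data, saving a personal workspace or resetting to a default becomes a matter of persisting and restoring this structure.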
Test Hub Next adapts to you.
Whether a user is checking up on something small (Figure 9.5), or crushing a massive bug (Figure 9.6), Test Hub Next can adapt across varying levels of technical expertise, workflow demands, and personal preferences.
9.5 Test Hub Next, coffee shop.
IMAGE
9.6 Test Hub Next, desk set-up.
IMAGE
SUPPORTING PANE
A cozy place for everything else.
Renovation for existing actions:
Run info. Metadata is relocated and grouped by type — becoming supplementary.
Run controls. Updated with M3 characteristics and the ability to make bulk configurations.
10.0 Info and Controls panes.
VIDEO LOOP
A welcoming place for new actions:
Certification. The ability to benchmark performance with configurable pseudo-tests.
Artifacts. A chip-sorted repository of every artifact generated from a playtest.
10.1 Certification and Artifacts panes.
VIDEO LOOP
MORE VISIONS
What happens after a playtest?
You review them!
To expand on Test Hub Beta's existing ability to store textual logs, we aimed to envision a feature where entire playtest sessions could be recorded.

To minimize change management while remaining on brand, it was a no-brainer to adopt Google Drive’s existing file storage patterns (Figure 11.0).
11.0 Test Hub Next reviewing.
VIDEO LOOP
Teamwork makes the dream work.
Treating playtests like shareable documents (Figure 11.1) enables developers to collaborate for faster debugging.
11.1 Sharing a playtest.
IMAGE
Time and time again.
The experience of reviewing a playtest was akin to watching a YouTube video.

By using a similar UI pattern to the Performance Timeline module (Figure 11.2), recorded events are able to serve as chapters that users can jump to.

In essence, users have access to a full-picture overview of the video, textual logs, and graphs simultaneously at any given timestamp.
11.2 Review timeline.
IMAGE
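A small sketch of the synchronized review idea, under the assumption that every recorded event carries a timestamp; jumping to a chapter simply scrubs the video and narrows logs and graphs to the same moment. The names and window size are hypothetical.

```typescript
// Hypothetical sketch of the synchronized review idea: jumping to a recorded
// event scrubs the video and filters logs and graph samples to the same moment.

interface RecordedEvent {
  label: string;      // e.g. "Crash", "Level loaded"; serves as a chapter marker
  timestamp: number;  // milliseconds into the recorded playtest
}

interface ReviewState {
  videoPosition: number;
  visibleLogs: { timestamp: number; message: string }[];
  graphWindow: { start: number; end: number };
}

function seekTo(
  event: RecordedEvent,
  allLogs: { timestamp: number; message: string }[],
  windowMs = 10_000,
): ReviewState {
  return {
    videoPosition: event.timestamp,
    visibleLogs: allLogs.filter((log) => Math.abs(log.timestamp - event.timestamp) <= windowMs),
    graphWindow: { start: event.timestamp - windowMs, end: event.timestamp + windowMs },
  };
}
```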
RETROSPECTIVE
Incoming plot twist...
Progress was lookin' awesome!
A Test Hub Next design prototype was presented at an all-hands in mid-September 2022 and was well-received. The team really wanted to start building it.
A HUGE SUCCESS
It excited many stakeholders — it fulfilled our PM’s vision, garnered engineering support, and set a critical stepping stone for Stadia’s future.
Not the ending we hoped for.
On September 29, 2022, Google officially announced Stadia's shutdown (Figure 12.0), impacting Immersive Stream and Test Hub.

The entire Stadia team would be disbanded.
12.0 Stadia's sunset announcement.
IMAGE
I still learned a lot though!
01
Working with research is a cheat code
It helped uncover opportunities to explore and led to quick and informed design decisions.
02
Ambiguity can be a blessing
Not having a concrete direction pushed me to be creative and explore big ideas that led to fun and unexpected solutions.
03
Whiteboards are awesome
Being in the same physical space and seeing collective ideas visually unfold led to some of the most fruitful conversations I've ever had.