
SKY OS, Voice Activation 

Role

Tools

Context

UX Designer/Team Lead (Research, Ideation, Wireframing, Prototyping, Testing & Presenting to Stakeholders)

Figma/FigJam, Slides & Protopie

King's College London, UX/UI Career Accelerator, Employer Project, April-June 2025. 

Section 0_ Title Screen.jpg

Duration 

6 weeks

PROJECT OVERVIEW

The primary task was to design a voice control experience that streamlines navigation and playback. In practice, this means enabling users to trigger key functions through natural voice commands – for example, when a user says, “Play an Idris Elba movie”, the system should display relevant search results and initiate playback with clear audible feedback and on-screen cues.

Sky-Glass-Gen-2-(12).jpg

 "Deliver a solution that offers the best product experience for customers when using voice control for a worldwide audience on Sky OS, driving feature usage and customer satisfaction."

Project Timeline

Project timeline & Research.jpg

DISCOVERY

Competitor Research

To understand the competitive landscape and identify opportunities for innovation in the Sky OS experience, we conducted a comprehensive analysis of direct and adjacent products in the smart TV, accessibility, and voice-interface domains.

 

The goal was to benchmark against industry standards and gather insights on usability, accessibility, and functionality.

marketanalysis.jpg

Initial Problem Statement

Users of all abilities, including those with physical, cognitive, or speech impairments, struggle to navigate and control Sky TV using voice commands when interacting with the current Sky Q service.

This creates barriers to accessibility and inclusivity, particularly during hands-free use, and leads to frustration and exclusion. It also limits Sky's ability to offer a secure, innovative, and universally accessible experience, impacting user trust, satisfaction, and brand reputation while falling short of Sky's future-forward, inclusive design goals.

User Interviews

Each team member conducted 1-2 initial user interviews during this discovery phase to develop further insight into how users interact with current voice activation systems across Sky OS and other products on the market, such as Alexa, Siri and Apple TV.

Key Insights

Interview Key Insights 2.jpg
Interview Key Insights 1.jpg

Pain Points 

Drawing on the insights from our conversations with potential users and our analysis of the personas from Sky's product team, we created this empathy map to identify the recurring pain points of the current Sky OS and of voice activation systems in general.

EP_EMPATHY MAP.jpg

Takeaways

Shorten Voice Responses

Reduce the length of system voice replies to improve pacing and minimise user fatigue.

Expand Settings Flexibility

Enable broader or customisable settings saving (e.g., apply subtitle size across all apps or shows).

Improve Input Interpretation 

Enhance natural language processing to better understand informal phrases (e.g., “make subtitles a bit smaller”).

Clarify System Feedback

Provide more intuitive, real-time feedback using both visual and auditory cues.

Support Multiple User Profiles

Allow multiple user profiles with unique and personalised accessibility settings.

Add Error Handling Feedback 

Incorporate haptics or subtle visual cues when the system doesn’t understand or hear the user.

INITIAL IDEATION - 'BLUE SKY THINKING' 

In this project, we were encouraged to begin by ideating broadly, without constraints.

Some of these ideas included:

  • a need to improve the remote control design to cater to a broader range of needs

  • LLM training features that allow the system to understand the nuances of different users' voices and language patterns

  • Mixed reality glasses that displayed programme subtitles

Ideation 1.jpg
Blue Sky Thinking .jpg

Further ideas included:

  • Integrating existing assistive health devices

  • The use of regular onboarding tutorials

  • Using voice shortcuts for better system recovery


However, to create a feasible, testable prototype with our resources, we created more UI-focused designs.

'Jobs to be Done' framework

HOWMIGHTWE.jpg

DEFINE

Refined Problem Statement

We refined the problem statement by combining our background research, personas, and interview insights; we wanted to be realistic about the insights we had access to.

 

Current voice control systems on Sky TV are underused by many people with accessibility needs due to inconsistent recognition, unclear feedback, and a lack of personalisation.

Users often feel frustrated repeating themselves or having to rephrase commands unnaturally. Accessibility settings are hidden in menus and not easily configurable by voice.

Users who consistently rely on subtitles - whether due to hearing loss, sensory sensitivity, or language needs - are often frustrated that subtitles don’t stay enabled across apps or sessions. When voice commands like “turn on subtitles” are misinterpreted or unrecognised, users are forced to navigate complex menus to activate a core accessibility feature.

Storyboard

To empathise with the impact of a complicated or unresponsive voice-controlled user experience, we created a storyboard exploring the frustrations of Zora:

User Flow Diagram

After ideating as a team and individually, we decided to focus on creating features in the TV experience that would improve the watching experience of subtitle users. The goals within the flow are:


  • Open user profile 

  • Play 'The Last of Us' 

  • Turn on Subtitles using voice control

  • Change the size of subtitles

  • Resume playing

Final User Flow - 6.1.4 Presentation.jpg

DESIGN

Mid Fidelity Wireframes & Prototype

Our mid-fi prototype showcases the layout, focusing on the profile page and on how the subtitle settings sit within the playback experience. The colour version is important to show here: it demonstrates how on-screen content can alter the viewing experience of subtitles and their selection settings.

Colour Wireframes_edited.png
  • These wireframes present a visual representation of how users can quickly view their accessibility settings.

  • The design aims to provide clarity and ease of access for users to understand their current settings at a glance.

Wireframes.jpg

We identified an opportunity here to conduct further testing to explore different display options, such as size variations labelled Small to Large, or using percentage increases.

TEST

Usability Testing

What were we testing?

We wanted to test whether users could successfully complete tasks using only voice control, to identify edge cases in natural speech recognition, and to gauge the need for user personalisation.

Who were our participants?

5 participants aged 25–45 with varied tech confidence and usage habits, including everyday users, accessibility-focused users, and voice tech power users.

Why test?
  • To validate that the voice interface supports hands-free navigation for diverse users

  • To identify pain points with recognition, feedback, and preference saving

  • To ensure the system accommodates accessibility needs like subtitle customisation

Method:
Moderated, task-based voice control test using the mid-fidelity prototype (Wizard of Oz method).

Participants:

6 users aged 25–45 with varying voice tech familiarity

Focus: Natural voice interaction, accessibility settings, playback control

Scenario: Users asked Sky to play a show, enable subtitles, adjust size, and save preferences using only their voice.

Usability Test Results and Metrics

01 Satisfaction
  • Avg. Satisfaction: 4.4 / 5

  • Avg. Confidence: 4.0 / 5

  • “Subtitles preview was so useful” – Ryan

02 Efficiency
  • Average task time (profile to playback): 28 seconds

  • Users found subtitle commands slightly slower due to percentage clarification

03 Recovery
  • Most users didn’t need to repeat commands

  • On-screen hints rated helpful by 4/5 users

04 Effectiveness
  • 100% of users completed all tasks via voice

  • One user needed to rephrase “make subtitles smaller”


Qualitative Feedback

user test feedback 2.jpg
user test feedback 1.jpg
user test feedback 3.jpg

Actionable Takeaways 

01 Shorten Voice responses

Reduce the length of system voice replies to improve pacing and minimise user fatigue.

02 Clarify System Feedback

Provide more intuitive, real-time feedback using both visual and auditory cues.

03 Expand Settings Flexibility

Enable broader or customisable settings saving (e.g. apply subtitle size across all apps or shows).

04 Improve Input Interpretation 

Enhance natural language processing to better understand informal phrases (e.g., “make subtitles a bit smaller”).

05 Support Multiple User Profiles

Allow multiple user profiles with unique and personalised accessibility settings.

06 Add Error Handling Feedback

Incorporate haptics or subtle visual cues when the system doesn’t understand or hear the user.

ITERATE

Design Iteration

DESIGN ITERATION 2.jpg
DESIGN ITERATION 1.jpg
DESIGN ITERATION 3.jpg
DESIGN ITERATION .jpg

Updated Storyboard - Happy Path

NEW STORYBOARD1.jpg
NEW STORYBOARD 2.jpg

CONCLUSION

Once completed, the usability testing revealed clear patterns.
Users loved quickly jumping into content without menus or remotes.

 

But friction remained:
Some had to rephrase commands, wait through long responses, or navigate unclear options like subtitle sizing.


Visual cues now show when the system is listening, and phrasing is more flexible—commands like “make subtitles a bit smaller” now work more naturally.

These refinements make the experience faster, more human, and more accessible, though further improvements can still be made to the inclusivity of the system.

CONCLUSION.jpg