What we're doing

Spoken interfaces appear to be emerging as a class of device and service which manufacturers are committed to and people are actually using. Amazon Alexa is an obvious example, but Siri and Google Now are also gaining traction, and Google recently announced Home, its own competitor to Alexa and Siri.

These devices represent an opportunity for a kind of personal, connected radio which the BBC is well placed to explore: we're already a familiar voice in homes across the UK thanks to our radio output, which puts us in a unique position to engage listeners in a two-way spoken conversation. Audiences gain by having well-thought-out content on their devices that can inform, educate and entertain alongside the inevitable slew of commerce-driven applications.

Talking with Machines is a project which will explore the possibilities of these devices and platforms in terms of content, interaction design, and software development patterns. We hope to learn enough to support other devices of this type and, ultimately, to build a platform offering generic support for them.

BBC Taster - Try The Inspection Chamber

BBC R&D - The Unfortunates: Interacting with an Audio Story for Smart Speakers

Alongside this practical work, we’ll be experimenting with prototypes and sketches in hardware and software to explore the types of interaction and content forms that these devices allow. There’s also the intriguing possibility of developing a prototyping method based on humans roleplaying the part of the device, since interactions with these devices should resemble a natural human conversation. We have already done some work on a similar method for Radiodan, which we may look to build upon.

Talking with Machines has a few goals:

  • To develop a device-independent platform for supporting spoken interfaces (a rough sketch of what such an abstraction might look like follows this list)
  • To build knowledge in R&D (and pass it on to the wider BBC) around spoken interfaces:
    • conceptual models: how to think about spoken applications
    • software development patterns
    • UX and interaction design patterns for spoken interfaces
    • what kinds of creative content work well for speech-based devices, and how to structure creative applications for this context
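
As a rough illustration of the first goal, the sketch below shows one way a device-independent layer might be structured: application logic is written once against a platform-neutral request/response format, and each device platform gets a small adapter that translates to and from its native format. This is purely a hypothetical sketch in TypeScript; none of the types, names or formats here reflect an actual BBC platform or any vendor's SDK.

```typescript
// Hypothetical sketch of a device-independent layer for spoken interfaces.
// All names are illustrative, not taken from any real platform or SDK.

// A platform-neutral representation of one turn of conversation.
interface SpokenRequest {
  intent: string;                 // e.g. "PlayEpisode"
  slots: Record<string, string>;  // e.g. { programme: "The Archers" }
  sessionId: string;
}

interface SpokenResponse {
  speech: string;      // text to be synthesised by the device
  endSession: boolean; // whether the conversation continues
}

// Application logic is written once, against the neutral types.
type IntentHandler = (request: SpokenRequest) => SpokenResponse;

// Each device platform gets an adapter translating its native formats
// to and from the neutral ones.
interface DeviceAdapter<NativeRequest, NativeResponse> {
  toSpokenRequest(native: NativeRequest): SpokenRequest;
  fromSpokenResponse(response: SpokenResponse): NativeResponse;
}

// A toy native format, just to show the shape of an adapter.
interface ToyNativeRequest { name: string; params: Record<string, string>; session: string; }
interface ToyNativeResponse { ssml: string; keepAlive: boolean; }

const toyAdapter: DeviceAdapter<ToyNativeRequest, ToyNativeResponse> = {
  toSpokenRequest: (native) => ({
    intent: native.name,
    slots: native.params,
    sessionId: native.session,
  }),
  fromSpokenResponse: (response) => ({
    ssml: `<speak>${response.speech}</speak>`,
    keepAlive: !response.endSession,
  }),
};

// One handler, usable behind any adapter.
const handlePlayEpisode: IntentHandler = (request) => ({
  speech: `Playing the latest episode of ${request.slots.programme ?? "your programme"}.`,
  endSession: true,
});

// Wiring it together for a single turn.
const nativeRequest: ToyNativeRequest = {
  name: "PlayEpisode",
  params: { programme: "The Archers" },
  session: "abc-123",
};
const nativeResponse = toyAdapter.fromSpokenResponse(
  handlePlayEpisode(toyAdapter.toSpokenRequest(nativeRequest))
);
console.log(nativeResponse.ssml); // <speak>Playing the latest episode of The Archers.</speak>
```

The point of the adapter split is that content and interaction logic stay reusable across devices, with only the thin translation layer varying per platform.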

BBC R&D - Prototyping for Voice UI

BBC R&D - Prototyping for Voice: Methodology

BBC R&D - Prototyping for Voice: Design Considerations

Why it matters

There’s links from this project to a few streams of work happening in R&D. As we start to understand the speech-to-text challenges and move towards building our own engine, there’s a lot of opportunity to work with our IRFS section’s Data team who are working on similar projects. In a more general sense, there’s work around natural language processing that we could use and push forward. We currently have a PhD intern who will be working on modelling voices of BBC talent from large amounts of BBC Redux content, which could be interesting to play with in the context of spoken interfaces.

There’s also potential overlaps with discovery work around finding BBC media and personalisation, choosing what to watch/listen and even structured stories (e.g. interrogating a news story). One of the stranger (but fun sounding) suggestions we’ve had is a Socratic dialogue simulator for the Ideas Service!

BBC Academy - Talking to the internet: Digital assistants and the media

The interactive radio aspects of these devices resemble work done in our North Lab on Perceptive Media and Squeezebox, and there's a lot we can learn from Radiodan.

There is a lot of interest in conversational UI and bots across the BBC, but this interest tends towards text-based, messenger-type interfaces. This project focuses on spoken interfaces, while learning from and contributing towards more general conversational UI work happening in the wider BBC.

The number of devices and platforms in the wild is expected to grow, and it's not hard to imagine a future in which an entirely new voice-driven platform opens up, either on mobile or on dedicated hardware. There's also a potentially large number of possible users: anyone with access to a device that supports a spoken interface and can play audio.

Our goals

The short-term goal is to prototype services we could offer; we hope this stream of work will drive the development of a platform designed to provide support and applications for speech-driven devices in general. Once we have a good, solid prototype, we would like to develop standalone applications (or add capabilities to a core platform) based on the earlier exploratory work, and to add support for other speech-driven devices.

We're also hoping to develop a set of UX tools and techniques to help us think about and design voice UI.

BBC R&D - The Inspection Chamber

BBC R&D - User Testing The Inspection Chamber

BBC R&D - Prototyping for Voice: Design

BBC R&D - Singing with Machines

BBC R&D - Better Radio Experiences

BBC R&D - The Mermaid's Tears

BBC R&D - Audio Research

BBC R&D - Responsive Radio

BBC R&D - Object-Based Media

BBC R&D - ORPHEUS

Amazon Developer - Alexa

Google Developers - Assistant

Apple Developer - Siri + Apps

BBC News: OK Google - who will win the AI wars?

BBC News: Hands-on with Amazon's British-accented Echo speaker

BBC News: Apple brings Siri to Macs

This project is part of the Internet Research and Future Services section
