Research & Development

Posted by Libby Miller

The bi-weekly notes from the Internet Research and Future Services section. This week: avoiding useless cases.

This week some of us have been looking into how we might prototype aspects of the MediaScape work. MediaScape is an EU-funded project about web-developer-friendly standards for connected devices. We've spent time in the project making sure we have suitable usecases that describe the scope of the project from the point of view of the end user. These initially take the form of scenarios describing what people might do with MediaScape components in the future, which are then broken down into more detailed components. Usecases in both the broad and narrow sense are required for many EU-funded and similar projects, for various reasons, for example:

  • To ground the technical choices we make by situating them in real-world situations
  • To produce requirements for the technical work 
  • To ensure we have agreement among the partners in the project about what the project is for and why it's important

In IRFS we use prototyping - often code prototypes based on usecases - to ensure that the technical choices we make are suitable. But there's another step before we start to code.

Many usecases - in this sense of descriptions of scenarios that describe what people might do - are in fact 'useless cases'. They might appear plausible and convincing when written down, but when they are actually built they turn out to be broken in some way. Perhaps they are too niche, so that only one person in a hundred would use them, or they just involve implausible digital or physical interactions. Weeding these out - or in some cases refining them so that they are plausible and useful - is what we have started to do this week. Coding is expensive in terms of time and effort, so it's better to filter these out early if possible.

One way we've done this is to draw what happens in the scenario, as we did in NoTube, to try and bring out the detail and focus on the user. MediaScape is about physical devices, which adds much more complexity to the interactions. To start to understand these complexities, this week we built pretend versions of the devices out of scrap materials and took a series of photos based on a short script representing the interactions. This is a variant of pretotyping, a tool used by designers, where an inert item represents the object(s) to be designed. This part of the process puts the focus on the human user and gets us thinking about what the experience would look and feel like, and really brings out whether it would be "useless" or actually plausible and useful. The result - a very quickly made video - also provides a useful thing to show other people involved in the project, including our stakeholders in the MediaScape project and the BBC.

To bring out some of the technical aspects, we assigned each person the role of a component in the process. So for example one person (AndrewN in fact) was the user "Andy" - another (Chris Needham) played the role of his laptop, Lianne was the networked radio he wanted to control, Dan was a networked vacuum cleaner and Joanne, AndrewW and I were various radios and TVs on the same network. Giving those items a voice, and speaking their interactions out loud, teases out some of the complexities of a protocol in a way that is visible and obvious to everyone participating. It becomes very clear what each device needs to know and what the flow would be. As far as I'm aware, this process was invented by Dan Brickley and Vicky Buser in the NoTube project, but it's also a more physical version of a process Sean and Chris Lowis used for RadioTag.
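For the curious, here's a very rough Python sketch of the kind of discovery-and-control flow we were acting out: a "laptop" asks everything on the network to announce itself, then picks the radio to send a command to. The device names and message fields are made up for illustration - this is not the MediaScape protocol itself.

# A toy sketch of the discovery-and-control flow we acted out.
# Device names and message fields are invented; this is not MediaScape's protocol.

class Device:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind

    def announce(self):
        # What each device needs to say about itself when asked
        return {"name": self.name, "kind": self.kind}

    def handle(self, command):
        print(f"{self.name} ({self.kind}) received: {command}")

# The devices Lianne, Dan, Joanne, AndrewW and I were playing
network = [
    Device("kitchen-radio", "radio"),
    Device("vacuum-cleaner", "vacuum"),
    Device("living-room-tv", "tv"),
]

# "Andy's laptop": discover everything, filter to radios, then send a control message
announcements = [d.announce() for d in network]
radios = [d for d in network if d.kind == "radio"]
radios[0].handle({"action": "play", "station": "BBC Radio 4"})

Acting this flow out with people makes the same three steps - discovery, filtering, control - and the information each one needs very concrete.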

These two quick, cheap processes are about bringing the cost of prototyping down, so that we only write code when it is productive and useful to do so. We have the skills to build code prototypes at close-to-production quality, but unless there's a specific reason to do that - for example to test with many end-users - we need to focus on making just the pieces that serve the purpose at hand.

That's been our week in the Devices team within IRFS.

Meanwhile, in Highlights

Another way to avoid useless cases is to observe people in their own environment and see what they need to do to do their jobs. This is a good fit for applications of the Highlights project, to see if automated highlight discovery for sporting events can help with existing tasks happening within the BBC. Andrew Wood, Denise, and Lianne visited the Multimedia Sports Team in Salford to learn from their working process – specifically how Assistant Producers work when logging, editing and processing highlights during live sporting events.

Lianne says: "We observed four Assistant Producers, watched a simulated football match (England V Germany 2010 World Cup) and had a discussion during and after the match. This proved very insightful - they talked us through their decisions around which highlights to include in an edit, the extra clips they use and commentary. They also completed a questionnaire and provided feedback which helped to shape the analysis for the World Cup."

Automated Metadata Extraction News

Jana has been to the EBU for another working group meeting on automatic information extraction, and to give a presentation together with James Harrison on 'Large scale metadata extraction at BBC R&D' at the Metadata Developer Network Workshop.

She's also looked at matching scripted presenter prompts to radio transcripts obtained from automatic speech transcription using COMMA. We had good results previously locating continuity announcements between programmes, and the same approach also worked very well for locating the presenter prompts between individual items of Woman's Hour. This could lead to automatic segmentation of magazine-style programmes, allowing us to treat the individual items as units rather than the whole programme.
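As a rough illustration of the matching step (not the actual COMMA code), you can slide each scripted prompt along the noisy transcript and keep the best fuzzy-match position, for instance with Python's difflib. The prompt and transcript text below are invented.

# Rough illustration of locating a scripted prompt in a noisy ASR transcript.
# The texts are invented; the real pipeline runs over COMMA transcripts.
from difflib import SequenceMatcher

def locate_prompt(prompt, transcript_words, window=None):
    """Return (score, start_index) of the best fuzzy match for the prompt."""
    prompt_words = prompt.lower().split()
    window = window or len(prompt_words)
    best = (0.0, 0)
    for start in range(len(transcript_words) - window + 1):
        candidate = " ".join(transcript_words[start:start + window])
        score = SequenceMatcher(None, prompt.lower(), candidate).ratio()
        if score > best[0]:
            best = (score, start)
    return best

transcript = "and now on womans hour we turn to the history of knitting".split()
print(locate_prompt("Now on Woman's Hour, the history of knitting", transcript))

The positions of the best matches then give candidate boundaries between items.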

Yves and Thomas Nixon got a paper accepted at Interspeech (one of the largest speech technology conferences) about identifying speakers in an archive through locality-sensitive hashing in the World Service Archive.
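For readers unfamiliar with the technique, here's a toy sketch of the random-hyperplane flavour of locality-sensitive hashing: similar speaker feature vectors tend to get the same short binary signature, so candidate matches can be found without comparing every pair. The vectors and dimensions are made up and this is not the code from the paper.

# Toy sketch of random-hyperplane LSH for grouping similar speaker vectors.
# Vectors and dimensions are invented; this is not the Interspeech paper's code.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 64, 16
hyperplanes = rng.standard_normal((n_bits, dim))

def lsh_signature(speaker_vector):
    """Hash a speaker feature vector to a short binary signature."""
    bits = hyperplanes @ speaker_vector > 0
    return "".join("1" if b else "0" for b in bits)

# Two similar voices should usually land in (nearly) the same bucket
voice_a = rng.standard_normal(dim)
voice_b = voice_a + 0.05 * rng.standard_normal(dim)   # slightly perturbed copy
sig_a, sig_b = lsh_signature(voice_a), lsh_signature(voice_b)
diff = sum(a != b for a, b in zip(sig_a, sig_b))
print(f"signatures differ in {diff} of {n_bits} bits")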

Yves has spent most of the last two weeks adapting a new version of our automated tagging system Mango to work in COMMA, with noisy transcripts, no capitalisation, etc.

He's also working with the Linked Data Platform (LDP) team to get a hold of a good journalistic tagging evaluation set, which represents what they would ultimately want an automated system to do, rather than what they do right now.

He also had a play at predicting missing tags from last week's LDP data dump, using our theano_bpr library, which seems to work pretty well, except for a bias towards sport, which could probably be rectified by a different sampling algorithm.
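For a sense of what's going on under the hood, here's a very stripped-down numpy sketch of the Bayesian Personalised Ranking idea that theano_bpr is named after: learn article and tag vectors so that observed (article, tag) pairs score higher than unobserved ones. The data and the update loop are invented for illustration and this isn't theano_bpr's actual interface.

# Stripped-down sketch of BPR for predicting missing tags.
# Data and update loop are invented; not theano_bpr's actual API.
import numpy as np

rng = np.random.default_rng(1)
n_articles, n_tags, k, lr = 5, 6, 4, 0.05
observed = [(0, 1), (0, 2), (1, 2), (2, 4), (3, 0), (4, 5)]  # (article, tag) pairs

A = 0.1 * rng.standard_normal((n_articles, k))   # article factors
T = 0.1 * rng.standard_normal((n_tags, k))       # tag factors

for _ in range(2000):
    a, pos = observed[rng.integers(len(observed))]   # a seen pair
    neg = int(rng.integers(n_tags))                  # a sampled unseen tag
    if (a, neg) in observed:
        continue
    a_vec = A[a].copy()
    x = a_vec @ (T[pos] - T[neg])
    g = 1.0 / (1.0 + np.exp(x))                      # gradient of -log sigmoid(x)
    A[a]   += lr * g * (T[pos] - T[neg])             # push seen tag above unseen one
    T[pos] += lr * g * a_vec
    T[neg] -= lr * g * a_vec

scores = A @ T.T
print(np.argsort(-scores[0]))                        # tags ranked for article 0, best first

The "bias towards sport" problem shows up in the negative-sampling step above: sampling unseen tags differently changes which tags get pushed down.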

Also on COMMA, Rob's been business planning, working with Dom and our partners Somethin’ Else and Kite, establishing business models and a roadmap ahead of launch next year.

Sharing what we've learned

Lianne and Jiri shared some research that they saw at CHI 2014 (the Computer-Human Interaction conference) in Toronto. Jiri talked about personal histories - using the idea of a comic book to represent people's browsing histories in a more creative and engaging way, moving away from the traditional list format. Lianne discussed the importance of an effective ecosystem for connecting multiple devices - including new (and possibly future) ways that we can interact with smart devices, such as using the skin to control mobile technology. There will be four dedicated blog posts describing the research seen at CHI in more detail over the next few weeks.

Tristan and Zillah have been setting up loads of meetings around the BBC for Mythology Engine, Homefront and what Tristan's now calling "BBC Stories". Tristan has been working on a presentation and doing some prototyping using Medium.com. He's got access to the BBC Knowledge and Learning timeline CMS to do more prototyping, and has also been talking with Chris, Sean and Andrew about better ways to document project progress.

Links

Chris Needham and Matt Haynes joined the latest conference call of the W3C TV API Community Group, which is currently gathering use cases and requirements for a web API for TV tuner control.

Work with us: there's a Creative Director job at IRFS, though the closing date is today!

Goodnight! says the Radiodan