This is the CHI 2014 presentation for the paper:
Andy Brown, Michael Evans, Caroline Jay, Maxine Glancy, Rhianne Jones, and Simon Harper. 2014. HCI over multiple screens. In CHI '14 Extended Abstracts on Human Factors in Computing Systems (CHI EA '14). ACM, New York, NY, USA, 665-674. DOI=10.1145/2559206.2578869 http://doi.acm.org/10.1145/2559206.2578869
http://www.cs.man.ac.uk/~jayc/final.pdf
It discusses the challenges of determining where attention is focused during TV viewing, and describes a dual-device eye tracking experiment that addresses this issue.
See the presentation in action on YouTube:
https://www.youtube.com/watch?v=c7jhoqupKJY
1. HCI Over Multiple Screens
Andy Brown, Michael Evans, Caroline Jay,
Maxine Glancy, Rhianne Jones, Simon Harper
Web Ergonomics Lab, University of Manchester
BBC R&D, MediaCityUK
Presenter: Caroline Jay
caroline.jay@manchester.ac.uk
Research funded by the EPSRC Knowledge Transfer Account and Relationship Incubator.
3. Interaction Model
• Desktop, Web and social media
– Lean forward
• Newspaper, film and television
– Lean back
• Two or more screens
– Lean back and lean forward
– Lean back and lean back
– Lean forward and lean forward
5. Technical issues
• Can we track eye movement over two screens?
• Is the set-up ecologically valid?
6. Data validity
• Good calibration
• Good match between eye tracking data and
video analysis
• Good match between data collected with and
without eye tracking
10. Challenges
• Eye tracking is accurate, but only suitable for the
lab
– Currently investigating logging data and interaction on the device (see the sketch after this list)
• Many factors to consider:
– Interaction
– Content
– Environment
• If we can effectively monitor these in the wild…
– Privacy
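Purely as a hedged illustration of what "logging data and interaction on the device" might involve, here is a minimal sketch of an on-device event logger; the event names, schema, and file format are all hypothetical, not part of the study.

```python
# Hypothetical sketch: a minimal interaction logger that timestamps
# events (taps, content updates) for later analysis. The event names
# and the line-per-JSON-record format are illustrative assumptions.
import json
import time

class InteractionLogger:
    def __init__(self, path):
        self.path = path

    def log(self, event, **details):
        record = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")  # one JSON record per line

# Example usage (hypothetical event names):
logger = InteractionLogger("session.log")
logger.log("tap", target="quiz_answer_2")
logger.log("content_update", item="fact_7")
```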
11. Driving future media development with
empirical models
Find out more tomorrow at CHI 2014
Predicting whether users view updating
content on the World Wide Web (and beyond…)
Session: Interacting with the Web
714AB: 11.00 – 12.20
Find out more on the Web
Slides, papers, tech reports, data:
http://goo.gl/1h4z4K
caroline.jay@manchester.ac.uk
The Web Ergonomics Lab
The University of Manchester, UK
http://wel.cs.manchester.ac.uk/
Editor's Notes
This alt.chi presentation considers how we move from the historical HCI paradigm of task-based interaction with a desktop computer, to research which considers interaction with more than one device in an unconstrained entertainment setting.
The particular focus of our current research is on understanding how people orient their attention between the main screen and a companion device, such as a mobile phone or tablet, while they’re watching TV.
Watching TV now involves more than one screen.
People have been using second screens (mobile devices such as tablets or phones) for a while, but much of this use has been viewer-led, e.g. looking up additional information or using social media.
Broadcasters are really keen to exploit this, so they are starting to develop companion content for what's happening on the main screen.
Here we can see Secret Fortune, which viewers can play along with at home. Broadcasters want to go way beyond this, but at the moment this type of interaction is not well understood.
There has been second-screen research, but it has mostly looked at the social aspects of second-screen use.
What we don’t have are models describing cognitive and perceptual aspects of multiple device interaction: how do people split their attention between devices? What are the factors that influence attention orientation?
This is what we’re trying to investigate with this work.
When we think about media consumption, there are two main interaction models.
So within a TV viewing scenario, there are potentially lots of different types of interaction, and lots of different types of activities.
How should we start to investigate this situation?
We were working on this research with the BBC, the primary TV network in the UK, which also produces a lot of programs that go out worldwide. They are really interested in this space, and had already produced a prototype companion app for 'Autumnwatch', a popular nature show that goes out between September and November in the UK.
Our approach was to get people to watch the program with the app, and observe what happened. Because we were interested in understanding attention orientation, we decided to track their eye movements during the study, so we could work out which device they were looking at.
So the first obvious technical issue is: can we track eye movements in this scenario? We wanted to use free-standing eye trackers, as they are less intrusive than head-mounted ones, but they are essentially designed to be used one at a time with a desktop display. We used two Tobii eye trackers: one mounted below the tablet, which was fixed in a clamp, and one in front of the TV.
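As a rough illustration of how the two gaze streams might be combined, here is a minimal Python sketch. It assumes each tracker exports timestamped samples with a validity flag to CSV; the column names, the file format, and the "whichever tracker reports valid gaze" heuristic are all assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: combine gaze streams from two trackers to label
# attention per time bin. Field names and formats are assumptions.
import csv

def load_samples(path, device):
    """Load gaze samples from one tracker's (assumed) CSV export."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append({
                "t": float(row["timestamp_ms"]),  # assumed column name
                "valid": row["validity"] == "0",  # assumed: 0 = good sample
                "device": device,
            })
    return samples

def classify_attention(tv, tablet, bin_ms=500):
    """Label each bin 'tv', 'tablet', or 'away', according to which
    tracker reported valid gaze during that bin."""
    merged = sorted(tv + tablet, key=lambda s: s["t"])
    if not merged:
        return []
    labels, t, end = [], merged[0]["t"], merged[-1]["t"]
    while t <= end:
        seen = {s["device"] for s in merged
                if t <= s["t"] < t + bin_ms and s["valid"]}
        if "tv" in seen:            # bins where both trackers report
            labels.append("tv")     # valid gaze are assigned to the TV
        elif "tablet" in seen:
            labels.append("tablet")
        else:
            labels.append("away")   # neither tracker saw valid gaze
        t += bin_ms
    return labels

# Usage, assuming the two (hypothetical) export files exist:
# labels = classify_attention(load_samples("tv.csv", "tv"),
#                             load_samples("tablet.csv", "tablet"))
```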
The second issue is: is the set-up ecologically valid? We've set up the lab to look like a living room, but there are two obvious problems. One is that the tablet is clamped, which may restrict the participant's willingness to interact with it; the second is that we are still in a lab, not in someone's home.
The false-positive rate was 6% of segments.
The internal validity of the eye tracking data was pretty good.
Participants primarily fixated faces on the TV, and text on the tablet, as we might expect, which shows that the calibration was reasonably accurate.
The tablet had a camera mounted above it, and we performed a painstaking analysis where we checked, for every half-second period, whether the video and eye-tracking data agreed on whether the participant was fixating the tablet.
This showed that the gaze tracking was pretty accurate, apart from a few cases where the eye tracker mounted beneath the tablet was occluded by the participant's hand. All in all, though, the data were pretty good, so we can see that eye tracking provides a quick and accurate means of monitoring attention.
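As a minimal sketch of this segment-by-segment comparison, assuming both the video coding and the gaze data have already been reduced to one boolean per half-second segment (True meaning "fixating the tablet"), the agreement and false-positive rates could be computed along these lines; the helper and its toy inputs are hypothetical.

```python
# Agreement between video coding and eye tracking, per 0.5 s segment.
# Inputs are hypothetical booleans (True = participant on the tablet).
def agreement_stats(video_on_tablet, gaze_on_tablet):
    assert len(video_on_tablet) == len(gaze_on_tablet)
    n = len(video_on_tablet)
    agree = sum(v == g for v, g in zip(video_on_tablet, gaze_on_tablet))
    # False positive: gaze says 'tablet' but the video coding does not.
    false_pos = sum((not v) and g
                    for v, g in zip(video_on_tablet, gaze_on_tablet))
    return {"agreement": agree / n, "false_positive_rate": false_pos / n}

# Toy example (not the study's data):
print(agreement_stats([True, True, False, False, True, False],
                      [True, True, False, True, True, False]))
# -> {'agreement': 0.833..., 'false_positive_rate': 0.166...}
```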
So what about the external validity of the data? We were concerned that mounting the tablet in a clamp so that it could be used with the eye tracker would restrict the extent to which people interacted with it. To check this, we ran a second experiment, without any eye tracking. The video analysis results from these two data sets were highly correlated, so we're reasonably confident that interaction wasn't substantially restricted by the set-up.
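To illustrate the kind of check involved, here is a small sketch correlating per-interval tablet-attention proportions from the two experiments. Pearson's r and the toy numbers are illustrative assumptions; the study's actual statistic and data granularity may differ.

```python
# Correlate interaction measures from the two experiments.
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: proportion of participants on the tablet per interval.
with_tracking    = [0.10, 0.35, 0.60, 0.40, 0.15]
without_tracking = [0.12, 0.30, 0.55, 0.45, 0.20]
print(pearson_r(with_tracking, without_tracking))  # ~0.97 on this toy data
```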
So what did the split of attention look like? This graph shows the percentage of participants who were looking at the TV (along the top) or the tablet (along the bottom), in 5-second intervals. One thing we can see is that updates to the content on the tablet, shown by the thick black lines, drew participants' attention.
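As an illustration of how such a graph could be derived, assuming each participant's gaze has already been labelled per 5-second interval (e.g. by a classification step like the one sketched earlier), the percentages could be computed as follows; the function and label names are hypothetical.

```python
# Percentage of participants attending to each device per 5 s interval.
# labels_by_participant: one label sequence per participant, with one
# label ('tv', 'tablet', or 'away') per interval. Names are assumed.
def attention_percentages(labels_by_participant):
    n = len(labels_by_participant)
    n_intervals = len(labels_by_participant[0])
    tv_pct, tablet_pct = [], []
    for i in range(n_intervals):
        column = [labels[i] for labels in labels_by_participant]
        tv_pct.append(100 * column.count("tv") / n)
        tablet_pct.append(100 * column.count("tablet") / n)
    return tv_pct, tablet_pct

# Toy example with three participants and four intervals:
print(attention_percentages([["tv", "tv", "tablet", "tv"],
                             ["tv", "tablet", "tablet", "away"],
                             ["tv", "tv", "tv", "tablet"]]))
```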
There were, of course, other factors that drew attention too. Here's an example of one of them.
We can see that
We have explored the methodological issues here, but haven't yet explored what the data actually mean. We will do this tomorrow.