Attention Approximation: From the web to multi-screen television

The move towards the provision of television content over two or more screens represents an enormous opportunity and a considerable challenge. A scientific understanding of what causes people to switch attention between the main screen and a 'second screen' mobile device during television viewing is key to the development of this technology. This seminar describes how ‘attention approximation’, a technique we have used to model visual attention and design screen reader presentation of Web content, can be used to investigate viewing behaviour, and ultimately drive the provision of content across multiple screens.

Published in: Science, Technology
Slide notes
  • Watching TV now involves more than one screen.

    People have been using second screens, mobile devices such as tablets or phones, for a while, but much of this has been viewer-led – e.g. looking up additional info or social media.
    Broadcasters are really keen to exploit this, so they are starting to develop companion content for what’s happening on the main screen.

    Here you can see Secret Fortune, which viewers can play along with at home. Broadcasters want to go way beyond this, but at the moment this type of interaction is not well understood.

    There has been second-screen research, but mostly looking at the social aspects of second-screen use.

    What we don’t have are models describing cognitive and perceptual aspects of multiple device interaction: how do people split their attention between devices? What are the factors that influence attention orientation?

    This is what we’re trying to investigate with this work.

  • Could apply to different modalities.
    May be modelled, or detected on the fly.
  • Some previous work in this area
    Some controlled studies found that movement had an effect; others said it didn’t.
  • We realised pretty early on that designing a controlled study at the outset was not going to be possible. There is a plethora of dynamic updates – we had no idea what a typical update looked like – and we wanted to be able to deal with any of them.
  • Virtually all modelling based on predicting or understanding performance as a function of task.

    The study just described was an exception, as it was only loosely based on task (trying to find errors in a spreadsheet).

    In the real world, we are never going to know a person’s task. That’s not to say we ignore task – we may be able to infer what’s happening, and use that to help us understand behaviour – but it’s not possible to know, before somebody has done something, what they are going to do. This is especially true of complex web apps.

    Still useful to be able to predict behaviour though – not least because knowing how someone will respond to the perceptual characteristics of UI components could help with design.
  • 1486 updates
    585 validators
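The decision rules from this CHAID analysis, as reported on slides 12–14 of the transcript, can be sketched as a simple lookup. The thresholds and viewing percentages are taken from the slides; the function name and structure are illustrative, not the study's actual implementation.

```python
# A sketch of the CHAID decision rules reported on slides 12-14:
# probability that a dynamic update is viewed, split first on the
# triggering action, then on area (for clicks) or on-screen duration
# (for everything else). Thresholds and percentages are from the
# slides; the function itself is illustrative.

def p_viewed(action, area_cm2, duration_s):
    """Estimated probability that a dynamic update is fixated."""
    if action == "click":  # base rate 77%
        if area_cm2 < 1.1:
            return 0.39
        if area_cm2 < 7.8:
            return 0.71
        if area_cm2 < 32.9:
            return 0.90
        return 0.99
    if action in ("keystroke", "enter", "hover"):  # base rate 41%
        if duration_s < 0.6:
            return 0.16
        if duration_s < 1.2:
            return 0.41
        if duration_s < 2.8:
            return 0.59
        return 0.81
    # No triggering action: base rate 20%
    if duration_s < 2.8:
        return 0.06
    if duration_s < 6.2:
        return 0.20
    return 0.30

print(p_viewed("click", 10.0, 1.0))  # 0.9
```

Read this way, the tree matches the headline result on slide 12: action is the best single predictor, with click-activated updates viewed 77% of the time overall.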
  • Not surprising -
  • Can the model tell us anything about how people view television content?

    (picture of final score, red button, dual screen)
  • When we think about media consumption, there are two main interaction models.
  • So far we’ve run eye tracking studies examining two scenarios: additional content on the TV, and additional content on a companion device. The methods for this work have been described in CHI presentations last year and this year, so I won’t go into detail about how they were run, but I will share some of the more interesting results with you.
  • On the web you have event data to help. Previous work showed that it was the most important factor. For lean-back consumption we need to consider different factors.
  • So within a TV viewing scenario, there are potentially lots of different types of interaction, and lots of different types of activities.

    How should we start to investigate this situation?

    We were working on this research with the BBC, who are the primary TV network in the UK, but also produce a lot of programs that go out worldwide. They are really interested in this space, and had already produced a prototype companion app for the show ‘Autumn Watch’, which is a popular nature show that goes out between September and November in the UK.

    Our approach was to get people to watch the program with the app, and observe what happened. Because we were interested in understanding attention orientation, we decided to track their eye movements during the study, so we could work out which device they were looking at.
  • So the first obvious technical issue here is, can we track eye movements in this scenario? We wanted to use free-standing eye trackers as they are less intrusive than head-mounted ones, but they are essentially designed to be used one at a time with a desktop display. We used two Tobii eye trackers, one mounted below the tablet, which was fixed in a clamp, and one in front of the TV.

    The second issue is, is the set-up ecologically valid? We’ve set up the lab to look like a living room, but there are two obvious problems. One is that the tablet is clamped, which may restrict the participant’s willingness to interact with it; the second is that we are still in a lab, we’re not in someone’s home.
  • The false positive rate was 6% of segments.

    The internal validity of the eye tracking data was pretty good.

    Participants primarily fixated faces on the TV, and text on the tablet, as we might expect, which shows that the calibration was reasonably accurate.
    The tablet had a camera mounted above it, and we performed a painstaking analysis where we checked, for every half second period, whether the video and eye tracking data agreed on whether the participant was fixating the tablet.

    This showed that the gaze tracking was pretty accurate, apart from a few cases where the eye tracker, mounted beneath the tablet, was occluded by the participants’ hand. All in all, the data were pretty good though, so we can see that eye tracking provides a quick and accurate means of monitoring attention.

    So what about the external validity of the data? We were concerned that mounting the tablet in a clamp so that it could be used with the eye tracker would restrict the extent to which people interacted with it. To check this, we ran a second experiment, without any eye tracking. The video analysis results from these two data sets were highly correlated, so we’re reasonably confident that the set-up didn’t restrict interaction much.


  • Half second time slices. Web cam used to record face.
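The half-second validation just described, comparing the eye tracker's judgement against manual video coding for each segment, can be sketched like this. The function name and toy data are illustrative, not from the study, which reported a false positive rate of 6% of segments.

```python
# Sketch of the half-second validation step: for each 0.5 s segment we
# have two boolean judgements of whether the participant was fixating
# the tablet -- one from the eye tracker, one from manual video coding.

def validate_gaze(tracker, video):
    """Return (agreement, false_positive_rate) between two boolean streams.

    A false positive is a segment where the tracker reports a tablet
    fixation but the video coding does not.
    """
    assert len(tracker) == len(video)
    n = len(tracker)
    agree = sum(t == v for t, v in zip(tracker, video))
    false_pos = sum(t and not v for t, v in zip(tracker, video))
    return agree / n, false_pos / n

# Toy example: 8 half-second segments.
tracker = [True, True, False, False, True, False, True, False]
video   = [True, True, False, False, False, False, True, False]
agreement, fp_rate = validate_gaze(tracker, video)
print(agreement, fp_rate)  # 0.875 0.125
```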
  • So what did the split of attention look like? This graph shows the percentage of participants who were looking at the TV, along the top, or the tablet, along the bottom, in 5 second intervals. One of the things we can see is that updates to the content on the tablet, shown by thick black lines, drew participants’ attention.

    Correlation 0.6, 2 sec bins
  • There were of course, other factors that drew attention too. Here’s an example of one of them.

    We can see that
  • Whether or not people view tablet.
  • % people viewing (y/n) vs. % people touching (y/n) in 2 s bins.
    Correlation 0.44, 2 sec bins
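A minimal sketch of the binned correlation described above: the proportion of participants viewing the tablet against the proportion touching it, aggregated into 2 s bins. The toy data below are made up for illustration; the study reported a correlation of 0.44.

```python
import math

# Pearson correlation between two binned proportion series, e.g.
# % of participants viewing vs. % touching the tablet per 2 s bin.
# The data here are illustrative, not the study's.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

viewing  = [0.10, 0.25, 0.60, 0.40, 0.15, 0.55]  # proportion viewing per 2 s bin
touching = [0.05, 0.20, 0.45, 0.30, 0.10, 0.35]  # proportion touching per 2 s bin
print(round(pearson(viewing, touching), 2))
```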
  • Explored the methodological issues – haven’t explored what the data actually means. Will do this tomorrow.
  • Transcript

    • 1. Attention Approximation: From the web to multi-screen television Caroline Jay caroline.jay@manchester.ac.uk Web Ergonomics Lab, University of Manchester Research funded by EPSRC Knowledge Transfer and Impact Acceleration Accounts
    • 2. ‘Attention Approximation’ • What is it? • Why is it useful? • Where did it come from? • How are we using it now? Attention Approximation 2
    • 3. Attention Approximation • Determining the ‘focus’ of attention, where ‘focus’ may vary along a number of dimensions: – Granularity • Which device? • Which part of the screen? – Population • Individual • Particular group • Everyone – Time period • Seconds • Time of day 3Attention Approximation
    • 4. Driving technology development with empirical models • Conceptual representations of interaction built entirely on data can help us – Predict technology usage – Inform interaction design • In applied research, ecological validity is important. 4Attention Approximation
    • 5. Ecologically valid interaction models • Task may not be predetermined. • We want to understand what the user is doing, and why. – We need to know the current focus of attention. • When there are multiple parallel information streams, determining which is in focus is hard. Attention Approximation 5
    • 6. Translating Web content to audio • Screen readers handled dynamic updates badly. • If we understood how sighted users view updates, could we translate them to audio more effectively? 6 SASWAT project, funded by EPSRC (EP/E062954/1)Attention Approximation
    • 7. Controlled study • Real Web pages • View for 30 seconds • Conditions: – Ticker active – Ticker stationary • Are people more likely to look at the moving ticker?
    • 8. Results Stationary ticker Moving ticker
    • 9. Results Stationary ticker Moving ticker
    • 10. Exploratory study • Participants completed tasks on sites that contained dynamic content. – No constraints on how task was completed. – No constraints on where task was completed. • Nine minutes of browsing. 10Attention Approximation
    • 11. Data-driven analysis • Can we predict whether people view dynamic updates as a function of their characteristics? • Chi-squared Interaction Detector (CHAID) analysis – Action: click, hover, keystroke, enter, none – Area: cm2 – Duration: seconds – (participant) – (addition or replacement) • Validation data from later study 11Attention Approximation
    • 12. Results • CHAID model predicts viewing behaviour with an accuracy of ~80% • Best predictor: action Keystroke/Enter/Hover 41% None 20% Click 77% Action 12Attention Approximation
    • 13. Click-activated updates: Click 77%, split by Area (cm2): <1.1 39%; 1.1-7.8 71%; 7.8-32.9 90%; >32.9 99% 13Attention Approximation
    • 14. All other updates: Keystroke/Enter/Hover 41%, split by Duration (s): <0.6 16%; 0.6-1.2 41%; 1.2-2.8 59%; >2.8 81%. None 20%, split by Duration (s): <2.8 6%; 2.8-6.2 20%; >6.2 30% 14Attention Approximation
    • 15. Why does the model take this form? • Area (and action) are properties of the update. – As an update increases in size it becomes more salient. • Duration is sometimes a property of the update, and sometimes a property of user behaviour. – The longer a suggestion list appears on the screen, the more likely it is to be viewed. – People pause to view the content. 15Attention Approximation
    • 16. Translating dynamic updates to audio • FireFox plugin – Prioritize click-activated updates. – Deliver keystroke-activated updates whenever there is a pause in typing. – Opt-in to receiving automatic updates. • Preferred by all participants in blind and double-blind evaluation when compared with FireVox baseline. 16Attention Approximation
    • 17. A conversation with BBC R&D • Can we predict behaviour with other types of media? • Can we use this to drive future media development? 17Attention Approximation
    • 18. 18Attention Approximation
    • 19. 19Attention Approximation
    • 20. 20Attention Approximation
    • 21. 21Attention Approximation
    • 22. Media interaction models • Desktop, Web and social media – Lean forward • Newspaper, film and television – Lean back • Two or more screens – Lean back and lean forward – Lean back and lean back – Lean forward and lean forward 22Attention Approximation
    • 23. Eye tracking TV viewing C. Jay, A. Brown, M. Glancy, M. Armstrong, S. Harper (2013). Attention approximation: from the Web to multi-screen television. TVUX-2013@CHI. http://goo.gl/dvAp3V A. Brown, M. Evans, C. Jay, M. Glancy, R. Jones, S. Harper (2014). HCI over multiple screens. CHI EA: alt.chi 2014. http://goo.gl/UJhPC5 23Attention Approximation
    • 24. Attention on a single screen 24Attention Approximation
    • 25. Television Second screen Attention across two screens • Observation of existing second screen app use • Unconstrained interaction • Eye tracking 25Attention Approximation
    • 26. Technical issues • Can we track eye movement over two screens? • Is the set up ecologically valid? 26Attention Approximation
    • 27. Data validity • Good calibration. • Good match between eye tracking data and video analysis. • Good match between data collected with and without eye tracking. 27Attention Approximation
    • 28. Results • 5:1 split of visual attention to the TV • Dwell times longer for the TV. Length of viewing period: TV – >30 seconds 27%, <2.5 seconds 30%; Tablet – >30 seconds <1%, <2.5 seconds 51% 28Attention Approximation
    • 29. Split of attention across two screens: Television / Tablet 29Attention Approximation
    • 30. Updates and action 30 TV: ‘There, there, there..!’ Tablet: ‘Where to see a dolphin’ Attention Approximation
    • 31. 31Attention Approximation
    • 32. 32Attention Approximation
    • 33. Attention approximation in action Attention Approximation 33
    • 34. Approximating attention in the wild • Improve the ecological validity of predictive models. • Detect focus to drive interaction on the fly. Attention Approximation 34
    • 35. Touch as a proxy for visual attention 35 Web proxy logging tool: A. Apaolaza, S. Harper & C. Jay (2013). Understanding users in the wild. W4A 2013. Attention Approximation
    • 36. Using attention approximation in technology development • It’s complicated – particularly in the wild – Influence – Inference • Model according to application – Production design – Content delivery • Ultimate contribution – To advance craft-based engineering with science 36Attention Approximation
    • 37. Find out more Publications, reports and data: http://goo.gl/1h4z4K caroline.jay@manchester.ac.uk The Web Ergonomics Lab The University of Manchester, UK http://wel.cs.manchester.ac.uk/ 37Attention Approximation
    • 38. Challenge • Model must predict future observations. – Internal validity: reliably predicts observations in the same setting. – External validity: reliably predicts observations in other settings. 38 What is the appropriate paradigm for building this type of model? Attention Approximation
    • 39. Challenges • Eye tracking is accurate, but only suitable for the lab – Currently investigating logging data and interaction on the device • Many factors to consider: – Interaction – Content – Environment • If we can effectively monitor these in the wild… – Privacy 39Attention Approximation
