Blignaut – Visual Span and Other Parameters for the Generation of Heatmaps

Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions from eye-tracking results. It is argued here that visual span is an essential component of visualizations of eye-tracking data, and an algorithm is proposed that allows the analyst to set the visual span as a parameter prior to generation of a heat map.

Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.

Transcript

Visual span and other parameters for the generation of heatmaps

Pieter Blignaut
Department of Computer Science and Informatics, University of the Free State, South Africa

Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org. ETRA 2010, Austin, TX, March 22–24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00

Abstract

Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions from eye-tracking results. It is argued here that visual span is an essential component of visualizations of eye-tracking data, and an algorithm is proposed that allows the analyst to set the visual span as a parameter prior to generation of a heat map.

Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.

Keywords: Eye-tracking, Visualization, Heatmaps

CR Categories: H.5.2 [Information Interfaces and Presentation]: User Interfaces; I.6.9 [Simulation and Modeling]: Visualization – Information visualization

1. Introduction

A fixation may be thought of as the mean x and y position coordinates measured over a minimum period of time during which the eye does not move more than a certain maximum amount [Eyenal 2001]. Therefore, the point of regard (POR), i.e. the gaze coordinates at a specific moment in time, must continuously remain within a small area for some minimum time for it to be regarded as a fixation.

Several techniques exist with which eye-tracking data can be visualized. Bar graphs, for example, may be used to show the number of fixations or visitors, or the average time spent per area of interest (AOI). Techniques also exist to overlay the original stimulus with visualizations in order to guide the analyst towards conclusions. Scan paths, for example, may be used to indicate the positions of fixations with dots that overlie an image of the original stimulus. The dots may be connected with lines to indicate the temporal relationship or saccades between fixations, while the radius of the dots can, optionally, represent fixation duration.

Heat maps are semi-transparent, multi-colored layers that cover areas of higher attention with warmer colors and areas of less attention with cooler colors. Instead of highlighting the areas of higher attention with red, these areas can be left uncolored while the areas of lesser attention are dimmed to a degree that corresponds to the amount of attention [Tobii Technology 2008; Spakov and Miniotas 2007]. Three-dimensional fixation maps can be used to make the heat map graphically more attractive, but they tend to be less informative since the further parts of the image are shown with less detail and are obscured by peaks at the near end [Tobii Technology 2008; Wooding 2002].

Despite their informative nature, heatmaps have disadvantages as well. Bojko [2009] lists several points of caution and provides a number of guidelines for using heatmaps. Bojko [2009] and Blignaut [2009] highlight the importance of the algorithm and parameters that are used to identify fixations. Three other aspects that can lead to erroneous interpretation of eye-tracking data must also be considered. Firstly, if the difference in time spent between areas with little attention and areas with much attention is large, the areas with little attention might not be colored clearly enough and can be mistaken as not having been observed at all. Secondly, the visual span, or foveal field of view, of an individual determines the amount of information that can be observed with peripheral vision. Thirdly, the transitions from one color to the next are not sharp, and it is difficult to interpret the colors in terms of a numeric value for the specific metric of attention that is used.

This paper focuses on heatmaps as a visualization technique for eye-tracking data. An algorithm to generate heatmaps is discussed. User-defined parameters such as visual span, transparency, color range, and the probability for an object to be observed at a specific distance from the centre of a fixation are included in this algorithm. The use of contour lines to visualize separate intervals in the continuous color map is proposed.

2. Experimental set-up

The stimuli used as examples in this paper were taken from a memory recall experiment during which chess players had to look at a configuration of chess pieces for 15 seconds, whereafter they had to reconstruct the configuration. The recall performance of the participants is beyond the scope of this paper; only the eye-tracking data that was captured during the fifteen seconds of exposure time was used in the visualizations.

Data was captured with a Tobii 1750 eye-tracker. The stimuli were displayed on a 17" screen with a resolution of 1024×768 at an eye-screen distance of 600 mm. The stimuli were sized so that 1° of visual angle was equivalent to about 33 pixels or 10.5 mm. The individual squares of the chess board spanned about 20 mm (2°) while each piece was displayed at about 7×8 mm (<1°).

3. Generation of heatmaps

3.1 Visual span

Visual span refers to the extremes of the visual field of a viewer, i.e. the area that can be cognitively observed with a single fixation. The visual span of a fixation is measured as the distance (in pixels) from the centre of a fixation to the furthest point at which an observer might be able to perceive objects. This is not the
same as the radius of a fixation, which is the distance from the centre of a fixation to the POR that is the furthest away.

In Figure 2, circles are drawn around fixation centres to indicate the visual field of highest acuity (diameter = 2°). Fixations are shown as dots, with the size of a dot representing the duration of the fixation on a linear scale. The 2° visual fields of Figure 2 might lead an analyst to conclude that the participant did not see the pieces on a2, b8, g1 or h2. One could rightfully ask why a participant would bother to look at g2.

Figure 2. Circles around fixations to indicate the visual field of highest acuity (diameter = 2°).

Bearing in mind, however, that a person might be able to observe objects at 2.5° from the centre of the foveal zone (5° visual span) with 50% acuity [Duchowski 2007], it might be possible that the viewer perceived the white king and white pawn on g1 and h2 respectively, although he did not look at them directly. Using the algorithm in Figure 1, a heat map was generated that illustrates this possibility (Figure 3). The same data set was used as in Figure 2, but the visual span (Line 6) was set to 5°.

     1. for each pixel of original stimulus
     2.   Weight[pixel] := 0                  //Init pixel weights
     3. end for

        //User opted to let the system assign the
        //highest pixel weight to the weight for red
     4. WtRed := 0
     5. for each fixation
     6.   for each pixel within the visual span of current fixation
     7.     D := Distance pixel to fixation centre
            //p and W determined as described above
     8.     p := Probability
     9.     W := FixationWeight
    10.     Weight[pixel] := Weight[pixel] + (W*p)
    11.   end for
    12.   if Weight[pixel] > WtRed then
    13.     WtRed := Weight[pixel]
    14.   end if
    15. end for

    16. for each pixel of original stimulus with Weight[pixel] > 0
          //Get respective colour components
    17.   r := GetRedValue(Weight[pixel], WtRed)
    18.   g := GetGreenValue(Weight[pixel], WtRed)
    19.   b := GetBlueValue(Weight[pixel], WtRed)
          //Add transparency
    20.   Pixel.Red := (T*Pixel.Red + (10-T)*r)/10
    21.   Pixel.Grn := (T*Pixel.Grn + (10-T)*g)/10
    22.   Pixel.Blu := (T*Pixel.Blu + (10-T)*b)/10
          //Draw contours if selected
    23.   if Draw contours then
    24.     c := Contour interval
    25.     if Weight[pixel] div c <> Weight[neighbour pixel] div c then
              //Make the colour of the pixel brown
    26.       Pixel.Red := 204
    27.       Pixel.Grn := 102
    28.       Pixel.Blu := 0
    29.     end if
    30.   end if
    31. end for

Figure 1. Algorithm for the generation of heat maps.

3.2 Assigning weights to fixations and pixels

Analysts should be allowed to select the metric of attention they wish to plot in a heatmap. In other words, they should be able to select whether they want to base a heat map on the number of fixations, the duration of fixations, or the number of participants who observed a target area [Bojko 2009]. In the case of fixation duration, the fixation weight (W) is set to the total duration (in ms) of the fixation (Figure 1, Line 9). For the number of fixations or recordings, the fixation weight is set to a value that the user may select to ensure smooth coloring, typically W=100.

Each fixation contributes to the total weight of all pixels within its visual field (Figure 1, Line 10). Since the visual fields of different fixations may overlap, it is possible that various fixations contribute to the total weight of a specific pixel. For the duration and number of fixations, all fixations within whose visual field a pixel falls contribute to its weight. For the number of participant recordings, only the nearest fixation of a specific recording contributes to the total weight of a pixel, provided that the pixel falls in the visual field of that fixation.

3.3 Probability

The probability that an observer will perceive an object during a fixation, p ∈ [0,1], decreases as the distance of the object from the centre of a fixation increases. For each pixel within the visual span of a fixation, the fixation weight is multiplied by p before being added to the total weight of the pixel (Figure 1, Line 10).

For the algorithm proposed in this paper, a user may select from three different models for scaling the weight over the visual field, V: linear, Gaussian and no scaling. For no scaling, p=1 for all pixels within the visual field of a fixation, i.e. the complete weight of the fixation contributes to the total weight of all pixels within its visual field (example in Figure 3a). For linear scaling, the probability p at a distance D from the fixation centre is p = 1 − D/V, where D ≤ V.
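The weight-accumulation phase of Figure 1 (Lines 1–15), combined with the scaling models of Section 3.3, can be sketched in Python. This is a minimal illustration rather than the paper's implementation: the names `probability` and `accumulate_weights`, the `(x, y, W)` fixation tuples and the bounding-box loop are assumptions, and the Gaussian constant follows c = FWHM/2.3548 with FWHM taken as 40% of the visual span.

```python
import math

def probability(d, span, model="gaussian"):
    # Probability p in [0,1] that a pixel at distance d (pixels) from a
    # fixation centre is perceived, for a visual span of `span` pixels
    # (Section 3.3: no scaling, linear or Gaussian).
    if d > span:
        return 0.0
    if model == "none":            # p = 1 over the whole visual field
        return 1.0
    if model == "linear":          # p = 1 - D/V, for D <= V
        return 1.0 - d / span
    # Gaussian: p = exp(-D^2 / (2c^2)) with a = 1, b = 0 and
    # c = FWHM / 2.3548, FWHM = 0.4 * span (so c is about 0.17 * span)
    c = 0.4 * span / 2.3548
    return math.exp(-d * d / (2.0 * c * c))

def accumulate_weights(width, height, fixations, span, model="gaussian"):
    # Figure 1, Lines 1-15: add W*p to every pixel within the visual
    # span of each fixation and track the highest total weight (WtRed).
    # `fixations` is a list of (x, y, W) tuples, W e.g. duration in ms.
    weight = [[0.0] * width for _ in range(height)]
    wt_red = 0.0
    for fx, fy, w in fixations:
        for y in range(max(0, fy - span), min(height, fy + span + 1)):
            for x in range(max(0, fx - span), min(width, fx + span + 1)):
                d = math.hypot(x - fx, y - fy)
                weight[y][x] += w * probability(d, span, model)
                wt_red = max(wt_red, weight[y][x])
    return weight, wt_red
```

Because overlapping visual fields all add into `weight`, a pixel covered by several fixations accumulates their combined attention, as described in Section 3.2.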
Figure 4: Graph of the probability to be observed against distance from fixation centre (degrees). The red curve is for linear scaling and the blue curve for Gaussian scaling (FWHM = 40% of a 5° visual span).

For Gaussian scaling (example in Figure 3b), pixels near the centre of a fixation are assigned more weight than would have been the case with linear scaling, while those further off are assigned less weight (Figure 4). For Gaussian scaling, p = a·e^(−(D−b)²/2c²), with a=1 and b=0. The constant c can be expressed in terms of the full width of the distribution at half maximum (FWHM), i.e. FWHM = 2.3548 × c [Wikipedia]. If FWHM is defined to represent 0.4 of the maximum visual span, it follows that c = 0.17 × (visual span).

Figure 3a (top): Heat map of the same data set as Figure 2. No scaling. Duration for red = 1264 ms.
Figure 3b (bottom): Heat map of the same data set as Figure 2. Gaussian scaling (FWHM = 40% of a 5° visual span). Duration for red = 1264 ms.

3.4 Color model

The RGB color model is an additive model in which red, green and blue light are added together to reproduce a broad spectrum of colors. When generating heat maps, each pixel of the stimulus is assigned an RGB triplet (R, G, B), where each component is an integer in the range 0 through 255.

The algorithm of Figure 1 uses a set of functions, GetRedValue, GetGreenValue and GetBlueValue (Lines 17, 18 & 19), to return the intensities of red, green and blue respectively for a specific pixel, based on its weight, according to the composite linear model of Figure 5. Other color models, such as CMYK and CIE, can also be implemented.

Figure 5: A composite linear model for the relationships between RGB components and pixel weight. (Weight for red is set to 100.)

3.5 Handling transparency

The analyst has to select a transparency index for the heat map, T ∈ [0,10], where 0 indicates no transparency (the stimulus is totally obscured) and 10 indicates complete transparency (the heat map is invisible). Every pixel of the original stimulus that is covered by the heat map, i.e. with pixel weight > 0, is edited by scaling its red component by the transparency factor T/10 (Figure 1, Line 20). Thereafter, 1 − T/10 of the red component of the heat map at that pixel is added to the red component of the pixel of the original stimulus (Figure 1, Line 20). The green and blue components are edited likewise (Lines 21 & 22). For Figures 3 and 6 the transparency index was set to 5, while for Figure 7 it was set to 8.
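The per-channel blend of Figure 1, Lines 20–22, can be sketched as a small helper. The name `blend_pixel` and the integer RGB tuples are illustrative assumptions; the arithmetic is the paper's (T·stimulus + (10−T)·heat)/10 mix.

```python
def blend_pixel(stimulus_rgb, heat_rgb, t):
    # Figure 1, Lines 20-22: combine a stimulus pixel with the heat-map
    # colour using the transparency index t in [0, 10].
    #   t = 0  -> heat map only (stimulus totally obscured)
    #   t = 10 -> stimulus only (heat map invisible)
    if not 0 <= t <= 10:
        raise ValueError("transparency index must be in [0, 10]")
    return tuple((t * s + (10 - t) * h) // 10
                 for s, h in zip(stimulus_rgb, heat_rgb))
```

For example, with T = 5 (as used for Figures 3 and 6), a grey stimulus pixel (200, 200, 200) under a pure red heat-map pixel (255, 0, 0) becomes (227, 100, 100).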
Figure 6: Heat map of the same data set as Figure 3, but with the duration for red set to 600 ms instead of allowing the algorithm to allocate the highest aggregate duration to red.

Figure 7: Heat map with contour lines at intervals of 200 ms. Duration for red = 1200 ms; transparency = 8.

3.6 Color range

Besides the parameters for visual span, the model for scaling the weight and the transparency index, the analyst may decide to set the weight to be used for red, or choose to let the algorithm assign the highest weight of all pixels (as was done in Figure 1, Lines 4 & 12-14). A fixed value is useful if the analyst wants to determine which areas received a certain minimum amount of attention [Bojko 2009]. Figure 6 shows an example of a heat map where the duration for red was set to 600 ms instead of the 1264 ms that was determined by the algorithm and used for Figure 3.

3.7 Adding contours

A heat map provides a qualitative overview of viewers' attention. Although a specific color can be mapped quantitatively in terms of the selected metric of attention, it is not easy to communicate the value. Contours can be added to separate intervals in the continuous color map.

Contour lines designate the borders between different intervals of pixel weight. If two adjacent pixels belong to different intervals, one of them should be colored differently to indicate a contour point (Figure 1, Lines 23–30). Figure 7 shows a heat map of the same data as in Figure 3 with contour lines at intervals of 200 ms. It is believed that the contour lines assist substantially in the interpretation of heatmaps. For example, it is now clear that the pawn on d4 received about twice as much attention (average 900 ms) as the pawn on e5 (average 450 ms). The contour lines also compensate for the loss of color information when the transparency is increased to improve visibility of the original stimulus.

4. Summary

Although heat maps are valuable for identifying qualitative trends in eye-tracking data, it is important to have control over various settings to enable sensible comparisons. A simple algorithm was presented that allows analysts to indicate the amount of peripheral vision that should be accommodated. The algorithm also allows the analyst to select the metric of attention together with an appropriate weight. The drop-off in visual attention can be scaled linearly, according to a Gaussian function, or not at all. The threshold value for red as well as the transparency can be adjusted. The addition of contour lines provides a means to visualize areas of equal attention.

References

BLIGNAUT, P.J. 2009. Fixation identification: The optimum threshold for a dispersion algorithm. Attention, Perception and Psychophysics, 71(4), 881-895.

BOJKO, A. 2009. Informative or misleading? Heatmaps deconstructed. In J.A. Jacko (ed.), Human-Computer Interaction, Part 1, HCII 2009, LNCS 5610, 30-39. Springer-Verlag, Berlin.

DUCHOWSKI, A.T. 2007. Eye Tracking Methodology: Theory and Practice (2nd ed.). Springer, London.

EYENAL. 2001. Eyenal (Eye-Analysis) Software Manual. Applied Science Group. Retrieved 12 June 2008 from http://www.csbmb.princeton.edu/resources/DocsAndForms/site/forms/Eye_Tracker/Eyenal.pdf

SPAKOV, O. and MINIOTAS, D. 2007. Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering, 2(74), 55-58.

TOBII TECHNOLOGY AB. 2008. Tobii Studio 1.2 User Manual, version 1.0. Tobii Technology.

WIKIPEDIA. Gaussian function. Retrieved 30 November 2009 from http://en.wikipedia.org/wiki/Gaussian_function.

WOODING, D.S. 2002. Fixation maps: Quantifying eye-movement traces. In Proc. ETRA 2002, ACM, 31-36.
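The contour test of Figure 1 (Lines 25–28) reduces to comparing the interval indices of adjacent pixels, obtained by integer division of their weights by the contour interval. A sketch with the hypothetical helpers `is_contour_point` and `mark_contours` (names and data layout are assumptions, not from the paper):

```python
def is_contour_point(w, w_neighbour, interval):
    # Figure 1, Line 25: adjacent pixels straddle a contour border when
    # integer division by the contour interval gives different indices.
    return int(w // interval) != int(w_neighbour // interval)

def mark_contours(weight, interval, image, brown=(204, 102, 0)):
    # Lines 26-28: colour a pixel brown when its right or lower
    # neighbour falls into a different contour interval.
    # `weight` and `image` are row-major 2-D lists of equal size.
    h, w = len(weight), len(weight[0])
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if (ny < h and nx < w and
                        is_contour_point(weight[y][x], weight[ny][nx], interval)):
                    image[y][x] = brown
    return image
```

With a 200 ms interval, a pixel whose neighbour's accumulated duration crosses a multiple of 200 ms is marked, producing contour lines as in Figure 7.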