In the online world, user engagement refers to the quality of the user experience that emphasizes the phenomena associated with wanting to use a web application longer and more frequently. User engagement is a multifaceted, complex phenomenon, giving rise to a number of approaches for its measurement: self-reporting (e.g., questionnaires); observational methods (e.g., facial expression analysis, desktop actions); and web analytics using online behavior metrics. These methods represent various trade-offs between the scale of the data analyzed and the depth of understanding. For instance, surveys are hardly scalable but offer rich, qualitative insights, whereas click data can be collected on a large scale but are more difficult to analyze. Still, the core research questions each type of measurement is able to answer remain unclear. This talk will present various efforts aiming at combining approaches to measure engagement and seeking to provide insights into what questions to ask when measuring engagement.
Keynote at 18th International Conference on Application of Natural Language to Information Systems (NLDB2013), University of Salford, MediaCityUK
Blog: http://labtomarket.wordpress.com
In the online world, user engagement refers to the quality of the user experience that emphasizes the positive aspects of the interaction with a web application and, in particular, the phenomena associated with wanting to use that application longer and frequently. User engagement is a key concept in the design of web applications, motivated by the observation that successful applications are not just used, but are engaged with. Users invest time, attention, and emotion in their use of technology, and it must satisfy both their pragmatic and hedonic needs and expectations. Measurement is key for evaluating the success of information technologies, and is particularly critical to any web applications, from media to e-commerce sites, as it informs our understanding of user needs and expectations, system design and functionality. For instance, news portals have become a very popular destination for web users who read news online. As there is great potential for online news consumption but also serious competition among news portals, online news providers strive to develop effective and efficient strategies to engage users longer in their sites. Measuring how users engage with a news portal can inform the portal if there are areas that need to be enhanced, if current optimization techniques are still effective, if the published material triggers user behavior that causes engagement with the portal, etc. Understanding the above is dependent upon the ability to measure user engagement. The focus of this tutorial is how user engagement is currently being measured and future considerations for its measurement.
This tutorial is part of the World-Wide-Web Conference, held in Rio de Janeiro, May 2013.
Measuring user engagement: the do, the do not do, and the we do not know (Mounia Lalmas-Roelleke)
In the online world, user engagement refers to the quality of the user experience that emphasises the phenomena associated with wanting to use an application longer and more frequently. User engagement is a multifaceted, complex phenomenon; this gives rise to a number of measurement approaches. Common ways to evaluate user engagement include self-report measures, e.g., questionnaires; physiological methods, e.g., cursor and eye tracking; and web analytics, e.g., number of site visits and click depth. These methods represent various trade-offs in terms of the setting (laboratory versus in the wild), the object of measurement (user behaviour, affect or cognition) and the scale of the data collected. This talk will present various efforts aiming at combining approaches to measure engagement. A particular focus will be what these measures, individually and combined, can and cannot tell us about user engagement. The talk will use examples of studies on news sites, social media, and native advertising.
User Engagement: A Cross-Disciplinary Framework (Nim Dvir)
(Nim Dvir, Department of Information Science, State University of New York at Albany, NY, USA)
In recent years, the term “engagement” has been increasingly used in the broader academic literature as a possible framework to measure and explain human interactions with technology. However, current research is unstructured and spread across various disciplines, leading to wide-ranging, and sometimes disparate, perspectives, vocabularies and measurement methodologies. As a result, the theoretical meaning and foundations underlying “engagement” remain unclear in the literature to date. To address this issue, I conducted a comprehensive literature review to synthesize knowledge on engagement from several academic fields: Informatics, Information Systems, Communications, Marketing, and Education. Specifically, my review focuses on textual and linguistic information, and how the presentation, framing and organization of such information influence user experience (UX) and online behavior. Using this interdisciplinary approach, I believe it is possible to identify a basic process of engagement that occurs in many different contexts. This may lead to progress towards understanding and assessing user engagement by suggesting unified, generalized and overarching models and principles that can be applied across domains and applications.
Keywords: Information Organization, User Experience, Engagement, Content Strategy
Digital Communication Power Tools: Speakers Notes version (Marilyn Herie)
This Keynote presentation at the 2012 Ontario Association of Social Work annual conference outlines the "digital communication power tools" for social workers and other practitioners. Speakers' notes can be toggled on or off. This file provides the Speakers Notes that accompany the slides.
A Rubric for Assessing the UX of Online Museum Collections: Preliminary Findi... (craigmmacdonald)
The increasing popularity of the Web and the proliferation of mobile technologies have had a tremendous impact on museums. The deployment of new technology into physical museum spaces has greatly enhanced the in-person museum experience, but efforts to improve the virtual museum experience have been less successful. This lightning talk describes our preliminary efforts to develop and validate a user experience (UX) assessment rubric for online museum collections. Drawing from existing research and current interface design and usability best practices, this rubric provides a set of criteria for assessing the extent to which an online museum collection provides a positive user experience for online visitors. Future research directions will be presented alongside the results from an initial pilot study.
Presented at the 2014 Museums and the Web conference in Baltimore, MD.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex... (Waqas Tariq)
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE) and what determinants contribute to PE in the context of using mobile multimedia services. The results indicate that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants in explaining Perceived Enjoyment (PE) in mobile multimedia usage.
UX Librarians: User Advocates, User Researchers, Usability Evaluators, or All... (craigmmacdonald)
User Experience (UX) is gaining momentum as a critical success factor across all industries and sectors, including libraries. While usability studies of library websites and related digital interfaces are commonplace, UX is becoming an increasingly popular topic of discussion in the community and is emerging as a new specialization for library professionals. To better understand this phenomenon, this paper reports the results of a qualitative study involving interviews with 16 librarians who have “User Experience” in their official job titles. The results show that UX Librarians share a user-centered mindset and many common responsibilities, including user research, usability testing, and space/service assessments, but each individual UX Librarian is also somewhat unique in how they approach and describe their work. As a whole, the research sheds light on an emerging library specialization and provides a valuable snapshot of the current state of UX Librarianship.
Full paper available at http://www.craigmacdonald.com/research-2/
Assessing the User Experience (UX) of Online Museum Collections: Perspectives... (craigmmacdonald)
Studies show that online museum collections are among the least popular features of a museum website, which many museums attribute to a lack of interest. While it’s certainly possible that a large segment of the population is simply uninterested in viewing museum objects through a computer screen, it is also possible that a large number of people want to find and view museum objects digitally but have been discouraged from doing so due to the poor user experience (UX) of existing online-collection interfaces. This paper describes the creation and validation of a UX assessment rubric for online museum collections. Consisting of ten factors, the rubric was developed iteratively through in-depth examinations of several existing museum-collection interfaces. To validate the rubric and test its reliability and utility, an experiment was conducted in which two UX professionals and two museum professionals were asked to apply the rubric to three online museum collections and then provide their feedback on the rubric and its use as an assessment tool. This paper presents the results of this validation study, as well as museum-specific results derived from applying the rubric. The paper concludes with a discussion of how the rubric may be used to improve the UX of museum-collection interfaces and future research directions aimed at strengthening and refining the rubric for use by museum professionals.
Presented at the 2015 Museums and the Web conference in Chicago IL.
Mobile Search: A Force to be Reckoned With! (Karen Church)
This invited talk was given at ECIR 2013 Industry Day in Moscow on the 27th March 2013. The talk was on the topic of mobile search, a research area I've devoted the past 10 years to.
Recently the world has witnessed a revolution in terms of mobile web and mobile search usage. Mobile phones, once deemed simple communication devices, now provide mobile users with access to a wealth of online content, anytime and anywhere. In 2012, the increasing presence of mobile devices caused desktop search to decline for the first time ever, a level of growth that simply cannot be ignored.
My aim is to take a nostalgic look back at the simple beginnings of mobile search and discuss how, why and in what ways mobile search has evolved over the past 8-10 years. I highlight patterns of mobile search usage and show how they not only differ from desktop search, but they are continually evolving. And instead of taking a single, data-centric viewpoint of mobile search, I also discuss user-centric studies, highlighting the unique needs, intents and motivations of mobile searchers. Finally, I share some thoughts about where mobile search is heading, the challenges that lie ahead and discuss some of the factors that I think are important when it comes to enriching the future search experiences of mobile users.
Karen Church
Research Scientist
Telefonica Research
www.karenchurch.com
@karenchurch
What Does it Mean for a System to be Useful? An Exploratory Study of Usefulness (craigmmacdonald)
HCI has always focused on designing useful and usable interactive systems, but usability has dominated the field while research on usefulness has been largely absent. With user experience (UX) emerging as a dominant paradigm, it is necessary to consider the meaning of usefulness for modern computing contexts. This paper describes the results of an exploratory study of usefulness and its relation to contextual and experiential factors. The results show that a system’s usefulness is shaped by the context in which it is used, usability is closely linked to usefulness, usefulness may have both pragmatic and hedonic attributes, and usefulness is critical in defining users’ overall evaluation of a system (i.e., its goodness). We conclude by discussing the implications of this research and describing plans for extending our understanding of usefulness in other settings.
Paper presented at the 2014 ACM Conference on Designing Interactive Systems (DIS 2014).
User Experience is Everything (and Vice Versa): Lessons for Libraries and Inf... (craigmmacdonald)
User Experience is more than just a buzzword; it is a design philosophy that puts “users” at the center and recognizes that providing them with opportunities for enjoyment is just as important (if not more) than eliminating pain and frustration in their interactions with interfaces (both digital and analog). By de-constructing the cognitive and emotional dimensions of UX and tracing how UX has evolved from its historical roots in the Human-Computer Interaction (HCI) discipline to its present-day application across multiple domains and industries (including Library and Information Science), this talk will inspire information professionals and their organizations to take a more UX-centric approach to the design and/or evaluation of their technologies, services, and spaces.
What Can IA Learn from LIS? Perspectives from LIS Education (craigmmacdonald)
Morville & Rosenfeld's "Information Architecture for the World Wide Web" positioned IA as an approach to web/interface design that is deeply embedded in, and strongly informed by, the LIS discipline. To reconsider the impact of the LIS discipline on the IA profession, this presentation (and a subsequent paper) reports the preliminary results of an analysis of syllabi of information architecture courses offered by graduate schools of Library and Information Science in the United States and Canada.
Presented for the Teaching IA workshop at the 2014 IA Summit in San Diego, CA.
Designing and deploying mobile user studies in the wild: a practical guide (Karen Church)
This tutorial was presented as part of Mobile HCI 2012 in San Francisco on the 19th September 2012. The tutorial aims to provide a practical guide to conducting mobile field studies, based on the lessons learned from the research I've been involved in while working as a Research Scientist at Telefonica Research, Barcelona. I cover how to design effective mobile field studies, the importance of mobile prototyping, the impact of various design choices on the study setup and deployment, how to engage participants, and how to avoid ethical and legal issues. I've also tried to include listings of useful resources for those who are interested in conducting mobile field studies of their own.
More details: http://mm2.tid.es/mhcitutorial/
Karen Church
Research Scientist
Telefonica Research
www.karenchurch.com
@karenchurch
A good search engine is one where users come very regularly, type their queries, get their results, and leave quickly. In terms of user engagement metrics from web analytics, this translates to a low dwell time, often a low CTR, but a very high return rate. But user engagement is not just about this. User engagement is a multifaceted, complex phenomenon, giving rise to a number of approaches for its measurement: self-reporting (e.g., questionnaires); observational methods (e.g., facial expression analysis, desktop actions); and of course web analytics using online behavior metrics. These methods represent various trade-offs between the scale of the data analyzed and the depth of understanding. For instance, surveys are hardly scalable but offer rich, qualitative insights, whereas click data can be collected on a large scale but are more difficult to analyze. This talk will present various efforts aiming at combining approaches to measure engagement and seeking to provide insights into what makes an engaging experience. The talk will focus on what makes users click or not click, and what this means in terms of user engagement.
SIGIR 2013 Industry Track: Keynote by Ricardo Baeza-Yates - VP, Yahoo! Research Europe & Latin America
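The analytics metrics the abstract mentions (dwell time, CTR, return rate) can be derived from ordinary interaction logs. Below is a minimal sketch in Python; the log format, field names, and the exact metric definitions (session span as dwell time, clicked events over all events as CTR, multi-session users as returning users) are illustrative assumptions, not any particular system's implementation.

```python
from collections import defaultdict

# Hypothetical log records: (user, session_id, timestamp_seconds, clicked)
log = [
    ("u1", "s1", 0, True),
    ("u1", "s1", 40, False),
    ("u1", "s2", 86400, True),
    ("u2", "s3", 10, False),
]

def engagement_metrics(log):
    # Group events by (user, session).
    sessions = defaultdict(list)
    for user, sid, ts, clicked in log:
        sessions[(user, sid)].append((ts, clicked))
    # Dwell time: time span of each session (a crude proxy).
    dwell = [max(t for t, _ in evs) - min(t for t, _ in evs)
             for evs in sessions.values()]
    # CTR: fraction of logged events that are clicks.
    ctr = sum(c for evs in sessions.values() for _, c in evs) / len(log)
    # Return rate: fraction of users with more than one session.
    by_user = defaultdict(set)
    for user, sid in sessions:
        by_user[user].add(sid)
    return_rate = sum(1 for s in by_user.values() if len(s) > 1) / len(by_user)
    return {"avg_dwell": sum(dwell) / len(dwell),
            "ctr": ctr,
            "return_rate": return_rate}

print(engagement_metrics(log))
```

Note how the "good search engine" pattern described above shows up here as low dwell and CTR combined with a high return rate; no single metric tells the whole story.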
A Journey into Evaluation: from Retrieval Effectiveness to User Engagement (Mounia Lalmas-Roelleke)
Slides of my presentation at SPIRE 2015 at King's College London.
The talk builds on my work on user engagement, and proposes to move beyond clicks and relevance in information retrieval evaluation, towards user engagement. The talk presents some of my work to show how this could be done. There are two focal points: moving from intra- to inter-session evaluation, and moving from small- to large-scale evaluation. These focal points acknowledge that (1) happy users come back, and (2) we need to properly identify who the happy users are. I hope that this talk will bring new perspectives to those building search systems and wanting to evaluate them beyond their retrieval effectiveness.
Tutorial on metrics of user engagement -- Applications to Search & E-commerce (Mounia Lalmas-Roelleke)
User engagement plays a central role in companies operating online services, such as search engines, news portals, e-commerce sites, and social networks. A main challenge is to leverage collected knowledge about the daily online behavior of millions of users to understand what engages them in the short term and, more importantly, in the long term. The most common way engagement is measured is through various online metrics, acting as proxy measures of user engagement. This tutorial reviews these metrics and proposes a taxonomy for them. As case studies, it focuses on two types of services: search and e-commerce. The tutorial also discusses how to develop better machine learning models to optimize online metrics, and how to design experiments to test these models.
This tutorial was given by Mounia Lalmas from Spotify and Liangjie Hong from Etsy Inc.
This tutorial was presented at WSDM 2018 (11th ACM International Conference on Web Search and Data Mining). It is the first delivery of this tutorial, so feedback and comments are welcome. We intend to continue working on this material.
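As a rough illustration of what a taxonomy of proxy engagement metrics can look like, the sketch below groups common web-analytics metrics into popularity, activity, and loyalty dimensions. The group names and metric names here are illustrative assumptions, not necessarily the taxonomy proposed in the tutorial.

```python
# Illustrative taxonomy: the grouping and metric names are assumptions,
# not necessarily those of the WSDM 2018 tutorial.
METRIC_TAXONOMY = {
    "popularity": ["unique_users", "visits", "total_clicks"],   # how many users come
    "activity":   ["clicks_per_visit", "dwell_time"],           # what they do per visit
    "loyalty":    ["return_rate", "days_active_per_month"],     # whether they come back
}

def classify(metric):
    """Return the taxonomy group a metric name belongs to, or None."""
    for group, metrics in METRIC_TAXONOMY.items():
        if metric in metrics:
            return group
    return None
```

Such a grouping makes explicit that optimizing one dimension (e.g., activity per visit) can trade off against another (e.g., long-term loyalty), which is one motivation for combining metrics rather than relying on a single one.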
Slides for the keynote "Social Media and AI: Don’t forget the users" at the WWW 2017 workshop "International Workshop on Modeling Social Media: Machine Learning and AI for Modeling and Analyzing Social Media". I argue that we need to consider two things: the sources we use to build good algorithms, and whether users are impacted the way we want to impact them. The talk is based on two use cases around providing diversity (something many of us believe is good) to users:
1. Engaging through diversity: serendipity (same algorithm, different sources)
2. Engaging through diversity: awareness (effective algorithm, perception)
My goal is to say that we may have the best AI, but we may get it wrong if we forget the users. I don't have answers, but it is important that we ask the right questions in today's world.
Digital Communication Power Tools: Speakers Notes versionMarilyn Herie
This Keynote presentation at the 2012 Ontario Association of Social Work annual conference outlines the "digital communication power tools" for social workers and other practitioners. Speakers' notes can be toggled on or off. This file provides the Speakers Notes that accompany the slides.
A Rubric for Assessing the UX of Online Museum Collections: Preliminary Findi...craigmmacdonald
The increasing popularity of the Web and the proliferation of mobile technologies have had a tremendous impact on museums. The deployment of new technology into physical museum spaces has greatly enhanced the in-person museum experience, but efforts to improve the virtual museum experience have been less successful. This lightning talk describes our preliminary efforts to develop and validate a user experience (UX) assessment rubric for online museum collections. Drawing from existing research and current interface design and usability best practices, this rubric provides a set of criteria for assessing the extent to which an online museum collection provides a positive user experience for online visitors. Future research directions will be presented alongside the results from an initial pilot study.
Presented at the 2014 Museums and the Web conference in Baltimore, MD.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia service is relatively new but has quickly dominated people¡¯s lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of study is to examine the role of Perceived Enjoyment (PE) and what determinants can contribute to PE in the context of using mobile multimedia service. The result indicates that PE is influencing on Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) and directly Behavior Intention (BI). Aesthetics and flow are key determinants to explain Perceived Enjoyment (PE) in mobile multimedia usage.
UX Librarians: User Advocates, User Researchers, Usability Evaluators, or All...craigmmacdonald
User Experience (UX) is gaining momentum as a critical success factor across all industries and sectors, including libraries. While usability studies of library websites and related digital interfaces are commonplace, UX is becoming an increasingly popular topic of discussion in the community and is emerging as a new specialization for library professionals. To better understand this phenomenon, this paper reports the results of a qualitative study involving interviews with 16 librarians who have “User Experience” in their official job titles. The results show that UX Librarians share a user-centered mindset and many common responsibilities, including user research, usability testing, and space/service assessments, but each individual UX Librarian is also somewhat unique in how they approach and describe their work. As a whole, the research sheds light on an emerging library specialization and provides a valuable snapshot of the current state of UX Librarianship.
Full paper available at http://www.craigmacdonald.com/research-2/
Assessing the User Experience (UX) of Online Museum Collections: Perspectives...craigmmacdonald
Studies show that online museum collections are among the least popular features of a museum website, which many museums attribute to a lack of interest. While it’s certainly possible that a large segment of the population is simply uninterested in viewing museum objects through a computer screen, it is also possible that a large number of people want to find and view museum objects digitally but have been discouraged from doing so due to the poor user experience (UX) of existing online-collection interfaces. This paper describes the creation and validation of a UX assessment rubric for online museum collections. Consisting of ten factors, the rubric was developed iteratively through in-depth examinations of several existing museum-collection interfaces. To validate the rubric and test its reliability and utility, an experiment was conducted in which two UX professionals and two museum professionals were asked to apply the rubric to three online museum collections and then provide their feedback on the rubric and its use as an assessment tool. This paper presents the results of this validation study, as well as museum-specific results derived from applying the rubric. The paper concludes with a discussion of how the rubric may be used to improve the UX of museum-collection interfaces and future research directions aimed at strengthening and refining the rubric for use by museum professionals.
Presented at the 2015 Museums and the Web conference in Chicago IL.
Mobile Search: A Force to be Reckoned With!Karen Church
This invited talk was given at ECIR 2013 Industry Day in Moscow on the 27th March 2013. The talk was on the topic of mobile search, a research area I've devoted the past 10 years to.
Recently the world has witnessed a revolution in terms of mobile web and mobile search usage. Mobile phones, once deemed as simple communications devices, now provide mobile users with access to a wealth of online content, anytime and anywhere. In 2012, the increasing presence of mobile devices caused desktop search to decline for the first time ever; a level of growth that simply cannot be ignored.
My aim is to take a nostalgic look back at the simple beginnings of mobile search and discuss how, why and in what ways mobile search has evolved over the past 8-10 years. I highlight patterns of mobile search usage and show how they not only differ from desktop search, but they are continually evolving. And instead of taking a single, data-centric viewpoint of mobile search, I also discuss user-centric studies, highlighting the unique needs, intents and motivations of mobile searchers. Finally, I share some thoughts about where mobile search is heading, the challenges that lie ahead and discuss some of the factors that I think are important when it comes to enriching the future search experiences of mobile users.
Karen Church
Research Scientist
Telefonica Research
www.karenchurch.com
@karenchurch
What Does it Mean for a System to be Useful? An Exploratory Study of Usefulnesscraigmmacdonald
HCI has always focused on designing useful and usable interactive systems, but usability has dominated the field while research on usefulness has been largely absent. With user experience (UX) emerging as a dominant paradigm, it is necessary to consider the meaning of usefulness for modern computing contexts. This paper describes the results of an exploratory study of usefulness and its relation to contextual and experiential factors. The results show that a system’s usefulness is shaped by the context in which it is used, usability is closely linked to usefulness, usefulness may have both pragmatic and hedonic attributes, and usefulness is critical in defining users’ overall evaluation of a system (i.e., its goodness). We conclude by discussing the implications of this research and describing plans for extending our understanding of usefulness in other settings.
Paper presented at the 2014 ACM Conference on Designing Interactive Systems (DIS 2014).
User Experience is Everything (and Vice Versa): Lessons for Libraries and Inf...craigmmacdonald
User Experience is more than just a buzzword; it is a design philosophy that puts “users” at the center and recognizes that providing them with opportunities for enjoyment is just as important (if not more) than eliminating pain and frustration in their interactions with interfaces (both digital and analog). By de-constructing the cognitive and emotional dimensions of UX and tracing how UX has evolved from its historical roots in the Human-Computer Interaction (HCI) discipline to its present-day application across multiple domains and industries (including Library and Information Science), this talk will inspire information professionals and their organizations to take a more UX-centric approach to the design and/or evaluation of their technologies, services, and spaces.
What Can IA Learn from LIS? Perspectives from LIS Educationcraigmmacdonald
Morville & Rosenfeld's "Information Architecture for the World Wide Web" positioned IA as an approach to web/interface design that is deeply embedded in, and strongly informed by, the LIS discipline. To re-consider of the impact of the LIS discipline on the IA profession, this presentation (and a subsequent paper) reports the preliminary results of an analysis of syllabi of information architecture courses offered by graduate schools of Library and Information Science in the United States and Canada.
Presented for the Teaching IA workshop at the 2014 IA Summit in San Diego, CA.
Designing and deploying mobile user studies in the wild: a practical guideKaren Church
This tutorial was presented as part of Mobile HCI 2012 in San Francisco on 19th September 2012. The tutorial aims to provide a practical guide to conducting mobile field studies, based on the learning outcomes of the research I've been involved in while working as a Research Scientist at Telefonica Research, Barcelona. I cover how to design effective mobile field studies, the importance of mobile prototyping, the impact of various design choices on the study setup and deployment, how to engage participants, and how to avoid ethical and legal issues. I've also tried to include listings of useful resources for those interested in conducting mobile field studies of their own.
More details: http://mm2.tid.es/mhcitutorial/
Karen Church
Research Scientist
Telefonica Research
www.karenchurch.com
@karenchurch
A good search engine is one where users come very regularly, type their queries, get their results, and leave quickly. In terms of user engagement metrics from web analytics, this translates to a low dwell time, often a low CTR, but a very high return rate. But user engagement is not just about this. User engagement is a multifaceted, complex phenomenon, giving rise to a number of approaches for its measurement: self-reporting (e.g., questionnaires); observational methods (e.g., facial expression analysis, desktop actions); and of course web analytics using online behavior metrics. These methods represent various trade-offs between the scale of the data analyzed and the depth of understanding. For instance, surveys are hardly scalable but offer rich, qualitative insights, whereas click data can be collected on a large scale but are more difficult to analyze. This talk will present various efforts aiming at combining approaches to measure engagement and seeking to provide insights into what makes an engaging experience. The talk will focus on what makes users click or not click, and what this means in terms of user engagement.
SIGIR 2013 Industry Track: Keynote by Ricardo Baeza-Yates - VP, Yahoo! Research Europe & Latin America
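The three behavioral metrics named in the abstract above (dwell time, CTR, return rate) can be sketched from a session log. The log schema and numbers below are invented for illustration, not from any real analytics system:

```python
from collections import defaultdict

# Toy search-session records: (user, start_sec, end_sec, results_shown, clicks).
# Field names and values are illustrative assumptions.
log = [
    ("u1", 0,   20,  10, 1),
    ("u1", 400, 415, 10, 0),
    ("u2", 0,   300, 10, 4),
]

def engagement_summary(log):
    dwell_times, shown, clicked = [], 0, 0
    sessions = defaultdict(int)
    for user, start, end, n_shown, n_clicks in log:
        dwell_times.append(end - start)   # time spent in the session
        shown += n_shown
        clicked += n_clicks
        sessions[user] += 1
    return {
        "avg_dwell_time": sum(dwell_times) / len(dwell_times),
        "ctr": clicked / shown,
        # fraction of users who came back for at least one more session
        "return_rate": sum(1 for c in sessions.values() if c > 1) / len(sessions),
    }

print(engagement_summary(log))
```

A "good" search engine in the sense above would show a low `avg_dwell_time` and `ctr` combined with a high `return_rate`.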
A Journey into Evaluation: from Retrieval Effectiveness to User EngagementMounia Lalmas-Roelleke
Slides of my presentation at SPIRE 2015 at King's College London.
The talk builds on my work on user engagement, and proposes to move beyond clicks and relevance in information retrieval evaluation, towards user engagement. The talk presents some of my work to show how this could be done. There are two focuses: moving from intra- to inter-session evaluation, and moving from small- to large-scale evaluation. These focuses acknowledge that (1) happy users come back, and (2) we need to properly identify who the happy users are. I hope that this talk will bring new perspectives to those building search systems and wanting to evaluate them, beyond their retrieval effectiveness.
Tutorial on metrics of user engagement -- Applications to Search & E- commerceMounia Lalmas-Roelleke
User engagement plays a central role in companies operating online services, such as search engines, news portals, e-commerce sites, and social networks. A main challenge is to leverage collected knowledge about the daily online behavior of millions of users to understand what engages them short-term and, more importantly, long-term. The most common way that engagement is measured is through various online metrics, acting as proxy measures of user engagement. This tutorial reviews these metrics and proposes a taxonomy of metrics. As case studies, it focuses on two types of services, search and e-commerce. The tutorial also discusses how to develop better machine learning models to optimize online metrics, and how to design experiments to test these models.
This tutorial was given by Mounia Lalmas from Spotify and Liangjie Hong from Etsy Inc.
This tutorial was presented at WSDM 2018 (11th ACM International Conference on Web Search and Data Mining). It is the first delivery of this tutorial, so feedback and comments are welcome. We intend to continue working on this material.
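One natural axis for a taxonomy of engagement metrics is whether a metric is computed within a single session or across sessions. The grouping below is a small sketch based on metrics that appear in these abstracts; the category names and assignments are my assumptions, not the tutorial's actual taxonomy:

```python
# Illustrative taxonomy: intra-session metrics describe a single visit,
# inter-session metrics describe behavior across visits (loyalty).
METRIC_TAXONOMY = {
    "intra_session": ["click-through rate", "dwell time", "bounce rate"],
    "inter_session": ["return rate", "absence time", "days active per month"],
}

def metric_scope(metric):
    """Return which scope a metric belongs to in this toy taxonomy."""
    for scope, metrics in METRIC_TAXONOMY.items():
        if metric in metrics:
            return scope
    raise KeyError(metric)

print(metric_scope("absence time"))
```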
Slides for keynote "Social Media and AI: Don’t forget the users" at WWW 2017 workshop "International Workshop on Modeling Social Media: Machine Learning and AI for Modeling and Analyzing Social Media". I argue that we need to consider two things: the source of what we use to build good algorithms, and whether users are impacted the way we want to impact them. The talk is based on two use cases around providing diversity (something many of us believe is good) to users:
1. Engaging through diversity: serendipity (same algorithm, different sources)
2. Engaging through diversity: awareness (effective algorithm, perception)
My goal is to say, we may have the best AI, but we may get it wrong if we forget the users. I don't have answers, but it is important that we ask the right questions in today's world.
Evaluating the search experience: from Retrieval Effectiveness to User Engage...Mounia Lalmas-Roelleke
These are my slides for my presentation at CLEF 2015, held in Toulouse. I discuss evaluation in the context of search, and how to move towards looking at the long-term effect of the search experience. I do this through the concept of absence time. I present examples for search but also in the context of mobile advertising. My aim is to frame evaluation within user engagement.
In the online world, user engagement refers to the quality of the user experience that emphasizes the phenomena associated with wanting to use an application longer and frequently. This talk looks at the role of Big Data in measuring user engagement. It does so through two case studies on using absence time, within sessions and across sessions.
Presentation at "Data-Driven Business Day" at Strata + HW Barcelona 2014.
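Absence time, as used in the two case studies above, is simply the gap between a user's consecutive sessions. A minimal sketch of computing it from timestamps (the data and time format are invented for illustration):

```python
from datetime import datetime

# Toy per-user session start times; absence time is the gap between
# consecutive sessions, a proxy for how quickly a satisfied user returns.
sessions = {
    "u1": ["2014-11-01 09:00", "2014-11-01 21:00", "2014-11-03 09:00"],
    "u2": ["2014-11-01 10:00"],
}

def absence_times_hours(sessions):
    fmt = "%Y-%m-%d %H:%M"
    gaps = {}
    for user, stamps in sessions.items():
        times = sorted(datetime.strptime(s, fmt) for s in stamps)
        gaps[user] = [
            (b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])
        ]
    return gaps

print(absence_times_hours(sessions))
```

The intuition in the talk is that shorter absence times indicate a more engaged (happier) user; a user with a single session, like `u2` here, contributes no gap at all.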
These are the slides of the tutorial Liangjie Hong and I gave at The Web Conference in San Francisco, 2019. Full details of the tutorial and previous instances can be found at https://onlineuserengagement.github.io/.
Tutorial abstract:
User engagement plays a central role in companies operating online services, such as search engines, news portals, e-commerce sites, entertainment services, and social networks. A main challenge is to leverage collected knowledge about the daily online behavior of millions of users to understand what engages them short-term and, more importantly, long-term. Two critical steps in improving user engagement are metrics and their optimization. The most common way that engagement is measured is through various online metrics, acting as proxy measures of user engagement. This tutorial will review these metrics, their advantages and drawbacks, and their appropriateness to various types of online services. Once metrics are defined, how to optimize them becomes the key issue. We will survey methodologies, including machine learning models and experimental designs, that are used to optimize these metrics directly or indirectly. As case studies, we will focus on four types of services: news, search, entertainment, and e-commerce.
We will end with lessons learned and a discussion on the most promising research directions.
Presenters:
Liangjie Hong, Director of Engineering, Data Science and Machine Learning at Etsy Inc.
Mounia Lalmas, Director of Research at Spotify, and Head of Tech Research in Personalization.
CoMo Game Dev - usability and user experience methods Isa Jahnke
This month we'll be hearing from Dr. Isa Jahnke, Director of the User Experience and Learning Design Lab at the University of Missouri. She will be discussing usability, user experience, and how to get the most out of focus groups and testing nights for the CoMO game development community.
An Engaging Click ... or how can user engagement measurement inform web searc...Mounia Lalmas-Roelleke
A good search engine is one where users come very regularly, type their queries, get their results, and leave quickly. In terms of user engagement metrics from web analytics, this translates to a low dwell time, often a low CTR, but a very high return rate. But user engagement is not just about this. User engagement is a complex phenomenon that requires a number of approaches for its measurement: we can ask users about their experience through questionnaires, we can observe where they look or move the mouse, and we can calculate various web analytics metrics. The aim of this talk is to discuss how current work on user engagement, not necessarily specific to web search, can provide insights into putting search into a broader perspective.
This presentation is part of Search Solutions 2013, 27 November 2013, at the BCS HQ. A first version of this talk was given at the SIGIR 2013 Industry Day by Ricardo Baeza-Yates.
Self Service Online Research - online communities for research and insightsStephen Thompson
In this webcast, Stephen Thompson (EVP, Ramius) and Lenny Murphy (Chief Editor and Principal Consultant, Greenbook):
• Discuss how the online research world is shifting and how that is expected to develop in the near future. This includes some (very) early stats from the Greenbook GRIT report for 2014.
• Round up some of the new software tools available for gathering insights under the categories of Surveys / Microsurveys, Crowdsourcing, Analytics, Big Data, Video, Mobile and Online Communities.
• Work through some examples of how Ramius’ new platform, Recollective, is being used by marketing and research professionals to build on-demand insight communities and conduct rapid, qualitative research online.
HR Webinar: Technology Opens New Paths for HR and BusinessAscentis
Technology is EVERYWHERE! We see it in our work, in our personal lives, and certainly in HR. But can technology help HR guide and align the workforce to organization goals? Employers, HR, and workers will experience the newest technologies soon; can HR leverage them for the betterment of the organization’s culture?
Join us for a look into the trending technologies that are allowing HR to build new paths to workforce development and engagement.
We will examine the connection between the organizational directives, how the workforce can accomplish those goals through continuing collaboration and innovation, and how HR can be the change agent, using technology, to impact the working and personal lives of the worker. We will look into the life cycle of the worker and discuss the various technologies supporting the various areas of Human Resources programs and what considerations might be needed to use that technology.
And lastly, we will look at the NEW corporation mission and structure that is taking hold across the globe.
Today, I had the big honor to give the opening keynote at the 8th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2020), being held virtually. HCOMP is the home of the human computation and crowdsourcing community working on frameworks, methods and systems that bring together people and machine intelligence to achieve better results. I decided to totally revamp a previous talk to focus on so-called "human in the loop" and showed how we incorporate human in the loop to personalise at scale, with some of the research at Spotify. Sharing the slides for general interests.
These are the slides of my invited talk at the REVEAL workshop at RecSys 2019. The workshop focuses on the offline evaluation for recommender systems, and this year’s focus was on Reinforcement Learning. Although not directly related to reinforcement learning, it is clear that there are connections to what research in reinforcement learning is attempting to achieve (defining the rewards) and metrics that are optimized by recommender systems. I presented various works and personal thoughts on how to develop metrics of user engagement, which recommender systems can optimize for. An important message was that, for recommender systems to work both in the short and the long-term, it is important to consider the heterogeneity of both user and content to formalise the notion of engagement, and in turn design the appropriate metrics to capture these and optimize for. One way to achieve this is to follow these four steps: 1) Understanding intents; 2) Optimizing for the right metric; 3) Acting on segmentation; and 4) Thinking about diversity.
A previous version of this talk was given at UMAP 2019. See https://www.slideshare.net/mounialalmas/metrics-engagement-personalization
These are the slides of the keynote I gave at UMAP 2019 (User Modeling, Adaptation and Personalization) held in Larnaca, June 2019. The theme of the conference this year was "Making Personalization Transparent: Giving Control Back To The User". I looked at the 1st part for my talk.
When users interact with the recommendations served to them, they leave behind fine-grained traces of interaction patterns, which can be leveraged to predict how satisfying their experience was. This talk will present various works and personal thoughts on how to measure user engagement. It will discuss the definition and development of metrics of user satisfaction that can be used as proxy of user engagement, and will include cases of good, bad and ugly scenarios. An important message will be to show that, to make personalization transparent, it is important to consider the heterogeneity of both user and content to formalise the notion of satisfaction, and in turn design the appropriate satisfaction metrics to capture these. One way to do this is to consider the following angles: 1) Understanding intents; 2) Optimizing for the right metric; 3) Acting on segmentation; and 4) Thinking about diversity.
These are the slides I used for my talk at the BIG Track at the Web Conference 2019. This is a very similar talk to the one I gave at the celebration kickoff of Chalmers AI Research Centre in Gothenburg in March 2019. It has a bit more detail and reflects some of the most recent work we are doing at Spotify Research. I am posting these again as people are asking for the slides. Thank you.
These are the slides of my talk at the 2019 Netflix Workshop on Personalization, Recommendation and Search (PRS). This talk is based on previous talks on research we are doing at Spotify, but here I focus on the work we do on personalizing Spotify Home, with respect to success, intent & diversity. The link to the workshop is https://prs2019.splashthat.com/. This is research from various people at Spotify, and has been published at RecSys 2018, CIKM 2018 and WWW (The Web Conference) 2019.
These are the slides of a talk about some of our research at Spotify, as part of the celebration kickoff of Chalmers AI Research Centre in Gothenburg. I always like to make a story in my talk, and this time I wanted to reflect on the "push" (think recommender system) and "pull" (think search) paradigms. I am using this quote from Nicholas Belkin and Bruce Croft from their Communications of the ACM article published in 1992 to frame my story: "We conclude that information retrieval and information filtering are indeed two sides of the same coin. They work together to help people get the information needed to perform their tasks."
At the BCS Search Solutions 2018, I gave a talk about work on search we are doing at Spotify. The talk described what search means in the context of Spotify, how it differs what we know about search, and the challenges associated with understanding user intents and mindsets in an "entertainment" context. The talk also discussed various efforts at Spotify to understand why users submit search queries, what they expect, how they assess their search experience, and how Spotify responds to these search queries. This is work done with many colleagues at Spotify in Boston, London, New York and Stockholm, and our wonderful summer interns.
An introduction to system-oriented evaluation in Information RetrievalMounia Lalmas-Roelleke
Slides for my lecture on IR evaluation, presented at 11th European Summer School in Information Retrieval (ESSIR 2017) at Universitat Pompeu Fabra, Barcelona.
These slides were based on
1. Evaluation lecture @ QMUL; Thomas Roelleke & Mounia Lalmas
2. Lecture 8: Evaluation @ Stanford University; Pandu Nayak & Prabhakar Raghavan
3. Retrieval Evaluation @ University of Virginia; Hongning Wang
4. Lectures 11 and 12 on Evaluation @ Berkeley; Ray Larson
5. Evaluation of Information Retrieval Systems @ Penn State University; Lee Giles
Textbooks:
1. Information Retrieval, 2nd edition, C.J. van Rijsbergen (1979)
2. Introduction to Information Retrieval, C.D. Manning, P. Raghavan & H. Schuetze (2008)
3. Modern Information Retrieval: The Concepts and Technology behind Search, 2nd ed; R. Baeza-Yates & B. Ribeiro-Neto (2011)
Friendly, Appealing or Both? Characterising User Experience in Sponsored Sear...Mounia Lalmas-Roelleke
Many of today’s websites have recognised the importance of mobile friendly pages to keep users engaged and to provide a satisfying user experience. However, next to the experience provided by the sites themselves, advertisements, when clicked, present users with landing pages that are not necessarily mobile friendly. We explore what type of features are able to characterise the mobile friendliness of sponsored search ad landing pages. To have a complete understanding of the mobile ad experience in terms of layout and visual appearance, we also explore the notion of the ad page aesthetic appeal. We design and collect annotations for both dimensions on a large set of ads, and find that mobile friendliness and aesthetics represent different notions.
We perform a comprehensive study of the effectiveness of over 120 features on the tasks of friendliness and aesthetics prediction. We find that next to general page size, HTML, and resource usage based features, several features based on the visual composition of landing pages are important to determine mobile friendliness and aesthetics. We demonstrate the additional benefit of these various types of features by comparing against the mobile friendliness guidelines provided by W3C. Finally, we use our models to determine the state of landing page mobile friendliness and aesthetics on a large sample of advertisements of a major internet company.
These are the slides of work presented at WWW 2017 in Perth:
M. Bron, M. Redi, F. Silvestri, H. Evans, M. Chute and M. Lalmas. Friendly, Appealing or Both? Characterising User Experience in Sponsored Search Landing Pages, 26th International World Wide Web Conference (WWW 2017), Industrial Track, Perth, Australia, 3-7 April, 2017.
Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such a context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. In this talk, I will describe work at Yahoo aiming at understanding the user experience with ads in the mobile context and building learning frameworks to identify and account for ads of low quality while ensuring a return on investment to advertisers.
Slides for the Invited Talk at BigData Innovators Gathering (BIG), co-located with WWW 2017, Perth 2017 (https://big2017.org). Earlier versions of this talk were given at various venues in London.
Describing Patterns and Disruptions in Large Scale Mobile App Usage DataMounia Lalmas-Roelleke
The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interest from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.
Slides for paper presented at TempWeb 2017:
S. Van Canneyt, M. Bron, A. Haines and M. Lalmas. Describing Patterns and Disruptions in Large Scale Mobile App Usage Data, 7th Temporal Web Analytics Workshop (TempWeb), International World Wide Web Conference (WWW 2017), Industrial Track, Perth, Australia, 3-7 April, 2017.
Story-focused Reading in Online News and its Potential for User EngagementMounia Lalmas-Roelleke
We study the news reading behaviour of several hundred thousand users on 65 highly-visited news sites. We focus on a specific phenomenon: users reading several articles related to a particular news development, which we call story-focused reading. Our goal is to understand the effect of story-focused reading on user engagement and how news sites can support this phenomenon. We found that most users focus on stories that interest them and that even casual news readers engage in story-focused reading. During story-focused reading, users spend more time reading and a larger number of news sites are involved. In addition, readers employ different strategies to find articles related to a story.
We also analyse how news sites promote story-focused reading, by looking at how they link their articles to related content published by them, or by other sources. The results show that providing links to related content leads to a higher engagement of the users, and that this is the case even for links to external sites. We also show that the performance of links can be affected by their type, their position, and how many of them are present within an article.
This work co-authored with J. Lehmann, C. Castillo and R. Baeza-Yates has been published in the Journal of The Association For Information Science And Technology (JASIST), available online in May 2016. The work was presented at the Yahoo TechPulse Annual conference in December 2016.
Native advertising is a specific form of online advertising where ads replicate the look and feel of their serving platform. In such a context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. This talk presents an overview of work aimed at understanding the user pre-click experience of ads and building a learning framework to identify ads with low pre-click quality.
Work in collaboration with Ke (Adam) Zhou, Miriam Redi and Andy Haines. A version of this work was presented at WWW in Montreal, April 2016.
Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. In this work, we explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. We design a learning framework to predict the pre-click quality of native ads. More specifically, we look at detecting offensive native ads, showing that, to quantify ad quality, ad offensive user feedback rates are more reliable than the commonly used click-through rate metrics. We then conduct a crowd-sourcing study to identify which criteria drive user preferences in native advertising. We translate these criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. We show that our model is very effective in detecting offensive ads, and provide in-depth insights on how different features affect ad quality. Finally, we deploy a preliminary version of such model and show its effectiveness in the reduction of the offensive ad feedback rate.
These are the slides of our WWW 2016 paper. This is work with Ke (Adam) Zhou, Miriam Redi and Andy Haines.
Improving Post-Click User Engagement on Native Ads via Survival AnalysisMounia Lalmas-Roelleke
In this paper we focus on estimating the post-click engagement on native ads by predicting the dwell time on the corresponding ad landing pages. To infer relationships between features of the ads and dwell time, we resort to survival analysis techniques, which allow us to estimate the distribution of the length of time that the user will spend on the ad. This information is then integrated into the ad ranking function with the goal of promoting the rank of ads that are likely to be clicked and consumed by users (dwell time greater than a given threshold). The online evaluation over live traffic shows that considering post-click engagement has a consistently positive effect: CTR increases, the number of bounces decreases, and the average dwell time increases, hence leading to a better user post-click experience.
These are the slides of our WWW 2016 paper. This is work with Nicola Barbieri and Fabrizio Silvestri.
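The core idea of survival analysis here is to estimate the distribution of dwell time while correctly handling censored observations (visits where the user's departure was never observed). A minimal Kaplan-Meier sketch, pure stdlib; this illustrates the general technique, not the paper's actual model, and the sample dwell times are invented:

```python
# Minimal Kaplan-Meier estimator for landing-page dwell times.
def kaplan_meier(observations):
    """observations: list of (dwell_seconds, event), event=1 if the user
    was seen leaving the page, 0 if censored. Returns [(t, S(t))]."""
    obs = sorted(observations)
    n_at_risk = len(obs)
    survival, curve = 1.0, []
    i = 0
    while i < len(obs):
        t = obs[i][0]
        deaths = removed = 0
        while i < len(obs) and obs[i][0] == t:  # group ties at time t
            deaths += obs[i][1]
            removed += 1
            i += 1
        if deaths:  # censored-only times do not change the curve
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

def prob_dwell_exceeds(curve, threshold):
    """P(dwell > threshold) under the fitted survival curve."""
    s = 1.0
    for t, surv in curve:
        if t <= threshold:
            s = surv
    return s

curve = kaplan_meier([(5, 1), (8, 1), (12, 0), (20, 1), (30, 0)])
print(curve)
```

`prob_dwell_exceeds(curve, threshold)` is the kind of quantity that could feed an ad ranking function: the estimated probability that a click on the ad leads to a dwell time above the threshold.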
Promoting Positive Post-click Experience for In-Stream Yahoo Gemini UsersMounia Lalmas-Roelleke
Click-through rate (CTR) is the most common metric used to assess the performance of an online advert; another measure of an advert's performance is the user post-click experience. In this paper, we describe the method we have implemented in Yahoo Gemini to measure the post-click experience on Yahoo mobile news streams via an automatic analysis of advert landing pages. We measure the post-click experience by means of two well-known metrics, dwell time and bounce rate. We show that these metrics can be used as proxies of an advert's post-click experience, and that a negative post-click experience has a negative effect on user engagement and future ad clicks. We then put forward an approach that analyses advert landing pages, and show how these can affect dwell time and bounce rate. Finally, we develop a prediction model for advert quality based on dwell time, which was deployed on the Yahoo mobile news stream app running on iOS. The results show that, using dwell time as a proxy of post-click experience, we can prioritise higher quality ads. We demonstrate the impact of this on users via A/B testing.
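Dwell time and bounce rate per ad can be sketched directly from a post-click log. The schema, bounce threshold, and numbers below are illustrative assumptions, not the paper's actual definitions:

```python
# Toy post-click log: (ad_id, dwell_seconds) for each ad click; a visit
# counts as a bounce if the user leaves within the threshold.
BOUNCE_THRESHOLD = 5  # seconds; an assumed cut-off for illustration

def post_click_metrics(clicks):
    by_ad = {}
    for ad, dwell in clicks:
        by_ad.setdefault(ad, []).append(dwell)
    return {
        ad: {
            "avg_dwell": sum(d) / len(d),
            "bounce_rate": sum(1 for x in d if x < BOUNCE_THRESHOLD) / len(d),
        }
        for ad, d in by_ad.items()
    }

metrics = post_click_metrics([("a1", 2), ("a1", 40), ("a2", 90), ("a2", 120)])
print(metrics)
```

Under these proxies, an ad with a long average dwell and a low bounce rate (like `a2` here) would be treated as offering a better post-click experience.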
From “Selena Gomez” to “Marlon Brando”: Understanding Explorative Entity SearchMounia Lalmas-Roelleke
Slides of our paper. Work with Iris Miliaraki and Roi Blanco. Paper published at 24th International World Wide Web Conference (WWW 2015), Florence, Italy.
Abstract: Consider a user who submits a search query "Shakira" having a specific search goal in mind (such as her age) but at the same time willing to explore information about other entities related to her, such as comparable singers. In previous work, a system called Spark was developed to provide such a search experience. Given a query submitted to the Yahoo search engine, Spark provides related entity suggestions for the query, exploiting, among other resources, public knowledge bases from the Semantic Web. We refer to this search scenario as explorative entity search. The effectiveness and efficiency of the approach have been demonstrated in previous work. However, the way users interact with these related entity suggestions, and whether this interaction can be predicted, have not been studied. In this paper, we perform a large-scale analysis of how users interact with the entity results returned by Spark. We characterize the users, queries and sessions that appear to promote an explorative behavior. Based on this analysis, we develop a set of query and user-based features that reflect the click behavior of users and explore their effectiveness in the context of a prediction task.
Social Media News Communities: Gatekeeping, Coverage, and Statement BiasMounia Lalmas-Roelleke
We examine biases in online news sources and social media communities around them. To that end, we introduce unsupervised methods considering three types of biases: selection or "gatekeeping" bias, coverage bias, and statement bias, characterizing each one through a series of metrics. Our results, obtained by analyzing 80 international news sources during a two-week period, show that biases are subtle but observable, and follow geographical boundaries more closely than political ones. We also demonstrate how these biases are to some extent amplified by social media.
Aggregating search results from a variety of diverse verticals such as news, images, videos and Wikipedia into a single interface is a popular web search presentation paradigm. Although several aggregated search (AS) metrics have been proposed to evaluate AS result pages, their properties remain poorly understood. In this paper, we compare the properties of existing AS metrics under the assumptions that (1) queries may have multiple preferred verticals; (2) the likelihood of each vertical preference is available; and (3) the topical relevance assessments of results returned from each vertical are available. We compare a wide range of AS metrics on two test collections. Our main criteria of comparison are (1) discriminative power, which represents the reliability of a metric in comparing the performance of systems, and (2) intuitiveness, which represents how well a metric captures the various key aspects to be measured (i.e., various aspects of a user’s perception of AS result pages). Our study shows that the AS metrics that capture key AS components (e.g., vertical selection) have several advantages over other metrics. This work sheds new light on the further development and application of AS metrics.
Penguins in Sweaters, or Serendipitous Entity Search on User-generated ContentMounia Lalmas-Roelleke
In many cases, when browsing the Web users are searching for specific information or answers to concrete questions. Sometimes, though, users find unexpected, yet interesting and useful results, and are encouraged to explore further. What makes a result serendipitous? We propose to answer this question by exploring the potential of entities extracted from two sources of user-generated content - Wikipedia, a user-curated online encyclopedia, and Yahoo! Answers, a more unconstrained question/answering forum - in promoting serendipitous search. In this work, the content of each data source is represented as an entity network, which is further enriched with metadata about sentiment, writing quality, and topical category. We devise an algorithm based on lazy random walk with restart to retrieve entity recommendations from the networks. We show that our method provides novel results from both datasets, compared to standard web search engines. However, unlike previous research, we find that choosing highly emotional entities does not increase user interest for many categories of entities, suggesting a more complex relationship between topic matter and the desirable metadata attributes in serendipitous search.
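A lazy random walk with restart can be sketched as a power iteration: at each step, probability mass restarts at the seed entity, and the remaining mass either stays put (the lazy half) or follows a random edge. The toy graph, entity names and parameter values below are illustrative assumptions, not the paper's actual data or settings:

```python
# Lazy random walk with restart on a small entity graph; nodes with
# high stationary probability are candidate entity recommendations.
def lazy_rwr(graph, seed, restart=0.15, iters=100):
    nodes = list(graph)
    p = {n: float(n == seed) for n in nodes}  # start all mass on the seed
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for n in nodes:
            nxt[n] += (1 - restart) * 0.5 * p[n]       # lazy: stay in place
            for nb in graph[n]:                         # move along an edge
                nxt[nb] += (1 - restart) * 0.5 * p[n] / len(graph[n])
        nxt[seed] += restart                            # restart at the seed
        p = nxt
    return p

graph = {
    "penguin": ["antarctica", "sweater"],
    "sweater": ["penguin", "wool"],
    "antarctica": ["penguin"],
    "wool": ["sweater"],
}
scores = lazy_rwr(graph, seed="penguin")
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Entities close to the seed in the network accumulate more probability mass, so they rank higher as recommendations; mass on distant nodes decays with each extra hop.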
To be or not be engaged: What are the questions (to ask)?
1. To be or not be engaged: What are the questions (to ask)?
Mounia Lalmas
Yahoo! Labs Barcelona
mounia@acm.org
2. About me
• Since January 2011: Visiting Principal Scientist at Yahoo! Labs Barcelona
  – User engagement, social media, search
• 1999-2008: Lecturer (assistant professor) to Professor at Queen Mary, University of London
  – XML retrieval and evaluation (INEX)
• 2008-2010: Microsoft Research/RAEng Research Professor at the University of Glasgow
  – Quantum theory to model information retrieval
Blog: labtomarket.wordpress.com
3. Why is it important to engage users?
• In today's wired world, users have enhanced expectations about their interactions with technology … resulting in increased competition amongst the purveyors and designers of interactive systems.
• In addition to utilitarian factors, such as usability, we must consider the hedonic and experiential factors of interacting with technology, such as fun, fulfillment, play, and user engagement.
• In order to make engaging systems, we need to understand what user engagement is and how to measure it.
4. Why is it important to measure and interpret user engagement well?
[Figure: click-through rate (CTR)]
5. Outline
• What is user engagement?
• What are the characteristics of user engagement?
• How to measure user engagement?
• What are the questions to ask?
saliency, interesting, serendipity, relevance, sentiment, reading, news, social media, user generated content, automatic linking, aesthetics
8. What is user engagement?
User engagement is a quality of the user experience that emphasizes the positive aspects of interaction – in particular the fact of being captivated by the technology (Attfield et al, 2011).
An emotional, cognitive and behavioural connection that exists, at any point in time and over time, between a user and a technological resource:
• user feelings: happy, sad, excited, …
• user mental states: involved, lost, concentrated, …
• user interactions: click, read, comment, buy, …
9. Considerations in the measurement of user engagement
• Short term (within session) and long term (across multiple sessions)
• Laboratory vs. field studies
• Subjective vs. objective measurement
• Large scale (e.g., dwell time of 100,000 people) vs. small scale (e.g., gaze patterns of 10 people)
• User engagement as process vs. as product
One is not better than the other; it depends on what the aim is.
11. Characteristics of user engagement (I)
Focused attention (Webster & Ho, 1997; O'Brien, 2008)
• Users must be focused to be engaged
• Distortions in the subjective perception of time are used to measure it
Positive affect (O'Brien & Toms, 2008)
• Emotions experienced by the user are intrinsically motivating
• An initial affective "hook" can induce a desire for exploration, active discovery or participation
Aesthetics (Jacques et al, 1995; O'Brien, 2008)
• Sensory, visual appeal of the interface stimulates the user & promotes focused attention
• Linked to design principles (e.g. symmetry, balance, saliency)
Endurability (Read, MacFarlane, & Casey, 2002; O'Brien, 2008)
• People remember enjoyable, useful, engaging experiences and want to repeat them
• Reflected in e.g. the propensity of users to recommend an experience/a site/a product
12. Characteristics of user engagement (II)
Novelty (Webster & Ho, 1997; O'Brien, 2008)
• Novelty, surprise, unfamiliarity and the unexpected
• Appeals to users' curiosity; encourages inquisitive behavior and promotes repeated engagement
Richness and control (Jacques et al, 1995; Webster & Ho, 1997)
• Richness captures the growth potential of an activity
• Control captures the extent to which a person is able to achieve this growth potential
Reputation, trust and expectation (Attfield et al, 2011)
• Trust is a necessary condition for user engagement
• An implicit contract among people and entities which is more than technological
Motivation, interests, incentives, and benefits (Jacques et al., 1995; O'Brien & Toms, 2008)
• Difficulties in setting up "laboratory"-style experiments
• Why should users engage?
14. Measuring user engagement
Measures | Characteristics
• Self-reported engagement – questionnaire, interview, report, product reaction cards, think-aloud | Subjective; short- and long-term; lab and field; small-scale; product outcome
• Cognitive engagement – task-based methods (time spent, follow-on task); physiological measures (e.g. EEG, SCL, fMRI, eye tracking, mouse-tracking) | Objective; short-term; lab and field; small-scale and large-scale; process outcome
• Interaction engagement – web analytics metrics + models | Objective; short- and long-term; field; large-scale; process outcome
15. Large-scale measurements of user engagement – Web analytics
Intra-session measures:
• Dwell time / session duration
• Play time (video)
• Mouse movement
• Click-through rate (CTR)
• Number of pages viewed (click depth)
• Conversion rate (mostly for e-commerce)
• Number of UGC items (comments)
Inter-session measures:
• Fraction of return visits
• Time between visits (inter-session time, absence time)
• Total view time per month (video)
• Lifetime value (number of actions)
• Number of sessions per unit of time
• Total usage time per unit of time
• Number of friends on site (social networks)
• Number of UGC items (comments)
Notes:
• Intra-session engagement measures our success in attracting the user to remain on our site for as long as possible.
• Inter-session engagement can be measured directly or, for commercial sites, by observing lifetime customer value.
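As an illustrative sketch (not from the talk), the intra-session metric "dwell time / session duration" and the inter-session metric "absence time" can be computed from a raw page-view log once views are segmented into sessions. The log, user IDs, and the 30-minute session-gap heuristic below are all assumptions:

```python
from datetime import datetime

# Hypothetical page-view log: (user, timestamp) pairs, one per page view.
log = [
    ("u1", "2013-06-01 10:00:00"), ("u1", "2013-06-01 10:03:20"),
    ("u1", "2013-06-04 09:00:00"), ("u1", "2013-06-04 09:01:00"),
    ("u2", "2013-06-01 12:00:00"),
]

SESSION_GAP = 30 * 60  # views > 30 min apart start a new session (assumed heuristic)

def sessions(views):
    """Split one user's view timestamps into sessions (lists of datetimes)."""
    ts = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in views)
    out, cur = [], [ts[0]]
    for a, b in zip(ts, ts[1:]):
        if (b - a).total_seconds() > SESSION_GAP:
            out.append(cur)
            cur = []
        cur.append(b)
    out.append(cur)
    return out

by_user = {}
for user, t in log:
    by_user.setdefault(user, []).append(t)

for user, views in sorted(by_user.items()):
    sess = sessions(views)
    # Intra-session: dwell time / session duration (seconds per session).
    durations = [(s[-1] - s[0]).total_seconds() for s in sess]
    # Inter-session: absence time between consecutive sessions.
    absences = [(b[0] - a[-1]).total_seconds() for a, b in zip(sess, sess[1:])]
    print(user, len(sess), durations, absences)
```

The same segmentation also yields click depth (views per session) and sessions per unit of time.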
17. Signals – Signals – Signals: Five studies
Self-reported engagement ←→ Interaction engagement
WHAT ARE THE QUESTIONS TO ASK?
18. STUDY I
• Domain: entertainment news
• Study: saliency
• Measurement: focused attention and affect
+ Lori McCay-Peet, Vidhya Navalpakkam
19. • How the visual catchiness (saliency) of "relevant" information impacts user engagement metrics such as focused attention and emotion (affect)
  – focused attention refers to the exclusion of other things
  – affect relates to the emotions experienced during the interaction
• Saliency model of visual attention developed by (Itti & Koch, 2000)
Self-reported engagement
20. Manipulating saliency
[Figure: web page screenshot and its saliency maps, salient condition vs. non-salient condition]
(McCay-Peet et al, 2012)
21. Study design
• 8 tasks = finding the latest news or headline on a celebrity or entertainment topic
• Affect measured pre- and post-task using the Positive (e.g. "determined", "attentive") and Negative (e.g. "hostile", "afraid") Affect Schedule (PANAS)
• Focused attention measured with the 7-item focused attention subscale (e.g. "I was so involved in my news tasks that I lost track of time", "I blocked things out around me when I was completing the news tasks") and perceived time
• Interest level in topics (pre-task) and questionnaire (post-task), e.g. "I was interested in the content of the web pages", "I wanted to find out more about the topics that I encountered on the web pages"
• 189 (90+99) participants from Amazon Mechanical Turk
22. PANAS (10 positive items and 10 negative items)
• "You feel this way right now, that is, at the present moment" [1 = very slightly or not at all; 2 = a little; 3 = moderately; 4 = quite a bit; 5 = extremely] [randomize items]
• Negative items: distressed, upset, guilty, scared, hostile, irritable, ashamed, nervous, jittery, afraid
• Positive items: interested, excited, strong, enthusiastic, proud, alert, inspired, determined, attentive, active
(Watson, Clark & Tellegen, 1988)
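The conventional PANAS scoring — summing the ten item ratings per subscale — can be sketched as follows; the respondent data is hypothetical:

```python
# Item lists as on the slide (Watson, Clark & Tellegen, 1988).
POSITIVE = ["interested", "excited", "strong", "enthusiastic", "proud",
            "alert", "inspired", "determined", "attentive", "active"]
NEGATIVE = ["distressed", "upset", "guilty", "scared", "hostile",
            "irritable", "ashamed", "nervous", "jittery", "afraid"]

def panas_scores(responses):
    """Return (positive affect, negative affect) as sums of the 1-5 item
    ratings; each subscale therefore ranges from 10 to 50."""
    pa = sum(responses[item] for item in POSITIVE)
    na = sum(responses[item] for item in NEGATIVE)
    return pa, na

# Hypothetical respondent: moderately positive, minimally negative.
responses = {item: 3 for item in POSITIVE}
responses.update({item: 1 for item in NEGATIVE})
print(panas_scores(responses))  # (30, 10)
```

In a pre/post-task design such as Study I, the quantity of interest is the change in each subscale across the task.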
23. 7-item focused attention subscale (part of the 31-item user engagement scale)
5-point scale (strongly disagree to strongly agree)
1. I lost myself in this news tasks experience
2. I was so involved in my news tasks that I lost track of time
3. I blocked things out around me when I was completing the news tasks
4. When I was performing these news tasks, I lost track of the world around me
5. The time I spent performing these news tasks just slipped away
6. I was absorbed in my news tasks
7. During the news tasks experience I let myself go
(O'Brien & Toms, 2010)
24. Saliency and positive affect
• When headlines are visually non-salient: users are slow at finding them, report more distraction due to web page features, and show a drop in affect
• When headlines are visually catchy or salient: users find them faster, report that it is easy to focus, and maintain positive affect
• Saliency is helpful in task performance, in focusing/avoiding distraction, and in maintaining positive affect
25. Saliency and focused attention
• Adapted the focused attention subscale from the online shopping domain to the entertainment news domain
• Users reported it was "easier to focus in the salient condition", BUT there was no significant improvement in the focused attention subscale nor any difference in perceived time spent on tasks
• User interest in web page content is a good predictor of focused attention, which in turn is a good predictor of positive affect
26. Self-reporting, crowdsourcing, saliency and user engagement
• The interaction of saliency, focused attention, and affect, together with user interest, is complex.
• Using crowdsourcing worked!
• What next?
  – include web page content as a quality of user engagement in the focused attention scale
  – more "realistic" user (interactive) reading experience
  – other measurements: mouse-tracking, eye-tracking, facial expression analysis, etc.
(McCay-Peet, Lalmas & Navalpakkam, 2012)
27. STUDY II
• Domain: news and user generated content (comments)
• Study: interestingness and sentiment
• Measurement: focused attention, affect and gaze
+ Ioannis Arapakis, Barla Cambazoglu, Mari-Carmen Marcos, Joemon Jose
28. Gaze and self-reporting
• News + comments
• Sentiment, interest
• 57 users (lab-based)
• Reading tasks (114)
• Questionnaire (qualitative data)
• Eye-tracking recordings (quantitative data)
Three metrics: gaze, focused attention and positive affect
(Lin et al, 2007)
29. Interesting content promotes user engagement metrics
• All three metrics: focused attention, positive affect & gaze
• What is the right trade-off? news is news ☺
• Can we predict? provider, editor, writer, category, genre, visual aids, …, sentimentality, …
• Role of user-generated content (comments):
  – as a measure of engagement?
  – to promote engagement?
30. Lots of sentiment, but with negative connotations!
• Positive affect (and interest, enjoyment and wanting to know more) correlates
  – positively (↑) with sentimentality (lots of emotions)
  – negatively (↓) with positive polarity (happy news)
SentiStrength (from -5 to 5 per word):
• sentimentality: sum of absolute values (amount of sentiment)
• polarity: sum of values (direction of the sentiment: positive vs negative)
(Thelwall, Buckley & Paltoglou, 2012)
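A minimal sketch of the two features: with per-word scores in the SentiStrength-style -5..5 range, sentimentality is the sum of absolute values and polarity the plain sum. The lexicon below is made up for illustration, not SentiStrength's actual word list:

```python
# Illustrative word-level sentiment scores in the -5..5 range (assumed values).
LEXICON = {"love": 3, "great": 2, "hate": -4, "terrible": -3, "war": -2}

def sentiment_features(text):
    """Return (sentimentality, polarity) for a text under the toy lexicon."""
    scores = [LEXICON.get(w, 0) for w in text.lower().split()]
    sentimentality = sum(abs(s) for s in scores)  # amount of sentiment
    polarity = sum(scores)                        # direction of sentiment
    return sentimentality, polarity

# A highly emotional but mostly negative headline:
# high sentimentality, negative polarity.
print(sentiment_features("fans love star but hate terrible sequel"))  # (10, -4)
```

This makes the slide's distinction concrete: a text can carry a lot of sentiment (high sentimentality) while its polarity is negative.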
31. Effect of comments on user engagement
• 6 rankings of comments:
  – most replied, most popular, newest
  – sentimentality high, sentimentality low
  – polarity plus, polarity minus
• Longer gaze on:
  – newest and most popular for interesting news
  – most replied and high sentimentality for non-interesting news
• Can we leverage this to prolong user attention?
32. Gaze, sentimentality, interest
• Interesting and "attractive" content!
• Sentiment as a proxy of focused attention, positive affect and gaze?
• Next:
  – larger-scale study
  – other domains (beyond news!)
  – role of social signals (e.g. Facebook, Twitter)
  – lots more data: mouse tracking, EEG, facial expression
(Arapakis et al., 2013)
33. STUDY III
• Domain: news and social media (Wikipedia)
• Study: interestingness, aesthetics, task
• Measurement: focused attention, affect and mouse movement
+ David Warnock
34. Mouse tracking and self-reporting
• 324 users from Amazon Mechanical Turk (between-subject design)
• Two domains (BBC News and Wikipedia)
• Two tasks (reading and search)
• "Normal vs Ugly" interface
• Questionnaires (qualitative data): focused attention, positive affect, novelty, interest, usability, aesthetics + demographics, handedness & hardware
• Mouse tracking (quantitative data): movement speed, movement rate, click rate, pause length, percentage of time still
36. Mouse tracking can tell about
• Age
• Hardware: mouse vs. trackpad
• Task:
  – Searching: "There are many different types of phobia. What is Gephyrophobia a fear of?"
  – Reading: (Wikipedia) Archimedes, Section 1: Biography
37. Mouse tracking could not tell much about
• focused attention and positive affect
• user interests in the task/topic
• BUT BUT BUT BUT
• the "ugly" variant did not result in lower aesthetics scores, although BBC > Wikipedia
• BUT – the comments left …
  – Wikipedia: "The website was simply awful. Ads flashing everywhere, poor text colors on a dark blue background."; "The webpage was entirely blue. I don't know if it was supposed to be like that, but it definitely detracted from the browsing experience."
  – BBC News: "The website's layout and color scheme were a bitch to navigate and read."; "Comic sans is a horrible font."
38. Mouse tracking and user engagement
• Task and hardware
• Do we have a Hawthorne Effect???
• "Usability" vs engagement
• "Even uglier" interface?
• Within- vs between-subject design?
• What next?
  – sequence of movements
  – automatic clustering
(Warnock & Lalmas, 2013)
39. STUDY IV
• Domain: news
• Study: automatic linking
• Measurement: interestingness
+ Ioannis Arapakis, Hakan Ceylan, Pinar Donmez
41. LEPA: Linker for Events to Past Articles
LEPA is a fully automated approach to constructing hyperlinks in news articles using "simple" text processing and understanding techniques.
Indexer
• Processes articles over a time period by extracting features from each article and storing them to facilitate faster retrieval
Linker
• Identifies sentences that contain newsworthy events
• For each such event, it retrieves from the index all the matching articles and links the top-ranked one with the event
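The Indexer/Linker pipeline can be sketched as follows. This is a toy stand-in, not LEPA itself: event detection is reduced to a keyword cue list and ranking to raw term overlap, both assumptions made purely for illustration:

```python
from collections import Counter

# Toy archive of past articles (a real Indexer would extract richer features).
archive = {
    "a1": "earthquake strikes chile coastal towns evacuated",
    "a2": "stock markets rally after bailout announcement",
    "a3": "flu vaccine shortage reported in clinics",
}
# Indexer: precompute per-article term frequencies for fast lookup.
index = {doc_id: Counter(text.split()) for doc_id, text in archive.items()}

EVENT_CUES = {"earthquake", "bailout", "election"}  # stand-in event detector

def link(article):
    """Linker sketch: for each sentence containing a newsworthy cue, link the
    best-matching past article by term overlap (stand-in for LEPA's ranking)."""
    links = {}
    for sentence in article.split("."):
        words = set(sentence.split())
        if not words & EVENT_CUES:
            continue  # only sentences with newsworthy events get links
        scores = {d: sum((Counter(sentence.split()) & tf).values())
                  for d, tf in index.items()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            links[sentence.strip()] = best
    return links

print(link("a new earthquake hit chile today. weather was mild elsewhere."))
```

Only the first sentence contains an event cue, so only it receives a link, mirroring the Linker's sentence-level filtering.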
43. Pilot study
Setup: professional editors judged a collection of system-embedded links (164 article-link combinations) on a 5-point Likert scale: (i) bad, (ii) fair, (iii) good, (iv) excellent, and (v) not judged.
• Rating results: Bad 35.15%; Fair 33.93%; Good 20%; Excellent 9.09%; Not judged 1.81%
• With 63.03% of the links rated fair or better: initial evidence that LEPA is not too far from the optimum achieved by human editors
44. Assessing the links: are they related?
• 664 participants recruited through Amazon Mechanical Turk; between-group design (two groups)
• Precision = fraction of links (total = 164) that received, in terms of relatedness, a score equal to, or greater than, 3 on a 5-point Likert scale

                           System-Embedded Links    Manually-Curated Links
                           A      B      All        A      B      All
Related to the main theme  49%    42%    45%        54%    51%    53%
Related to subtopic        21%    24%    22%        31%    34%    33%
Tangentially related       13%    15%    14%        9%     12%    10%
Unrelated                  15%    16%    16%        5%     1%     3%
Other                      2%     2%     2%         1%     2%     1%
(A/B = participant groups A and B)
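The precision definition above amounts to a one-liner over the per-link judgments; the scores below are hypothetical, not the study's data:

```python
# Hypothetical relatedness judgments, one per embedded link, on the 1-5 scale
# (the study judged 164 links; we use 7 here for illustration).
scores = [5, 4, 3, 2, 1, 3, 4]

# Precision = fraction of links whose relatedness score is >= 3.
precision = sum(s >= 3 for s in scores) / len(scores)
print(round(precision, 3))  # 5 of 7 links score >= 3
```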
45. Assessing the reading experience
• 120 participants recruited through Amazon Mechanical Turk; between-groups design (three groups)
• Editors + two opposite "extremes" of LEPA:
  – High recall: best at embedding newsworthy links & articles that provide interesting insights
  – High precision: best in terms of embedding the right number of links
[Diagram: inductive, thematic coding of open-ended questions; categories included good topical coverage, informativeness, broader perspective, interesting insights, link presentation, content volume, positive news reading experience]
46. Automatic linking and the news reading experience
• Even under realistic and uncontrolled conditions, the performance of LEPA is comparable to that of editors, and in some cases better
• High precision vs. high recall: a high precision threshold leads to a better news reading experience: less is more
"They were too many, being mostly quite long, in some cases more than half the length of the main article, and sometimes they repeated the same identical information"
47. STUDY V
• Domain: social media (Yahoo! Answers and Wikipedia)
• Study: serendipity
• Measurement: relevance, unexpectedness, interestingness
+ Ilaria Bordino, Yelena Mejova
48. Entity-driven Exploratory Search
Linguistically Motivated Semantic Aggregation Engines: "transition to a truly semantic aggregation paradigm where machines understand a user's intent, discover and organize facts, identify opinions, experiences and trends"
Entity Search: we build an entity-driven serendipitous search system based on entity networks extracted from Wikipedia and Yahoo! Answers
Serendipity: finding something good or useful while not specifically looking for it; serendipitous search systems provide relevant and interesting results
49. Yahoo! Answers vs Wikipedia
Yahoo! Answers – community-driven question & answer portal
• 67,336,144 questions & 261,770,047 answers (January 1, 2010 – December 31, 2011)
• English-language
• minimally curated: opinions, gossip, personal info; variety of points of view
Wikipedia – community-driven encyclopedia
• 3,795,865 articles (English Wikipedia, as of end of December 2011)
• curated: high-quality knowledge; variety of niche topics
50. Entity & Relationship Extraction
• entity – any well-defined concept that has a Wikipedia page
• relationship – a topical relationship/similarity between a pair of entities, based on document co-occurrence
  – related to the number of documents in which the two entities occur

Dataset         | # Nodes    | # Edges      | Density  | # Isolated
Yahoo! Answers  | 896,799    | 112,595,138  | 0.00028  | 69,856
Wikipedia       | 1,754,069  | 237,058,218  | 0.00015  | 82,381

Dataset         | Avg Degree | Max Degree   | Size of Largest CC
Yahoo! Answers  | 251        | 231,921      | 826,402 (92.15%)
Wikipedia       | 270        | 346,070      | 1,671,241 (95.28%)
52. Retrieval
Retrieve the entities most related to a query entity using a random walk.

              | Wikipedia | Yahoo! Answers | Combined
Precision @ 5 | 0.668     | 0.724          | 0.744
MAP           | 0.716     | 0.762          | 0.782

Queries: Justin Bieber, Nicki Minaj, Katy Perry, Shakira, Eminem, Lady Gaga, Jose Mourinho, Selena Gomez, Kim Kardashian, Miley Cyrus, Robert Pattinson, Adele (singer), Steve Jobs, Osama bin Laden, Ron Paul, Twitter, Facebook, Netflix, iPad, iPhone, Touchpad, Kindle, Olympic Games, Cricket, FIFA, Tennis, Mount Everest, Eiffel Tower, Oxford Street, Nürburgring, Haiti, Chile, Libya, Egypt, Middle East, Earthquake, Oil spill, Tsunami, Subprime mortgage crisis, Bailout, Terrorism, Asperger syndrome, McDonald's, Vitamin D, Appendicitis, Cholera, Influenza, Pertussis, Vaccine, Childbirth

Example top-5 results for the query "Steve Jobs":
• Yahoo! Answers: Jon Rubinstein, Timothy Cook, Kane Kramer, Steve Wozniak, Jerry York
• Wikipedia: System 7, PowerPC G4, SuperDrive, Power Macintosh, Power Computing Corp.

Judging: 3 labels per query-result pair; gold standard quality control
• Annotator agreement (overlap): 0.85
• Average overlap in top 5 results: <1
53. Serendipity
Serendipity: "making fortunate discoveries by accident"
Serendipity = unexpectedness + relevance
"Expected" results: baselines from web search
Metrics:
• |relevant & unexpected| / |unexpected| – number of serendipitous results out of all the unexpected results retrieved
• |relevant & unexpected| / |retrieved| – serendipitous results out of all retrieved

Baseline                                                      | WP          | YA
Top: 5 entities occurring most frequently in the top-5 search results from Bing and Google | 0.63 (0.58) | 0.69 (0.63)
Top–WP: same as above, but excluding the Wikipedia page from the results | 0.63 (0.58) | 0.70 (0.64)
Rel: top 5 entities in the related-query suggestions provided by Bing and Google | 0.64 (0.61) | 0.70 (0.65)
Rel + Top: union of Top and Rel                               | 0.61 (0.54) | 0.68 (0.57)
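The two serendipity metrics reduce to set operations between the retrieved, expected (baseline) and relevant result sets; the toy sets below are made up for illustration:

```python
# Toy result sets for one query; the entity names are illustrative.
retrieved = {"e1", "e2", "e3", "e4", "e5"}  # our system's top results
expected = {"e1", "e2"}                     # baseline web-search results
relevant = {"e1", "e3", "e4"}               # results judged relevant

unexpected = retrieved - expected           # not returned by the baseline
serendipitous = relevant & unexpected       # unexpected AND relevant

# Serendipity = unexpectedness + relevance, measured two ways:
frac_of_unexpected = len(serendipitous) / len(unexpected)  # out of unexpected
frac_of_retrieved = len(serendipitous) / len(retrieved)    # out of retrieved
print(frac_of_unexpected, frac_of_retrieved)
```

Here two of the three unexpected results are relevant, so the first metric is 2/3 and the second 2/5.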
54. Interestingness ≠ Relevance
Interesting > Relevant:
• Oil Spill → Penguins in Sweaters (WP)
• Lady Gaga → Britney Spears (WP)
• Netflix → Blu-ray Disc (YA)
Relevant > Interesting:
• Robert Pattinson → Water for Elephants (WP)
• Egypt → Cairo Conference (WP)
• Egypt → Ptolemaic Kingdom (WP & YA)
(Bordino, Mejova & Lalmas, 2013)
55. Assessing "interestingness"
Following (Arguello et al, 2011):
1. Labelers provide pairwise comparisons between results
2. Combine into a reference ranking
3. Compare the result ranking to the optimal ranking using Kendall's tau

Similarity (Kendall's tau-b) between result sets and reference ranking:
Question                                                                    | WP    | YA
Which result is more relevant to the query?                                 | 0.162 | 0.336
If someone is interested in the query, would they also be interested in the result? | 0.162 | 0.312
Even if you are not interested in the query, is the result interesting to you personally? | 0.139 | 0.324
Would you learn anything new about the query from the results?              | 0.167 | 0.307
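Step 3 above can be sketched with a plain Kendall tau implementation; with no ties, tau-b coincides with plain tau, so the simple form suffices here. The rankings in the example are hypothetical:

```python
from itertools import combinations

def kendall_tau(ranking, reference):
    """Plain Kendall tau between two rankings of the same items (best first):
    (concordant - discordant) / total pairs. With no ties this equals the
    tau-b statistic reported on the slide."""
    pos_a = {item: i for i, item in enumerate(ranking)}
    pos_b = {item: i for i, item in enumerate(reference)}
    concordant = discordant = 0
    for x, y in combinations(ranking, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1  # the pair is ordered the same way in both
        else:
            discordant += 1
    n = len(ranking)
    return (concordant - discordant) / (n * (n - 1) / 2)

# A system's result ranking vs. a reference ranking built from pairwise labels:
# one of six pairs disagrees, so tau = (5 - 1) / 6 = 2/3.
print(kendall_tau(["r1", "r2", "r3", "r4"], ["r2", "r1", "r3", "r4"]))
```

Values near 1 mean the result ranking closely matches the labelers' reference ranking; values near 0 mean little agreement, as in the WP rows of the table.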
57. What are the questions to ask?
• No one measurement is perfect or complete.
• All studies (process or product) have different constraints.
• Need to ensure methods are applied consistently, with attention to reliability: what is a good signal?
• More emphasis should be placed on using mixed methods to improve the validity of the measures.
• Be careful of the WEIRD syndrome (Western, Educated, Industrialized, Rich, and Democratic).
58. Acknowledgements
• Collaborators: Ioannis Arapakis, Ilaria Bordino, Barla Cambazoglu, Hakan Ceylan, Pinar Donmez, Lori McCay-Peet, Yelena Mejova, Vidhya Navalpakkam, David Warnock, and others at Yahoo! Labs.
• This talk uses some material from the tutorial "Measuring User Engagement" given at WWW 2013, Rio de Janeiro (with Heather O'Brien and Elad Yom-Tov).
Blog: labtomarket.wordpress.com