There is a rapid growth in the use of voice-controlled intelligent
personal assistants on mobile devices, such as Microsoft’s Cortana,
Google Now, and Apple’s Siri.
They significantly change the way users interact with search systems,
not only because of the use of voice control and touch gestures,
but also due to the dialogue-style nature of the interactions and their
ability to preserve context across different queries. Predicting success
and failure of such search dialogues is a new problem, and
an important one for evaluating and further improving intelligent
assistants. While clicks in web search have been extensively used
to infer user satisfaction, their significance in search dialogues is
lower due to the partial replacement of clicks with voice control,
direct and voice answers, and touch gestures.
In this paper, we propose an automatic method to predict user
satisfaction with intelligent assistants that exploits all the interaction
signals, including voice commands and physical touch gestures
on the device.
First, we conduct an extensive user study to measure user satisfaction
with intelligent assistants, and simultaneously record all
user interactions. Second, we show that the dialogue style of interaction
makes it necessary to evaluate the user experience at the
overall task level as opposed to the query level. Third, we train a
model to predict user satisfaction, and find that interaction signals
that capture users' reading patterns have a high impact: when including
all available interaction signals, we are able to improve the
prediction accuracy of user satisfaction from 71% to 81% over a
baseline that utilizes only click and query features.
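As a rough illustration of the approach, here is a minimal sketch (assuming scikit-learn; the feature names and data are illustrative placeholders, not the paper's exact feature set or model) of a task-level satisfaction classifier built on click, touch, and voice interaction signals:

```python
# Minimal sketch: a task-level satisfaction classifier over interaction signals.
# Feature names and data below are illustrative placeholders, not the paper's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# One row per search dialogue (task), aggregated over its queries.
feature_names = [
    "n_queries", "n_clicks", "n_sat_clicks", "n_dsat_clicks",    # click/query (baseline)
    "n_swipes", "total_swipe_distance", "top_viewport_dwell_s",  # touch
    "n_voice_requests", "phonetic_repetition_ratio",             # voice
]

rng = np.random.default_rng(0)
X = rng.random((200, len(feature_names)))   # placeholder interaction features
y = rng.integers(0, 2, size=200)            # 1 = satisfied, 0 = dissatisfied (questionnaire labels)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```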
Using Contextual Information to Understand Searching and Browsing Behavior (Julia Kiseleva)
Julia Kiseleva's slides for her PhD defense on June 13, 2016.
The thesis is available at the following link: https://www.researchgate.net/publication/303285745_Using_Contextual_Information_to_Understand_Searching_and_Browsing_Behavior
Detecting Good Abandonment in Mobile Search (Julia Kiseleva)
Web search queries for which there are no clicks are referred to as abandoned queries and are usually considered as leading to user dissatisfaction. However, there are many cases where a user may not click on any search result page (SERP) but still be satisfied. This scenario is referred to as good abandonment and presents a challenge for most approaches measuring search satisfaction, which are usually based on clicks and dwell time. The problem is exacerbated further on mobile devices, where search providers try to increase the likelihood of users being satisfied directly by the SERP. This paper proposes a solution to this problem using gesture interactions, such as reading times and touch actions, as signals for differentiating between good and bad abandonment. These signals go beyond clicks and characterize user behavior in cases where clicks are not needed to achieve satisfaction. We study different good abandonment scenarios and investigate the different elements on a SERP that may lead to good abandonment. We also present an analysis of the correlation between user gesture features and satisfaction. Finally, we use this analysis to build models to automatically identify good abandonment in mobile search, achieving an accuracy of 75%, which is significantly better than considering query and session signals alone. Our findings have implications for the study and application of user satisfaction in search systems.
Understanding User Satisfaction with Intelligent Assistants (Julia Kiseleva)
Voice-controlled intelligent personal assistants, such as Cortana,
Google Now, Siri and Alexa, are increasingly becoming a part of
users’ daily lives, especially on mobile devices. They introduce
a significant change in information access, not only by introducing
voice control and touch gestures but also by enabling dialogues
where the context is preserved. This raises the need for evaluation
of their effectiveness in assisting users with their tasks. However,
in order to understand which types of user interactions reflect different
degrees of user satisfaction, we need explicit judgements. In this
paper, we describe a user study that was designed to measure user
satisfaction over a range of typical scenarios of use: controlling a
device, web search, and structured search dialogue. Using this data,
we study how user satisfaction varied with different usage scenarios
and what signals can be used for modeling satisfaction in the
different scenarios. We find that the notion of satisfaction varies
across different scenarios, and show that, in some scenarios (e.g.
making a phone call), task completion is very important while for
others (e.g. planning a night out), the amount of effort spent is key.
We also study how the nature and complexity of the task at hand
affects user satisfaction, and find that preserving the conversation
context is essential and that overall task-level satisfaction cannot
be reduced to query-level satisfaction alone. Finally, we shed light
on the relative effectiveness and usefulness of voice-controlled intelligent
agents, explaining their increasing popularity and uptake
relative to the traditional query-response interaction.
A Journey into Evaluation: from Retrieval Effectiveness to User Engagement (Mounia Lalmas-Roelleke)
Slides of my presentation at SPIRE 2015 at King's College London.
The talk builds on my work on user engagement, and proposes to move beyond clicks and relevance in information retrieval evaluation, towards user engagement. The talk presents some of my work to show how this could be done. There are two focuses: moving from intra- to inter-session evaluation, and moving from small- to large-scale evaluation. These focuses acknowledge that (1) happy users come back, and (2) we need to properly identify who the happy users are. I hope that this talk will bring new perspectives to those building search systems and wanting to evaluate them, beyond their retrieval effectiveness.
Evaluating the search experience: from Retrieval Effectiveness to User Engage... (Mounia Lalmas-Roelleke)
These are my slides for my presentation at CLEF 2015, which is being held in Toulouse. I discuss evaluation in the context of search, and how to move towards looking at the long-term effect of the search experience. I do this through the concept of absence time. I present examples for search but also in the context of mobile advertising. My aim is to frame evaluation within user engagement.
Optimizing Mobile UX Design Webinar Presentation Slides (UserZoom)
Optimizing Mobile UX Design: Webinar on Mobile User Experience Research Methods & Tools
Most businesses are investing in mobile apps and mobile commerce. Recently, more emphasis has been placed on the interactive experiences users have on mobile devices.
To explain how to optimize the user experience on mobile interfaces, UserZoom will be joined by special guest User Centric in a complimentary webinar. The webinar will focus on how user experience research methods and tools can add extremely valuable insights into the design process and help brands optimize their mobile site or application’s performance. Attendees will hear presentations from the following experts:
Gavin Lew, Managing Director, User Centric
Gavin’s 20 years of experience in corporate and academic environments have given him a strong foundation in user-centered design and evaluation. In addition to managing User Centric, he holds particular expertise in mobile technology, among other interests. He is a frequent presenter at national conferences, adjunct faculty member at DePaul University and Northwestern University’s Feinberg School of Medicine, and the inventor of several patents.
Kim Oslob, UserZoom Director of Client Services
Kim has extensive experience with both qualitative and quantitative UX Research through her work at Claris (now Filemaker), Macromedia (now Adobe) and VistoCorp (now Good). She has managed projects with companies in the mobile space such as Vodafone, Nokia, Sprint, and Roger’s Wireless to improve the user experience of over 10 different mobile operating systems.
An Engaging Click ... or how can user engagement measurement inform web searc... (Mounia Lalmas-Roelleke)
A good search engine is one where users come very regularly, type their queries, get their results, and leave quickly. With user engagement metrics from web analytics, this translates to a low dwell time, often a low CTR, but a very high return rate. But user engagement is not just about this. User engagement is a complex phenomenon that requires a number of approaches for its measurement: we can ask users about their experience through questionnaires, we can observe where they look or move the mouse, and we can calculate various web analytics metrics. The aim of this talk is to discuss how current work on user engagement, not necessarily specific to web search, can provide insights that put search into a broader perspective.
This presentation is part of Search Solutions 2013, 27 November 2013, at the BCS HQ. A first version of this talk was given at the SIGIR 2013 Industry Day by Ricardo Baeza-Yates.
Ten tips for surveys: on questions, process, and testing your survey.
Books mentioned are listed here: http://rosenfeldmedia.com/uxzeitgeist/lists/cjforms/10-tips-for-a-better-survey-stc2011
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
UX playbook: Real world user exercises (InVision App)
Users are full of surprises. And they have a way of finding confusing spots in a product even if your team meticulously planned and designed it. In this session, our very own Clark Wimberly walked us through a number of fun and challenging exercises aimed at keeping users happy.
Day 2 slides from a two-day workshop on UX Foundations by Meg Kurdziolek and Karen Tang. Day 2 covered research methods that can be used throughout the design process to evaluate and validate design.
10 Steps to Mapping Your Customer Journey (Qualtrics)
Customer journey mapping is an important piece of understanding how you can provide the best customer experience possible. Learn how in 10 essential steps.
The first part of a workshop on user experience surveys. Topics: (1) how to improve the questions in surveys and (2) how to assess UX using a survey.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Six crucial survey concepts that UX professionals need to know (Caroline Jarrett)
Surveys can be a really valuable source of great data. This workshop explores six crucial survey concepts:
1. Ask questions that people can answer
2. Satisfaction is a slippery topic
3. Assess the total survey error
4. Understand who responds
5. Your survey goals drive your analysis
6. Test everything.
How to ask better questions and how to assess UX using surveys.
This workshop at UXLX 2014 in Lisbon was a deep dive into two important topics in survey design for user research.
We used the four-step model of how people answer questions to work on better questions, then we focused on two special uses of questionnaires in user research: the post-test assessment of satisfaction, and then how to gather information from users for redesign.
Thanks to all the attendees for making this workshop a lot of fun.
Caroline Jarrett @cjforms
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Behavioral Dynamics from the SERP’s Perspective: What are Failed SERPs and Ho... (Julia Kiseleva)
Web search is always in a state of flux: queries, their intent, and the
most relevant content are changing over time, in predictable and unpredictable
ways. Modern search technology has made great strides
in keeping pace with these changes, but there remain cases of
failure where the organic search results on the search engine result
page (SERP) are outdated, and no relevant result is displayed.
Failing SERPs due to temporal drift are one of the greatest frustrations
of web searchers, leading to search abandonment or even
a search engine switch. Detecting failed SERPs in a timely manner and providing
access to the desired out-of-SERP results has huge potential to improve
user satisfaction. Our main findings are threefold: First, we
refine the conceptual model of behavioral dynamics on the web by
including the SERP and defining (un)successful SERPs in terms of
observable behavior. Second, we analyse typical patterns of temporal
change and propose models to predict query drift beyond the
current SERP, and ways to adapt the SERP to include the desired
results. Third, we conduct extensive experiments on real world
search engine traffic demonstrating the viability of our approach.
Our analysis of behavioral dynamics at the SERP level gives new
insight into one of the primary causes of search failure due to temporal
query intent drifts. Our overall conclusion is that the most
detrimental cases in terms of (lack of) user satisfaction lead to the
largest changes in information seeking behavior, and hence to observable
changes in behavior we can exploit to detect failure, and
moreover not only detect them but also resolve them.
Where to Go on Your Next Trip? Optimizing Travel Destinations Based on User P... (Julia Kiseleva)
Recommendation based on user preferences is a common
task for e-commerce websites. New recommendation algorithms
are often evaluated by offline comparison to baseline
algorithms such as recommending random or the most
popular items. Here, we investigate how these algorithms
themselves perform and compare to the operational production
system in large scale online experiments in a real-world
application. Specifically, we focus on recommending travel
destinations at Booking.com, a major online travel site, to
users searching for their preferred vacation activities. To
build ranking models we use multi-criteria rating data provided
by previous users after their stay at a destination. We
implement three methods and compare them to the current
baseline in Booking.com: random, most popular, and Naive
Bayes. Our general conclusion is that, in an online A/B test
with live users, our Naive-Bayes based ranker increased user
engagement significantly over the current online system.
Predicting Current User Intent with Contextual Markov Models (Julia Kiseleva)
Abstract: In many web information systems, like e-shops and information portals, predictive modeling is used to understand user intentions based on their browsing behavior. User behavior is inherently sensitive to various contexts. Identifying such relevant contexts can help to improve the prediction performance. In this work, we propose a formal approach in which the context discovery process is defined as an optimization problem. For simplicity we assume a concrete yet generic scenario in which context is considered to be a secondary label of an instance that is either known from the available contextual attribute (e.g. user location) or can be induced from the training data (e.g. novice vs. expert user). In an ideal case, the objective function of the optimization problem has an analytical form enabling us to design a context discovery algorithm solving the optimization problem directly. An example with Markov models, a typical approach for modeling user browsing behavior, shows that the derived analytical form of the optimization problem provides us with useful mathematical insights into the problem. Experiments with a real-world use case show that we can discover useful contexts allowing us to significantly improve the prediction of user intentions with contextual Markov models.
The talk at Twente University on 28 July 2014 Julia Kiseleva
Predictive Web Analytics is aimed at understanding behavioural patterns of users of various web-based applications: e-commerce, ubiquitous and mobile computing, and computational advertising. Within these applications, business decisions often rely on two types of predictions: overall or particular user-segment demand predictions and individualised recommendations for visitors. Visitor behaviour is inherently sensitive to the context, which can be defined as a collection of external factors. Context-awareness allows integrating external explanatory information into the learning process and adapting user behaviour accordingly. The importance of context-awareness has been recognised by researchers and practitioners in many disciplines, including recommendation systems, information retrieval, personalization, data mining, and marketing. We focus on studying ways of context discovery and its integration into predictive analytics.
Predicting User Satisfaction with Intelligent Assistants
1. Predicting User Satisfaction with
Intelligent Assistants
Julia Kiseleva, Kyle Williams, Ahmed Hassan Awadallah,
Aidan C. Crook, Imed Zitouni, Tasos Anastasakos
Eindhoven University of Technology
Pennsylvania State University
Microsoft
SIGIR’16, Pisa, Italy
2. From Queries to Dialogues
Q1: how is the weather in Chicago
Q2: how is it this weekend
Q3: find me hotels
Q4: which one of these is the cheapest
Q5: which one of these has at least 4 stars
Q6: find me directions from the Chicago airport to
number one
User’s dialogue with Cortana. Task is “Finding a hotel in Chicago”.
3. From Queries to Dialogues
Q1: find me a pharmacy nearby
Q2: which of these is highly rated
Q3: show more information about number 2
Q4: how long will it take me to get there
Q5: Thanks
User’s dialogue with Cortana. Task is “Finding a pharmacy”.
4. From Queries to Dialogues
User: “show restaurants near me”
Cortana: “Here are ten restaurants near you”
User: “show the best ones”
Cortana: “Here are ten restaurants near you that have good reviews”
User: “show directions to the second one”
Cortana: “Getting you directions to the Mayuri Indian Cuisine”
5. Main Research Question
How can we automatically predict user
satisfaction with search dialogues on
intelligent assistants using
click, touch, and voice interactions?
6. Single Task Search Dialogue
User: “Do I need to have a jacket tomorrow?”
Cortana: “You could probably go without one. The forecast shows …”
7. Multi-Task Search Dialogues
User: “show restaurants near me”
Cortana: “Here are ten restaurants near you”
User: “show the best ones”
Cortana: “Here are ten restaurants near you that have good reviews”
User: “show directions to the second one”
Cortana: “Getting you directions to the Mayuri Indian Cuisine”
8. How to define user satisfaction with search dialogues?
9. User: “show restaurants near me”
Cortana: “Here are ten restaurants near you”
User: “show the best ones”
Cortana: “Here are ten restaurants near you that have good reviews”
User: “show directions to the second one”
Cortana: “Getting you directions to the Mayuri Indian Cuisine”
No Clicks ???
10. User: “show restaurants near me”
Cortana: “Here are ten restaurants near you”
User: “show the best ones”
Cortana: “Here are ten restaurants near you that have good reviews”
User: “show directions to the second one”
Cortana: “Getting you directions to the Mayuri Indian Cuisine”
SAT? SAT? SAT? Overall SAT?
11. User Frustration
Q1: what's the weather like in San Francisco
Q2: what's the weather like in Mountain View
Q3: can you find me a hotel close to Mountain
View
Q4: can you show me the cheapest ones
Q5: show me the third one
Q6: show me the directions from SFO to this
hotel
Q6: show me the directions from SFO to this
hotel
Q7: go back to first hotel (misrecognition)
Q8: show me hotels in Mountain View
Q9: show me cheap hotels in Mountain View
Q10: show me more about the third one
Dialog with Intelligent Assistant. Task is “Planning a weekend”.
Restart search. A user is satisfied.
13. Tracking User Interaction:
Click Signals
• Number of queries in a dialogue
• Number of clicks in a dialogue
• Number of SAT clicks (> 30 sec. dwell time) in a dialogue
• Number of DSAT clicks (< 15 sec. dwell time) in a dialogue
• Time (seconds) until the first click in a dialogue
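A sketch of how these click signals might be derived from a logged dialogue (the Query record and its fields are illustrative assumptions; the 30-second and 15-second dwell thresholds follow the slide):

```python
# Sketch: deriving the click signals above from one logged dialogue.
# The Query record format is an illustrative assumption.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Query:
    click_dwell_times: List[float] = field(default_factory=list)  # dwell time (s) per click
    first_click_time: Optional[float] = None                      # seconds after the query was issued

def click_features(dialogue: List[Query]) -> dict:
    dwells = [d for q in dialogue for d in q.click_dwell_times]
    first_clicks = [q.first_click_time for q in dialogue if q.first_click_time is not None]
    return {
        "n_queries": len(dialogue),
        "n_clicks": len(dwells),
        "n_sat_clicks": sum(d > 30 for d in dwells),   # SAT click: dwell > 30 s
        "n_dsat_clicks": sum(d < 15 for d in dwells),  # DSAT click: dwell < 15 s
        "time_to_first_click": min(first_clicks) if first_clicks else None,
    }
```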
16. Tracking User Interaction
(Figure: viewing time attributed to a SERP element is weighted by the fraction of the viewport it covers: 3 seconds at 33% of the viewport, 6 seconds at 66%, and 2 seconds at 20% contribute 1s + 4s + 0.4s = 5.4s of attributed viewing time.)
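A sketch of the attribution rule illustrated above: the time credited to a SERP element is its on-screen time weighted by the fraction of the viewport it occupies (the interval representation is an illustrative assumption):

```python
# Sketch: attribute viewing time to a SERP element, weighting each interval by the
# fraction of the viewport the element occupies. Interval format is an assumption.
from typing import List, Tuple

def attributed_viewing_time(intervals: List[Tuple[float, float]]) -> float:
    """intervals: (duration_seconds, fraction_of_viewport_occupied) per scroll position."""
    return sum(duration * fraction for duration, fraction in intervals)

# The slide's example: 3 s at 33%, 6 s at 66%, 2 s at 20% -> roughly 1 + 4 + 0.4 = 5.4 s.
print(attributed_viewing_time([(3, 0.33), (6, 0.66), (2, 0.20)]))
```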
17. • Number of Swipes
• Number of up-swipes
• Number of down-swipes
• Total distance swiped (pixels)
• Number of swipes normalized by
time
• Total distance divided by num. of
swipes
• Total swiped distance divided by
time
• Number of swipe direction
changes
• SERP answer duration (seconds)
which is shown on screen (even
partially)
• Fraction of visible pixels belonging to
SERP answer
• Attributed time (seconds) to viewing a
particular element (answer) on SERP
• Attributed time (seconds) per unit
height (pixels) associated with a
particular element on SERP
• Attributed time (milliseconds) per unit
area (square pixels) associated with a
particular element on SERP
Tracking User Interaction:
Touch Signals
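A sketch of how some of the swipe features above might be computed from raw touch events (the swipe representation and the up/down sign convention are illustrative assumptions):

```python
# Sketch: swipe features from raw touch gestures. Event format is an assumption:
# each swipe is (timestamp_s, start_y_px, end_y_px); a positive delta is a downward swipe.
from typing import List, Tuple

def swipe_features(swipes: List[Tuple[float, float, float]], dialogue_duration_s: float) -> dict:
    deltas = [end - start for _, start, end in swipes]
    directions = [1 if d > 0 else -1 for d in deltas if d != 0]
    total_distance = sum(abs(d) for d in deltas)
    return {
        "n_swipes": len(swipes),
        "n_down_swipes": sum(d > 0 for d in deltas),
        "n_up_swipes": sum(d < 0 for d in deltas),
        "total_distance_px": total_distance,
        "swipes_per_second": len(swipes) / dialogue_duration_s if dialogue_duration_s else 0.0,
        "distance_per_swipe_px": total_distance / len(swipes) if swipes else 0.0,
        "distance_per_second_px": total_distance / dialogue_duration_s if dialogue_duration_s else 0.0,
        "n_direction_changes": sum(a != b for a, b in zip(directions, directions[1:])),
    }
```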
19. User Study Participants
• 60 participants
• Age: 25.53 +/- 5.42 years
• Gender: 75% male, 25% female
• Language: 55% English, 45% other
• Education: 82% Computer Science, 8% Electrical Engineering, 2% Mathematics, 8% other
20. You are planning a vacation. Pick a place. Check if the weather is good enough for the period you are planning the vacation. Find a hotel that suits you. Find the driving directions to this place.
21. You are planning a vacation. Pick a place. Check if the weather is good enough for the period you are planning the vacation. Find a hotel that suits you. Find the driving directions to this place.
22. Questionnaire
• Were you able to complete the task?
o Yes/No
• How satisfied are you with your experience in this task?
o If the task has sub-tasks, participants indicate their graded satisfaction for each sub-task, e.g.
o a. How satisfied are you with your experience in finding a hotel?
o b. How satisfied are you with your experience in finding directions?
• How well did Cortana recognize what you said?
o 5-point Likert scale
• Did you put in a lot of effort to complete the task?
o 5-point Likert scale
23. Questionnaire
• Were you able to complete the task?
o Yes/No
• How satisfied are you with your experience in this task?
o If the task has sub-tasks, participants indicate their graded satisfaction for each sub-task, e.g.
o a. How satisfied are you with your experience in finding a hotel?
o b. How satisfied are you with your experience in finding directions?
• How well did Cortana recognize what you said?
o 5-point Likert scale
• Did you put in a lot of effort to complete the task?
o 5-point Likert scale
8 Tasks: 1 simple, 4 with 2 subtasks, 3 with 3 subtasks. ~30 minutes.
24. Search Dialog Dataset
• The total number of queries is 2,040
• The number of unique queries is 1,969
• The average query length is 7.07
25. Search Dialog Dataset
• The total number of queries is 2,040
• The number of unique queries is 1,969
• The average query length is 7.07
• The simple task generated 130 queries
• Tasks with 2 context switches generated 685 queries
• Tasks with 3 context switches generated 1,355 queries
26. How can we predict user
satisfaction
with search dialogues using
interaction signals?
27. User’s dialogue about the ‘stomach ache’
Q1: what do you have medicine for the stomach ache
Q2: stomach ache medicine over the counter
General Web SERP
28. User’s dialogue about the ‘stomach ache’
Q1: what do you have medicine for the stomach ache
Q2: stomach ache medicine over the counter
Q3: show me the nearest pharmacy
Q4: more information on the second one
General Web SERP
Structured SERP
34. Quality of Interaction Model

Method                 Accuracy (%)       Average F1 (%)
Baseline               70.62              61.38
Interaction Model 1    78.78* (+11.55)    83.59* (+35.90)
Interaction Model 2    80.21* (+13.58)    83.31* (+35.44)
Interaction Model 3    80.81* (+14.43)    79.08* (+28.83)

* Statistically significant improvement (p < 0.05)
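A minimal sketch of how the accuracy and macro-averaged F1 in the table might be computed, along with one way to test significance between two classifiers on the same test set (McNemar's test); the labels and predictions are placeholders, and the paper's exact significance test is not specified here:

```python
# Sketch: accuracy, average (macro) F1, and a McNemar significance test between a
# baseline and an interaction model. All predictions below are random placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)          # questionnaire-based satisfaction labels
baseline_pred = rng.integers(0, 2, size=200)
model_pred = rng.integers(0, 2, size=200)

print("Accuracy:", accuracy_score(y_true, model_pred))
print("Average F1:", f1_score(y_true, model_pred, average="macro"))

# 2x2 table of per-example correctness for the two classifiers.
base_ok, model_ok = baseline_pred == y_true, model_pred == y_true
table = [[np.sum(base_ok & model_ok), np.sum(base_ok & ~model_ok)],
         [np.sum(~base_ok & model_ok), np.sum(~base_ok & ~model_ok)]]
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```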
35. Which interaction signals have
the highest impact on predicting
user satisfaction with search
dialogues?
36. Predicting User Satisfaction
• F1: Since the SERP for a query is ordered by a measure of relevance as determined by the system, additional exploration is unlikely to achieve user satisfaction; it is more likely an indication that the best-provided results (i.e. the top of the SERP) are insufficient to address the user intent
37. Predicting User Satisfaction
• F1: Since the SERP for a query is ordered by a measure of relevance as determined by the system, additional exploration is unlikely to achieve user satisfaction; it is more likely an indication that the best-provided results (i.e. the top of the SERP) are insufficient to address the user intent
• F2: In the converse case of F1, when users find content that satisfies their intent, their likelihood of scrolling is reduced, and they dwell for an extended period on the top viewport
38. Predicting User Satisfaction
• F1: Since the SERP for a query is ordered by a measure of relevance as determined by the system, additional exploration is unlikely to achieve user satisfaction; it is more likely an indication that the best-provided results (i.e. the top of the SERP) are insufficient to address the user intent
• F2: In the converse case of F1, when users find content that satisfies their intent, their likelihood of scrolling is reduced, and they dwell for an extended period on the top viewport
• F3: When users are involved in a complex task, they are dissatisfied when redirected to a general web SERP. Unlike F2, the absence of scrolling on this landing page is an indication of dissatisfaction
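A sketch of two viewport signals motivated by F1-F3: dwell time on the top viewport and how far the user scrolled past it (the scroll-event representation is an illustrative assumption):

```python
# Sketch: dwell on the top viewport and scroll depth, motivated by findings F1-F3.
# Event format (timestamp_s, vertical_scroll_offset_px) is an assumption.
from typing import List, Tuple

def top_viewport_signals(events: List[Tuple[float, float]], viewport_height_px: float) -> dict:
    if len(events) < 2:
        return {"top_viewport_dwell_s": 0.0, "scroll_depth_viewports": 0.0}
    top_dwell = 0.0
    for (t0, offset0), (t1, _) in zip(events, events[1:]):
        if offset0 < viewport_height_px:   # page still scrolled within the first viewport
            top_dwell += t1 - t0
    max_offset = max(offset for _, offset in events)
    return {
        "top_viewport_dwell_s": top_dwell,                          # long dwell here is the satisfied pattern in F2
        "scroll_depth_viewports": max_offset / viewport_height_px,  # extensive scrolling is the exploration described in F1
    }
```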
39. Conclusion
How can we define user satisfaction with search dialogues?
• User satisfaction with search dialogues is defined in a generalized form: it is best understood as an aggregation of satisfaction with all of the dialogue's tasks, not as satisfaction with each of the dialogue's queries separately
How can we predict user satisfaction with search dialogues using interaction signals?
• We showed that features derived from voice and, especially, touch interactions add a significant gain in accuracy over the baseline
Which interaction signals have the highest impact on predicting user satisfaction with search dialogues?
• Our analysis showed a strong negative correlation between user satisfaction and swipe actions
40. • User satisfaction with search dialogues is defined in a generalized form: it is best understood as an aggregation of satisfaction with all of the dialogue's tasks, not as satisfaction with each of the dialogue's queries separately
• We showed that features derived from voice and, especially, touch interactions add a significant gain in accuracy over the baseline
• Our analysis showed a strong negative correlation between user satisfaction and swipe actions
Thank you!
Questions?
Editor's Notes
We utilize acoustic features to characterize the voice interaction happening in search dialogues. More specifically, we use the phonetic similarity between consecutive requests to identify patterns of repetition. The Metaphone representation [39] is a way of indexing words by their pronunciation, allowing us to represent words by how they are pronounced as opposed to how they are written.
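A sketch of the phonetic-repetition idea described in this note, using the Metaphone encoding from the jellyfish library (using jellyfish and this particular similarity measure are assumptions; the note only specifies that a Metaphone representation [39] is used):

```python
# Sketch: phonetic similarity between consecutive voice requests via Metaphone codes.
# Using the jellyfish library is an assumption; the note only names Metaphone [39].
import jellyfish

def metaphone_codes(utterance: str) -> set:
    """Index an utterance by how its words are pronounced rather than how they are written."""
    return {jellyfish.metaphone(word) for word in utterance.split()}

def phonetic_repetition(prev_request: str, request: str) -> float:
    """Fraction of the current request's phonetic codes already present in the previous one."""
    prev, cur = metaphone_codes(prev_request), metaphone_codes(request)
    return len(prev & cur) / max(len(cur), 1)

# A repeated (possibly misrecognized) request yields a high repetition score.
print(phonetic_repetition("show me the directions from SFO to this hotel",
                          "show me the directions from SFO to this hotel"))
```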