This document discusses predicting user satisfaction with intelligent assistants. It defines user satisfaction with search dialogues as an aggregation of satisfaction across all tasks in a dialogue, rather than individual queries. Features from touch, voice, and acoustic interactions are shown to improve prediction accuracy over baselines. Analysis found that more swipe actions correlate with lower user satisfaction. The study collected a dataset of over 2,000 queries across tasks to model satisfaction. Predicting satisfaction can help assistants provide better experiences.
Predicting User Satisfaction with Intelligent Assistants - Julia Kiseleva

There is a rapid growth in the use of voice-controlled intelligent personal assistants on mobile devices, such as Microsoft’s Cortana, Google Now, and Apple’s Siri. They significantly change the way users interact with search systems, not only because of the use of voice control and touch gestures, but also due to the dialogue-style nature of the interactions and their ability to preserve context across different queries. Predicting success and failure of such search dialogues is a new problem, and an important one for evaluating and further improving intelligent assistants. While clicks in web search have been extensively used to infer user satisfaction, their significance in search dialogues is lower due to the partial replacement of clicks with voice control, direct and voice answers, and touch gestures.

In this paper, we propose an automatic method to predict user satisfaction with intelligent assistants that exploits all the interaction signals, including voice commands and physical touch gestures on the device. First, we conduct an extensive user study to measure user satisfaction with intelligent assistants, and simultaneously record all user interactions. Second, we show that the dialogue style of interaction makes it necessary to evaluate the user experience at the overall task level as opposed to the query level. Third, we train a model to predict user satisfaction, and find that interaction signals that capture the user's reading patterns have a high impact: when including all available interaction signals, we are able to improve the prediction accuracy of user satisfaction from 71% to 81% over a baseline that utilizes only click and query features.
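The task-level prediction described in the abstract can be sketched as a simple classifier over aggregated interaction signals. The sketch below uses logistic regression trained with stochastic gradient descent on toy data; the feature names (clicks, swipes, reading time) and all values are illustrative assumptions, not the paper's actual feature set or model:

```python
import math

# Toy per-task interaction summaries: (clicks, swipes, reading_time_min)
# with a binary satisfaction label. Values are made up for illustration.
TASKS = [
    ((2, 1, 0.50), 1), ((3, 0, 0.75), 1), ((1, 0, 0.40), 1),
    ((0, 5, 0.08), 0), ((1, 6, 0.13), 0), ((0, 4, 0.05), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(tasks, epochs=3000, lr=0.1):
    """Logistic regression fitted with stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in tasks:
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

W, B = train(TASKS)

def p_satisfied(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

# Many swipes and little reading time should score as likely unsatisfied.
print(p_satisfied((0, 5, 0.05)) < 0.5)  # True
print(p_satisfied((3, 0, 0.70)) > 0.5)  # True
```

On this toy data the learned weights end up negative for swipe count, matching the finding in the summary above that more swipes correlate with lower satisfaction.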
Using Contextual Information to Understand Searching and Browsing Behavior - Julia Kiseleva
Julia Kiseleva's slides for her PhD defense on June 13, 2016. The thesis is available at: https://www.researchgate.net/publication/303285745_Using_Contextual_Information_to_Understand_Searching_and_Browsing_Behavior
5th eMadrid Workshop on “Digital Education”. Roberto Centeno, Universidad Na... - eMadrid network
5th eMadrid Workshop on “Digital Education”. Roberto Centeno, Universidad Nacional de Educación a Distancia: Reputation mechanisms in MOOCs. 2015-06-30
Quantifying the Invisible Audience in Social Networks - Michael Bernstein
Presented at CHI 2013
When you share content in an online social network, who is listening? Users have scarce information about who actually sees their content, making their audience seem invisible and difficult to estimate. However, understanding this invisible audience can impact both science and design, since perceived audiences influence content production and self-presentation online. In this paper, we combine survey and large-scale log data to examine how well users’ perceptions of their audience match their actual audience on Facebook. We find that social media users consistently underestimate their audience size for their posts, guessing that their audience is just 27% of its true size. Qualitative coding of survey responses reveals folk theories that attempt to reverse-engineer audience size using feedback and friend count, though none of these approaches are particularly accurate. We analyze audience logs for 222,000 Facebook users’ posts over the course of one month and find that publicly visible signals — friend count, likes, and comments — vary widely and do not strongly indicate the audience of a single post. Despite the variation, users typically reach 61% of their friends each month. Together, our results begin to reveal the invisible undercurrents of audience attention and behavior in online social networks.
A Journey into Evaluation: from Retrieval Effectiveness to User Engagement - Mounia Lalmas-Roelleke
Slides of my presentation at SPIRE 2015 at King's College London.
The talk builds on my work on user engagement, and proposes to move beyond clicks and relevance in information retrieval evaluation, towards user engagement. The talk presents some of my work to show how this could be done. There are two focuses: moving from intra- to inter-session evaluation, and moving from small- to large-scale evaluation. These focuses acknowledge that (1) happy users come back, and (2) we need to properly identify who the happy users are. I hope that this talk will bring new perspectives to those building search systems and wanting to evaluate them, beyond their retrieval effectiveness.
Describing Patterns and Disruptions in Large Scale Mobile App Usage Data - Mounia Lalmas-Roelleke
The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interest from both academia and industry. In this paper, we study user app engagement patterns, and disruptions of those patterns, in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted by major political, social, and sports events.
Slides for paper presented at TempWeb 2017:
S. Van Canneyt, M. Bron, A. Haines and M. Lalmas. Describing Patterns and Disruptions in Large Scale Mobile App Usage Data, 7th Temporal Web Analytics Workshop (TempWeb), International World Wide Web Conference (WWW 2017), Industrial Track, Perth, Australia, 3-7 April, 2017.
From “Selena Gomez” to “Marlon Brando”: Understanding Explorative Entity Search - Mounia Lalmas-Roelleke
Slides of our paper. Work with Iris Miliaraki and Roi Blanco. Paper published at 24th International World Wide Web Conference (WWW 2015), Florence, Italy.
Abstract: Consider a user who submits a search query "Shakira" having a specific search goal in mind (such as her age) but at the same time willing to explore information about other entities related to her, such as comparable singers. In previous work, a system called Spark was developed to provide such a search experience. Given a query submitted to the Yahoo search engine, Spark provides related entity suggestions for the query, exploiting, among other sources, public knowledge bases from the Semantic Web. We refer to this search scenario as explorative entity search. The effectiveness and efficiency of the approach have been demonstrated in previous work. However, the way users interact with these related entity suggestions, and whether this interaction can be predicted, has not been studied. In this paper, we perform a large-scale analysis of how users interact with the entity results returned by Spark. We characterize the users, queries and sessions that appear to promote an explorative behavior. Based on this analysis, we develop a set of query- and user-based features that reflect the click behavior of users and explore their effectiveness in the context of a prediction task.
Slides for the keynote "Social Media and AI: Don’t forget the users" at the WWW 2017 workshop "International Workshop on Modeling Social Media: Machine Learning and AI for Modeling and Analyzing Social Media". I argue that we need to consider two things: the sources of the data we use to build good algorithms, and whether users are impacted in the way we want to impact them. The talk is based on two use cases around providing diversity (something many of us believe is good) to users:
1. Engaging through diversity: serendipity (same algorithm, different sources)
2. Engaging through diversity: awareness (effective algorithm, perception)
My goal is to say, we may have the best AI, but we may get it wrong if we forget the users. I don't have answers, but it is important that we ask the right questions in today's world.
Understanding User Satisfaction with Intelligent Assistants - Julia Kiseleva

Voice-controlled intelligent personal assistants, such as Cortana, Google Now, Siri and Alexa, are increasingly becoming a part of users’ daily lives, especially on mobile devices. They introduce a significant change in information access, not only by introducing voice control and touch gestures but also by enabling dialogues where the context is preserved. This raises the need for evaluation of their effectiveness in assisting users with their tasks. However, in order to understand which types of user interactions reflect different degrees of user satisfaction, we need explicit judgements. In this paper, we describe a user study that was designed to measure user satisfaction over a range of typical scenarios of use: controlling a device, web search, and structured search dialogue. Using this data, we study how user satisfaction varies with different usage scenarios and what signals can be used for modeling satisfaction in the different scenarios. We find that the notion of satisfaction varies across different scenarios, and show that in some scenarios (e.g. making a phone call) task completion is very important, while for others (e.g. planning a night out) the amount of effort spent is key. We also study how the nature and complexity of the task at hand affects user satisfaction, and find that preserving the conversation context is essential and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone. Finally, we shed light on the relative effectiveness and usefulness of voice-controlled intelligent agents, explaining their increasing popularity and uptake relative to the traditional query-response interaction.
Detecting Good Abandonment in Mobile Search - Julia Kiseleva

Web search queries for which there are no clicks are referred to as abandoned queries and are usually considered as leading to user dissatisfaction. However, there are many cases where a user may not click on any search engine result page (SERP) but still be satisfied. This scenario is referred to as good abandonment and presents a challenge for most approaches measuring search satisfaction, which are usually based on clicks and dwell time. The problem is exacerbated further on mobile devices, where search providers try to increase the likelihood of users being satisfied directly by the SERP. This paper proposes a solution to this problem using gesture interactions, such as reading times and touch actions, as signals for differentiating between good and bad abandonment. These signals go beyond clicks and characterize user behavior in cases where clicks are not needed to achieve satisfaction. We study different good abandonment scenarios and investigate the different elements on a SERP that may lead to good abandonment. We also present an analysis of the correlation between user gesture features and satisfaction. Finally, we use this analysis to build models to automatically identify good abandonment in mobile search, achieving an accuracy of 75%, which is significantly better than considering query and session signals alone. Our findings have implications for the study and application of user satisfaction in search systems.
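The signal logic behind separating good from bad abandonment can be illustrated with a small rule-based sketch. The thresholds, the feature names, and the answer-card flag are all invented placeholders for illustration; the paper builds statistical models rather than hand-written rules:

```python
def classify_abandonment(reading_time_s, scroll_events, has_answer_card):
    """Classify a clickless (abandoned) mobile SERP visit as 'good' or
    'bad' abandonment from gesture signals. All thresholds here are
    illustrative assumptions, not values from the paper."""
    # A direct answer on the SERP plus enough time to read it suggests
    # the user was satisfied without needing to click.
    if has_answer_card and reading_time_s >= 5.0:
        return "good"
    # A quick bounce with no touch interaction suggests dissatisfaction.
    if reading_time_s < 2.0 and scroll_events == 0:
        return "bad"
    # Sustained reading without an answer card is ambiguous; lean on
    # limited scrolling as a weak satisfaction signal.
    return "good" if reading_time_s >= 10.0 and scroll_events <= 2 else "bad"

print(classify_abandonment(8.0, 1, True))   # good
print(classify_abandonment(1.0, 0, False))  # bad
```

In the paper's setting these signals feed a learned classifier; the rules above only show why reading time and touch actions carry information that raw clicks cannot.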
Day 2 slides from a two-day workshop on UX Foundations by Meg Kurdziolek and Karen Tang. Day 2 covered research methods that can be used throughout the design process to evaluate and validate design.
UX playbook: Real world user exercises - InVision App
Users are full of surprises. And they have a way of finding confusing spots in a product even if your team meticulously planned and designed it. In this session, our very own Clark Wimberly walked us through a number of fun and challenging exercises aimed at keeping users happy.
10 Steps to Mapping Your Customer Journey - Qualtrics
Customer journey mapping is an important piece of understanding how you can provide the best customer experience possible. Learn how in 10 essential steps.
Ten tips for surveys: on questions, process, and testing your survey.
Books mentioned are listed here: http://rosenfeldmedia.com/uxzeitgeist/lists/cjforms/10-tips-for-a-better-survey-stc2011
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
This presentation goes through all phases of research - planning, data collection and analysis - and shows examples of habits you really should avoid unless you want to fake your own research.
Evaluating the search experience: from Retrieval Effectiveness to User Engagement - Mounia Lalmas-Roelleke
These are my slides for my presentation at CLEF 2015, held in Toulouse. I discuss evaluation in the context of search, and how to move towards looking at the long-term effect of the search experience. I do this through the concept of absence time. I present examples for search but also in the context of mobile advertising. My aim is to frame evaluation within user engagement.
The first part of a workshop on user experience surveys. Topics: (1) how to improve the questions in surveys and (2) how to assess UX using a survey.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Behavioral Dynamics from the SERP’s Perspective: What are Failed SERPs and How to Fix Them - Julia Kiseleva
Web search is always in a state of flux: queries, their intent, and the most relevant content are changing over time, in predictable and unpredictable ways. Modern search technology has made great strides in keeping up to pace with these changes, but there remain cases of failure where the organic search results on the search engine result page (SERP) are outdated, and no relevant result is displayed. Failing SERPs due to temporal drift are one of the greatest frustrations of web searchers, leading to search abandonment or even search engine switching. Detecting failed SERPs in a timely manner and providing access to the desired out-of-SERP results has huge potential to improve user satisfaction. Our main findings are threefold: First, we refine the conceptual model of behavioral dynamics on the web by including the SERP and defining (un)successful SERPs in terms of observable behavior. Second, we analyse typical patterns of temporal change and propose models to predict query drift beyond the current SERP, and ways to adapt the SERP to include the desired results. Third, we conduct extensive experiments on real-world search engine traffic demonstrating the viability of our approach. Our analysis of behavioral dynamics at the SERP level gives new insight into one of the primary causes of search failure: temporal query intent drift. Our overall conclusion is that the most detrimental cases in terms of (lack of) user satisfaction lead to the largest changes in information-seeking behavior, and hence to observable changes in behavior that we can exploit not only to detect failures but also to resolve them.
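One way to make "observable changes in behavior" concrete is to compare the click distribution over SERP positions for the same query in two time windows and flag the SERP when the distributions diverge. The sketch below uses Jensen-Shannon divergence with a hand-picked threshold; both the divergence measure and the threshold are assumptions for illustration, not the paper's method:

```python
import math

def kl(p, q, eps=1e-9):
    """Kullback-Leibler divergence, smoothed to avoid log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Clicks per SERP position for the same query in two windows (made-up
# numbers): interest has shifted away from the top-ranked result.
last_month = normalize([80, 10, 5, 3, 2])
this_week = normalize([20, 15, 10, 30, 25])

drift = js_divergence(last_month, this_week)
# Flag the SERP as potentially failing when the divergence exceeds a
# hand-picked threshold.
print(drift > 0.1)  # True
```

A production system would of course need volume-aware significance testing rather than a fixed threshold; the point is only that intent drift leaves a measurable footprint in click behavior.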
Where to Go on Your Next Trip? Optimizing Travel Destinations Based on User Preferences - Julia Kiseleva
Recommendation based on user preferences is a common task for e-commerce websites. New recommendation algorithms are often evaluated by offline comparison to baseline algorithms such as recommending random or the most popular items. Here, we investigate how these algorithms themselves perform and compare to the operational production system in large-scale online experiments in a real-world application. Specifically, we focus on recommending travel destinations at Booking.com, a major online travel site, to users searching for their preferred vacation activities. To build ranking models, we use multi-criteria rating data provided by previous users after their stay at a destination. We implement three methods and compare them to the current baseline in Booking.com: random, most popular, and Naive Bayes. Our general conclusion is that, in an online A/B test with live users, our Naive-Bayes-based ranker increased user engagement significantly over the current online system.
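A Naive Bayes ranker over activity preferences can be sketched as follows. The destinations, activity tags, and counts are invented for illustration (not Booking.com's data), and a uniform prior over destinations is assumed; the real system builds on richer multi-criteria ratings:

```python
from collections import Counter

# Illustrative endorsement counts: how many past visitors associated each
# destination with an activity. All numbers are made up.
endorsements = {
    "Barcelona": Counter(beach=80, nightlife=90, museums=60),
    "Reykjavik": Counter(nature=95, hiking=70, nightlife=20),
    "Amsterdam": Counter(museums=85, nightlife=75, cycling=60),
}

def naive_bayes_rank(wanted, alpha=1.0):
    """Rank destinations by P(activities | destination) with a uniform
    prior, using a multinomial likelihood with Laplace smoothing."""
    vocab = {a for counts in endorsements.values() for a in counts}
    scores = {}
    for dest, counts in endorsements.items():
        total = sum(counts.values())
        score = 1.0
        for a in wanted:
            # Counter returns 0 for unseen activities; alpha smooths it.
            score *= (counts[a] + alpha) / (total + alpha * len(vocab))
        scores[dest] = score
    return sorted(scores, key=scores.get, reverse=True)

print(naive_bayes_rank(["beach", "nightlife"]))  # Barcelona ranks first
```

Laplace smoothing matters here: without it, a single unseen activity (e.g. "beach" for Reykjavik) would zero out the whole score instead of merely penalizing it.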
Predicting Current User Intent with Contextual Markov ModelsJulia Kiseleva
Abstract—In many web information systems like e-shops and information portals predictive modeling is used to understand user intentions based on their browsing behavior. User behavior is inherently sensitive to various contexts. Identifying such relevant contexts can help to improve the prediction performance. In this work, we propose a formal approach in which the context
discovery process is defined as an optimization problem. For simplicity we assume a concrete yet generic scenario in which context is considered to be a secondary label of an instance that is either known from the available contextual attribute (e.g. user location) or can be induced from the training data (e.g. novice vs. expert user). In an ideal case, the objective function of the optimization problem has an analytical form enabling us
to design a context discovery algorithm solving the optimization problem directly. An example with Markov models, a typical approach for modeling user browsing behavior, shows that the derived analytical form of the optimization problem provides us with useful mathematical insights of the problem. Experiments with a real-world use-case show that we can discover useful contexts allowing us to significantly improve the prediction of
user intentions with contextual Markov models.
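The core idea of a contextual Markov model can be sketched with toy clickstreams: fit one transition table per context instead of a single global one. The page names, context labels ("novice"/"expert"), and sessions below are illustrative assumptions, not data from the paper:

```python
from collections import defaultdict

# Toy clickstreams, each tagged with a context label. Pages and contexts
# are made up for illustration.
sessions = [
    ("novice", ["home", "search", "help", "search", "product"]),
    ("novice", ["home", "help", "search", "product"]),
    ("expert", ["home", "product", "checkout"]),
    ("expert", ["home", "product", "product", "checkout"]),
]

def transition_counts(tagged_sessions):
    """First-order Markov transition counts over consecutive page pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for _, pages in tagged_sessions:
        for a, b in zip(pages, pages[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, page):
    nxt = counts.get(page)
    return max(nxt, key=nxt.get) if nxt else None

# A single global model vs one Markov model per discovered context.
global_model = transition_counts(sessions)
per_context = {
    c: transition_counts([s for s in sessions if s[0] == c])
    for c in {c for c, _ in sessions}
}

# From "home", experts jump straight to the product page while novices
# search or ask for help; only the contextual models can separate this.
print(predict_next(per_context["expert"], "home"))  # product
print(predict_next(per_context["novice"], "home"))
```

The paper's contribution is discovering which context partition to use by optimizing an objective over candidate splits; the sketch assumes the partition is already given.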
Talk at Twente University on 28 July 2014 - Julia Kiseleva
Predictive Web Analytics is aimed at understanding behavioural patterns of users of various web-based applications: e-commerce, ubiquitous and mobile computing, and computational advertising. Within these applications, business decisions often rely on two types of predictions: overall or per-segment demand predictions, and individualised recommendations for visitors. Visitor behaviour is inherently sensitive to the context, which can be defined as a collection of external factors. Context-awareness allows integrating external explanatory information into the learning process and adapting predictions of user behaviour accordingly. The importance of context-awareness has been recognised by researchers and practitioners in many disciplines, including recommendation systems, information retrieval, personalization, data mining, and marketing. We focus on studying ways of context discovery and its integration into predictive analytics.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 - APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
Multi-cluster Kubernetes Networking: Patterns, Projects and Guidelines - Sanjeev Rampal
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics.
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBrad Spiegel Macon GA
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
# Internet Security: Safeguarding Your Digital World
In the contemporary digital age, the internet is a cornerstone of our daily lives. It connects us to vast amounts of information, provides platforms for communication, enables commerce, and offers endless entertainment. However, with these conveniences come significant security challenges. Internet security is essential to protect our digital identities, sensitive data, and overall online experience. This comprehensive guide explores the multifaceted world of internet security, providing insights into its importance, common threats, and effective strategies to safeguard your digital world.
## Understanding Internet Security
Internet security encompasses the measures and protocols used to protect information, devices, and networks from unauthorized access, attacks, and damage. It involves a wide range of practices designed to safeguard data confidentiality, integrity, and availability. Effective internet security is crucial for individuals, businesses, and governments alike, as cyber threats continue to evolve in complexity and scale.
### Key Components of Internet Security
1. **Confidentiality**: Ensuring that information is accessible only to those authorized to access it.
2. **Integrity**: Protecting information from being altered or tampered with by unauthorized parties.
3. **Availability**: Ensuring that authorized users have reliable access to information and resources when needed.
## Common Internet Security Threats
Cyber threats are numerous and constantly evolving. Understanding these threats is the first step in protecting against them. Some of the most common internet security threats include:
### Malware
Malware, or malicious software, is designed to harm, exploit, or otherwise compromise a device, network, or service. Common types of malware include:
- **Viruses**: Programs that attach themselves to legitimate software and replicate, spreading to other programs and files.
- **Worms**: Standalone malware that replicates itself to spread to other computers.
- **Trojan Horses**: Malicious software disguised as legitimate software.
- **Ransomware**: Malware that encrypts a user's files and demands a ransom for the decryption key.
- **Spyware**: Software that secretly monitors and collects user information.
### Phishing
Phishing is a social engineering attack that aims to steal sensitive information such as usernames, passwords, and credit card details. Attackers often masquerade as trusted entities in email or other communication channels, tricking victims into providing their information.
### Man-in-the-Middle (MitM) Attacks
MitM attacks occur when an attacker intercepts and potentially alters communication between two parties without their knowledge. This can lead to the unauthorized acquisition of sensitive information.
### Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks
This 7-second Brain Wave Ritual Attracts Money To You.!nirahealhty
Discover the power of a simple 7-second brain wave ritual that can attract wealth and abundance into your life. By tapping into specific brain frequencies, this technique helps you manifest financial success effortlessly. Ready to transform your financial future? Try this powerful ritual and start attracting money today!
1.Wireless Communication System_Wireless communication is a broad term that i...JeyaPerumal1
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
1. From Queries to Dialogues: Predicting User Satisfaction with Intelligent Assistants
Julia Kiseleva, Kyle Williams, Ahmed Hassan Awadallah, Aidan C. Crook, Imed Zitouni, Tasos Anastasakos
Eindhoven University of Technology, Pennsylvania State University, Microsoft
8. From Queries to Dialogues
User's dialogue with Cortana (task: "Finding a hotel in Chicago"):
Q1: how is the weather in Chicago
Q2: how is it this weekend
Q3: find me hotels
Q4: which one of these is the cheapest
Q5: which one of these has at least 4 stars
Q6: find me directions from the Chicago airport to number one
9. From Queries to Dialogues
User's dialogue with Cortana (task: "Finding a pharmacy"):
Q1: find me a pharmacy nearby
Q2: which of these is highly rated
Q3: show more information about number 2
Q4: how long will it take me to get there
Q5: Thanks
10. From Queries to Dialogues
User: "show restaurants near me"
Cortana: "Here are ten restaurants near you"
User: "show the best ones"
Cortana: "Here are ten restaurants near you that have good reviews"
User: "show directions to the second one"
Cortana: "Getting you directions to the Mayuri Indian Cuisine"
11. Main Research Question
How can we automatically predict user satisfaction with search dialogues on intelligent assistants using click, touch, and voice interactions?
12. Single Task Search Dialogue
User: "Do I need to have a jacket tomorrow?"
Cortana: "You could probably go without one. The forecast shows …"
13. Multi-Task Search Dialogues
User: "show restaurants near me"
Cortana: "Here are ten restaurants near you"
User: "show the best ones"
Cortana: "Here are ten restaurants near you that have good reviews"
User: "show directions to the second one"
Cortana: "Getting you directions to the Mayuri Indian Cuisine"
14. How to define user satisfaction with search dialogues?
15. No Clicks ???
User: "show restaurants near me"
Cortana: "Here are ten restaurants near you"
User: "show the best ones"
Cortana: "Here are ten restaurants near you that have good reviews"
User: "show directions to the second one"
Cortana: "Getting you directions to the Mayuri Indian Cuisine"
16. SAT? SAT? SAT? Overall SAT?
User: "show restaurants near me"
Cortana: "Here are ten restaurants near you"
User: "show the best ones"
Cortana: "Here are ten restaurants near you that have good reviews"
User: "show directions to the second one"
Cortana: "Getting you directions to the Mayuri Indian Cuisine"
SAT? per task, and Overall SAT?
17. User Frustration
Dialogue with the intelligent assistant (task: "Planning a weekend"):
Q1: what's the weather like in San Francisco
Q2: what's the weather like in Mountain View
Q3: can you find me a hotel close to Mountain View
Q4: can you show me the cheapest ones
Q5: show me the third one
Q6: show me the directions from SFO to this hotel
Q7: go back to first hotel (misrecognition)
Q8: show me hotels in Mountain View (restart of the search)
Q9: show me cheap hotels in Mountain View
Q10: show me more about the third one (the user is satisfied)
19. Tracking User Interaction: Click Signals
• Number of queries in a dialogue
• Number of clicks in a dialogue
• Number of SAT clicks (> 30 sec. dwell time) in a dialogue
• Number of DSAT clicks (< 15 sec. dwell time) in a dialogue
• Time (seconds) until the first click in a dialogue
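A minimal sketch of how these click signals could be computed from an interaction log. The record layout and field names (`queries`, `clicks`, `dwell`, `time_to_click`) are illustrative assumptions, not the study's actual log format:

```python
# Sketch: deriving the click signals above from a logged dialogue.
# The record layout (query/click lists, times in seconds) is an
# assumption for illustration only.

def click_features(dialogue):
    """dialogue: {"queries": [...], "clicks": [{"dwell": s, "time_to_click": s}, ...]}"""
    clicks = dialogue["clicks"]
    return {
        "n_queries": len(dialogue["queries"]),
        "n_clicks": len(clicks),
        "n_sat_clicks": sum(1 for c in clicks if c["dwell"] > 30),   # SAT: dwell > 30 s
        "n_dsat_clicks": sum(1 for c in clicks if c["dwell"] < 15),  # DSAT: dwell < 15 s
        "time_to_first_click": min((c["time_to_click"] for c in clicks), default=None),
    }

example = {
    "queries": ["find me hotels", "which one of these is the cheapest"],
    "clicks": [{"dwell": 42.0, "time_to_click": 5.2},
               {"dwell": 8.0, "time_to_click": 19.7}],
}
print(click_features(example))
```

Clicks with dwell times between 15 and 30 seconds count as neither SAT nor DSAT under these thresholds.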
22. Tracking User Interaction
Attributed viewing time weights each exposure by the fraction of the viewport an element occupies: e.g. 3 seconds at 33% of the viewport, 6 seconds at 66%, and 2 seconds at 20% attribute roughly 1 s + 4 s + 0.4 s = 5.4 s.
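The viewport-attribution idea can be sketched directly; the numbers mirror the slide's example, and the `(duration, visible_fraction)` pair representation is an assumption for illustration:

```python
# Sketch of viewport time attribution: time attributed to an on-screen
# element is its exposure duration weighted by the fraction of the
# viewport it occupies during that exposure.

def attributed_time(exposures):
    """exposures: list of (duration_seconds, visible_fraction) pairs."""
    return sum(duration * fraction for duration, fraction in exposures)

# 3 s at 33% + 6 s at 66% + 2 s at 20%  ->  ~1 s + ~4 s + 0.4 s
t = attributed_time([(3, 0.33), (6, 0.66), (2, 0.20)])
print(round(t, 2))  # 5.35 with these exact fractions; the slide rounds to 5.4
```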
23. Tracking User Interaction: Touch Signals
• Number of swipes
• Number of up-swipes
• Number of down-swipes
• Total distance swiped (pixels)
• Number of swipes normalized by time
• Total distance divided by number of swipes
• Total swiped distance divided by time
• Number of swipe direction changes
• SERP answer duration (seconds) shown on screen (even partially)
• Fraction of visible pixels belonging to a SERP answer
• Attributed time (seconds) viewing a particular element (answer) on the SERP
• Attributed time (seconds) per unit height (pixels) of a particular element on the SERP
• Attributed time (milliseconds) per unit area (square pixels) of a particular element on the SERP
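A sketch of how the swipe-based features above could be extracted from logged touch events. The event fields (`direction`, `distance`) and the session-length parameter are assumptions for illustration, not the deck's actual instrumentation:

```python
# Sketch: computing the swipe features listed above from a sequence of
# swipe events. Event fields are assumed for illustration.

def swipe_features(swipes, session_seconds):
    """swipes: list of {"direction": "up" | "down", "distance": pixels}"""
    n = len(swipes)
    total_dist = sum(s["distance"] for s in swipes)
    # A direction change is any pair of consecutive swipes that disagree.
    direction_changes = sum(
        1 for a, b in zip(swipes, swipes[1:]) if a["direction"] != b["direction"]
    )
    return {
        "n_swipes": n,
        "n_up": sum(1 for s in swipes if s["direction"] == "up"),
        "n_down": sum(1 for s in swipes if s["direction"] == "down"),
        "total_distance_px": total_dist,
        "swipes_per_second": n / session_seconds,
        "distance_per_swipe": total_dist / n if n else 0.0,
        "distance_per_second": total_dist / session_seconds,
        "direction_changes": direction_changes,
    }

events = [
    {"direction": "down", "distance": 300},
    {"direction": "down", "distance": 450},
    {"direction": "up", "distance": 200},
]
print(swipe_features(events, session_seconds=10.0))
```

Given the analysis finding that more swipe actions correlate with lower satisfaction, features such as `n_swipes` and `direction_changes` are natural candidates for the predictive model.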
25. User Study Participants
• 60 participants, aged 25.53 ± 5.42 years
• Gender: 75% male, 25% female
• Language: 55% English, 45% other
• Education: 82% Computer Science, 8% Electrical Engineering, 2% Mathematics, 8% other
26. You are planning a vacation. Pick a place. Check if the weather is good enough for the period you are planning the vacation. Find a hotel that suits you. Find the driving directions to this place.
28. Questionnaire
• Were you able to complete the task?
  o Yes/No
• How satisfied are you with your experience in this task?
  o If the task has sub-tasks, participants indicate graded satisfaction per sub-task, e.g.:
  o a. How satisfied are you with your experience in finding a hotel?
  o b. How satisfied are you with your experience in finding directions?
• How well did Cortana recognize what you said?
  o 5-point Likert scale
• Did you put in a lot of effort to complete the task?
  o 5-point Likert scale
29. Questionnaire
8 tasks: 1 simple, 4 with 2 subtasks, 3 with 3 subtasks
~30 minutes
30. Search Dialogue Dataset
• Total number of queries: 2,040
• Number of unique queries: 1,969
• Average query length: 7.07
31. Search Dialogue Dataset
• The simple task generated 130 queries
• Tasks with 2 context switches generated 685 queries
• Tasks with 3 context switches generated 1,355 queries
32. How can we predict user satisfaction with search dialogues using interaction signals?
33. User's dialogue about the 'stomach ache' (answered on a general web SERP):
Q1: what do you have medicine for the stomach ache
Q2: stomach ache medicine over the counter
34. User's dialogue about the 'stomach ache':
Q1: what do you have medicine for the stomach ache
Q2: stomach ache medicine over the counter
Q3: show me the nearest pharmacy
Q4: more information on the second one
(Q1–Q2 answered on a general web SERP; Q3–Q4 on a structured SERP)
40. Quality of Interaction Model
Method               | Accuracy (%)     | Average F1 (%)
Baseline             | 70.62            | 61.38
Interaction Model 1  | 78.78* (+11.55)  | 83.59* (+35.90)
Interaction Model 2  | 80.21* (+13.58)  | 83.31* (+35.44)
Interaction Model 3  | 80.81* (+14.43)  | 79.08* (+28.83)
* Statistically significant improvement (p < 0.05)
41. Which interaction signals have the highest impact on predicting user satisfaction with search dialogues?
42. Predicting User Satisfaction
• F1: Since the SERP for a query is ordered by system-estimated relevance, additional exploration is unlikely to indicate user satisfaction; rather, it suggests that the best-provided results (i.e. the top of the SERP) are insufficient to address the user intent
43. Predicting User Satisfaction
• F2: Conversely to F1, when users find content that satisfies their intent, they are less likely to scroll and they dwell for an extended period on the top viewport
44. Predicting User Satisfaction
• F3: When users are involved in a complex task, they are dissatisfied when redirected to a general web SERP. Unlike F2, the absence of scrolling on this landing page is an indication of dissatisfaction
45. Conclusion
How can we define user satisfaction with search dialogues?
• We define user satisfaction with a search dialogue in a generalized form: as an aggregation of satisfaction with all of the dialogue's tasks, rather than as satisfaction with each of the dialogue's queries separately
How can we predict user satisfaction with search dialogues using interaction signals?
• Features derived from touch, voice, and acoustic interactions add a significant gain in accuracy over the baseline
Which interaction signals have the highest impact on predicting user satisfaction with search dialogues?
• Our analysis showed a strong negative correlation between user satisfaction and swipe actions
46. Thank you! Questions?
Editor's Notes
We utilize acoustic features to characterize the voice interaction happening in search dialogues. More specifically, we use the phonetic similarity between consecutive requests to identify patterns of repetition. The Metaphone representation [39] is a way of indexing words by their pronunciation, allowing us to represent words by how they are pronounced as opposed to how they are written.
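To illustrate the repetition-detection idea, here is a deliberately simplified sketch. The study uses Metaphone; `crude_phonetic_key` below is a hypothetical stand-in (drop vowels, collapse repeated consonants) that only demonstrates the mechanism of matching consecutive requests by pronunciation-like keys:

```python
# Sketch: flagging consecutive voice requests whose phonetic keys match,
# a pattern that can indicate a misrecognized query being repeated.
# NOTE: crude_phonetic_key is NOT Metaphone; it is a toy stand-in.

def crude_phonetic_key(text):
    """Keep consonant letters only and collapse adjacent repeats."""
    consonants = [ch for ch in text.lower() if ch.isalpha() and ch not in "aeiou"]
    key = []
    for ch in consonants:
        if not key or key[-1] != ch:
            key.append(ch)
    return "".join(key)

def looks_like_repetition(prev_query, next_query):
    return crude_phonetic_key(prev_query) == crude_phonetic_key(next_query)

# A misrecognized word often yields the same crude key as the intended one.
print(looks_like_repetition("show me hotels", "show me hotells"))  # True
```

A production system would use a real phonetic encoding (e.g. Metaphone, as the notes state) rather than this toy key, since vowel-only and ordering differences matter for real speech misrecognitions.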