The document discusses interfaces for semantic web applications and personalization. It covers front-ends for semantics, personalization by adapting to users, examples of good and bad applications, and the importance of continuous evaluation. Throughout the document, examples are given of semantic search, recommendation systems, social networking, and adaptive interfaces. The goal is to seamlessly combine content semantics with user context and preferences.
AI for IA's: Machine Learning Demystified at IA Summit 2017 - IAS17 (Carol Smith)
What is machine learning? Is IA relevant in the age of AI? How can I take advantage of cognitive computing? Learn the basics of these concepts and their implications for your work in this presentation. Carol Smith provides examples of machine learning in use and discusses the challenges inherent in AI.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
The second day of lectures from Aalto University School of Economics’ ITP summer programme’s Strategy and Experience. https://itp.hse.fi/
Contents: Empathic design, personas, design research and methods.
Using user personalized ontological profile to infer semantic knowledge for p... (Joao Luis Tavares)
The document proposes a new method for constructing personalized ontological profiles (POP) for users based on their interests and views. It introduces six types of semantic relations between concepts and uses these relations to group related concepts in a user's profile into either general groups or specific groups. It then describes a method for computing the strength of each group based on how semantically related the concepts in the group are, and using this strength to enhance the importance of concepts within strong groups. The approach aims to better model each user's perspective and infer additional interests from their stated preferences.
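The grouping-and-boosting idea described above can be sketched in a few lines. This is a hedged illustration only: the function names, the use of average pairwise relatedness as the group-strength measure, and the threshold/boost values are assumptions for demonstration, not the paper's actual formulas.

```python
# Illustrative sketch: strong groups of semantically related concepts
# raise the weight of their member concepts in a user's profile.
from itertools import combinations

def group_strength(group, relatedness):
    """Average pairwise semantic relatedness of the concepts in a group."""
    pairs = list(combinations(sorted(group), 2))
    if not pairs:
        return 0.0
    return sum(relatedness.get(p, 0.0) for p in pairs) / len(pairs)

def boost_weights(profile, groups, relatedness, threshold=0.5, factor=1.5):
    """Raise the weight of every concept that belongs to a strong group."""
    boosted = dict(profile)
    for group in groups:
        if group_strength(group, relatedness) >= threshold:
            for concept in group:
                if concept in boosted:
                    boosted[concept] *= factor
    return boosted

# Toy profile: "jazz" and "blues" form a strong group, "golf" stands alone.
profile = {"jazz": 0.4, "blues": 0.3, "golf": 0.2}
groups = [{"jazz", "blues"}, {"golf"}]
relatedness = {("blues", "jazz"): 0.8}

print(boost_weights(profile, groups, relatedness))
```

Only the concepts in the strong group are boosted; the singleton group has no pairs, so its strength is zero and "golf" keeps its original weight.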
This document discusses collaborative systems and social media technologies from a course taught by Professor Paolo Nesi at the University of Florence. It begins by defining collaborative systems and distinguishing social networks as a type of collaborative system. It then discusses Forrester's trends in social networking and motivations for using social networks. The document outlines several key aspects of social networks, including user/content profiles, measures to analyze social networks, and objectives for both businesses and users in social networks.
The document summarizes the staff, doctoral students, resources, and laboratories of the HCI Group at Tallinn University. It lists the researchers, professors, and analysts that make up the staff. It also lists the doctoral students that have been or are currently affiliated with the group. Finally, it describes two laboratories managed by the group - the Interaction Design Laboratory and the User Experience Laboratory, including their purposes and example projects.
ESWC SS 2012 - Tuesday Tutorial Elena Simperl: Creating and Using Ontologies (eswcsummerschool)
Here are the steps I would suggest for aligning the ontologies:
1. Representatives present their ontology and explain key concepts and relationships.
2. Editor records all concepts and relationships on a whiteboard in a concept map format without evaluation.
3. Representatives discuss each concept and relationship to reach agreement on meaning and resolve any conflicts or ambiguities.
4. Editor incorporates agreed upon concepts and relationships into a single ontology, resolving any structural issues.
5. Representatives review the aligned ontology and provide feedback.
6. Editor incorporates final changes to produce the aligned ontology for use by all groups.
The goal is to understand each perspective, identify areas of overlap and conflict, and work together iteratively.
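The mechanical parts of the process above (steps 2 and 4) amount to pooling every group's concepts and relations and then keeping only what was agreed. The sketch below assumes a deliberately simple data layout (sets of concept names and (subject, relation, object) triples); real ontology alignment involves far richer structure.

```python
# Hedged sketch of steps 2 and 4 of the alignment process.
def pool_proposals(ontologies):
    """Step 2: record every concept and relation, without evaluation."""
    concepts, relations = set(), set()
    for onto in ontologies:
        concepts |= onto["concepts"]
        relations |= onto["relations"]
    return concepts, relations

def merge(ontologies, agreed_relations):
    """Step 4: keep only the relations the representatives agreed on."""
    concepts, relations = pool_proposals(ontologies)
    return {"concepts": concepts,
            "relations": {r for r in relations if r in agreed_relations}}

group_a = {"concepts": {"Person", "Student"},
           "relations": {("Student", "is_a", "Person")}}
group_b = {"concepts": {"Person", "Course"},
           "relations": {("Person", "takes", "Course")}}

aligned = merge([group_a, group_b],
                agreed_relations={("Student", "is_a", "Person"),
                                  ("Person", "takes", "Course")})
print(sorted(aligned["concepts"]))
```

The human steps (discussion, conflict resolution, review) have no code analogue; the point is only that the editor's bookkeeping reduces to set union followed by filtering against the agreed list.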
● Comparative Analysis of Scheduling Algorithms Performance in a Long Term Evolution Network
● Efficient Authentication Algorithm for Secure Remote Access in Wireless Sensor Networks
● Emoji Essence: Detecting User Emotional Response on Visual Centre Field with Emoticons
● Enhanced Information Systems Success Model for Patient Information Assurance
● Natural Language Processing and Its Challenges on Omotic Language Group of Ethiopia
● Quick Quantum Circuit Simulation
The technical report presents two social recommendation methods that incorporate semantics from tags: a user-based semantic collaborative filtering and an item-based semantic collaborative filtering. The methods aim to find semantically similar users/items and recommend relevant social items. Experimental results show the methods improve recommendation quality and address issues like polysemy, synonymy, and semantic interoperability compared to methods without semantics.
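The user-based variant described above can be sketched briefly: compare users by the tags they apply, then recommend items held by the most similar users. The function names, the cosine-over-tag-counts similarity, and the toy data are illustrative assumptions, not the report's actual method.

```python
# Hedged sketch of user-based collaborative filtering over tags.
import math

def cosine(a, b):
    """Cosine similarity between two tag-count dictionaries."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, user_tags, user_items, k=2):
    """Rank items unseen by `user` via the tag similarity of their holders."""
    scores = {}
    sims = sorted(((cosine(user_tags[user], user_tags[u]), u)
                   for u in user_tags if u != user), reverse=True)[:k]
    for sim, u in sims:
        if sim <= 0:
            continue  # ignore users with no tag overlap
        for item in user_items[u] - user_items[user]:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

user_tags = {"ann": {"jazz": 3, "vinyl": 1},
             "bob": {"jazz": 2, "vinyl": 2},
             "cleo": {"golf": 4}}
user_items = {"ann": {"kind_of_blue"},
              "bob": {"kind_of_blue", "a_love_supreme"},
              "cleo": {"golf_digest"}}

print(recommend("ann", user_tags, user_items))
```

Note that raw string tags still suffer from the polysemy and synonymy problems the report mentions; the semantic variants would replace the tag vectors with concept-level representations before computing similarity.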
User Engagement: A Cross-Disciplinary Framework (Nim Dvir)
(Nim Dvir, Department of Information Science, State University of New York at Albany, NY, USA)
In recent years, the term “engagement” has been increasingly used in broader academic literature as a possible framework to measure and explain human interactions with technology. However, current research is unstructured and spread across various disciplines, leading to wide-ranging, and sometimes disparate, perspectives, vocabularies and measurement methodologies. As a result, the theoretical meaning and foundations underlying “engagement” remain unclear in the literature to date. To address this issue, I conducted a comprehensive literature review to synthesize knowledge on engagement from several academic fields: Informatics, Information Systems, Communications, Marketing, and Education. Specifically, my review will focus on textual and linguistic information, and how the presentation, framing and organization of such information influence user experience (UX) and online behavior. Using this interdisciplinary approach, I believe that it is possible to identify a basic process of engagement happening in many different contexts. This may lead to progress towards understanding and assessing user engagement by suggesting unified, generalized and overarching models and principles that can be applied across domains and applications.
Keywords: Information Organization, User Experience, Engagement, Content Strategy
The document discusses methods for evaluating ontologies. It proposes developing objective metrics to evaluate ontologies based on three criteria: correctness, completeness, and utility. Correctness evaluates how well an ontology expresses its design objectives. Completeness evaluates how fully an ontology captures required semantic components. Utility combines correctness and completeness and evaluates an ontology's usefulness for its intended use case. Examples are provided to illustrate evaluating ontologies based on the proposed metrics. The goal is to develop standardized evaluation methods to facilitate ontology development and reuse across different domains.
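The three criteria lend themselves to simple coverage-style scores. The formulas below (set-overlap ratios and a weighted blend for utility) are an assumption for illustration; the document argues for objective metrics but the exact definitions would depend on the ontology and its use case.

```python
# Illustrative sketch of the three proposed evaluation criteria.
def correctness(stated, correct):
    """Fraction of the ontology's stated axioms that match its design intent."""
    return len(stated & correct) / len(stated) if stated else 0.0

def completeness(stated, required):
    """Fraction of the required semantic components the ontology captures."""
    return len(stated & required) / len(required) if required else 1.0

def utility(stated, correct, required, w=0.5):
    """Blend correctness and completeness into one use-case score."""
    return (w * correctness(stated, correct)
            + (1 - w) * completeness(stated, required))

# Toy ontology: one wrong axiom, one required axiom missing.
stated = {"Cat is_a Animal", "Dog is_a Plant", "Dog is_a Animal"}
correct = {"Cat is_a Animal", "Dog is_a Animal"}
required = {"Cat is_a Animal", "Dog is_a Animal", "Bird is_a Animal"}

print(utility(stated, correct, required))
```

Here both correctness and completeness come out at 2/3, so the blended utility is 2/3 as well; weighting `w` lets a use case favour precision over coverage or vice versa.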
The document discusses the role of ontologies in linked data. It notes that while semantic web ontologies have been widely applied, linked data has grown rapidly using lightweight or no ontologies. However, ontologies could still provide benefits to linked data by helping integrate and reason over heterogeneous linked data sources. Open issues remain around how to best reuse and modularize ontologies for different linked data applications and domains.
This document discusses the state-of-the-art of Internet of Things (IoT) ontologies. It begins by defining ontology and describing important design criteria for ontologies including clarity, coherence, extendibility, and minimal encoding bias. It then discusses the challenges of IoT, including large scale networks, deep heterogeneity, and unknown topology. Several existing IoT ontologies are described, including SWAMO, MMI Device Ontology, and SSN. The document concludes that while no single global IoT ontology currently exists, ontologies are needed to address the semantic interoperability challenges of heterogeneous IoT devices and domains.
A Comparative Study of Recent Ontology Visualization Tools with a Case of Dia... (IJORCS)
Ontology is a conceptualization of a domain into machine-readable format. Ontologies are becoming increasingly popular modelling schemas for knowledge management services and applications. Focus on developing tools to graphically visualise ontologies is rising to aid their assessment and analysis. Graph visualisation helps to browse and comprehend the structure of ontologies. A number of ontology visualizations exist that have been embedded in ontology management tools. The primary goal of this paper is to analyze recently implemented ontology visualization tools and their contributions to the enrichment of users’ cognitive support. This work also presents the preliminary results of an evaluation of three visualization tools to determine the suitability of each method for end-user applications where ontologies are used as browsing aids, with a case of diabetes data.
John Cook, LTRI, London Metropolitan University
Norbert Pachler, Institute of Education, University of London
SoMobNet International Roundtable on “Social Mobile Networking for Informal Learning” Institute of Education, Nov 21 2011: http://cloudworks.ac.uk/cloudscape/view/2363
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document discusses using personalized ontologies to improve web information gathering by representing user profiles. It proposes a model that constructs personalized ontologies by adopting user feedback from a world knowledge base. The model also uses users' local instance repositories to discover background knowledge and populate the ontologies. The proposed ontology model is evaluated against benchmark models through experiments using a large standard dataset.
Research Inventy : International Journal of Engineering and Science (researchinventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online and in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published through a rapid process within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
Multimodal and Affective Human Computer Interaction - Abhinav Sharma
This document discusses human-computer interaction and related topics such as multimodal and affective HCI. It begins with an introduction to the history and development of HCI. It then discusses more recent developments like touch interfaces, voice assistants, and consistent cross-device experiences. Two areas of interest in HCI are identified: multimodal interaction, using multiple modes like voice and touch simultaneously, and affective HCI, which aims to understand human emotions during interaction. Several research papers are summarized that explore topics like the definitions of multimodal HCI, the challenges and opportunities it presents, and efforts in the field of automatic emotion recognition in HCI. Overall issues discussed include how to design more natural and seamless multimodal experiences.
The document discusses requirements engineering (RE) and how language, knowledge, and mirror neurons impact the RE process. RE involves elicitation and analysis activities between customers and software vendors. It is a social process that requires understanding stakeholders' perspectives which can be hindered by differences in language and domain knowledge. Mirror neurons activate when observing and understanding behaviors and help with social cognition and empathy. For effective RE, organizations need methods to expose participants to different domains early to overcome knowledge and language barriers, clearly define business objects, and recognize the social nature of RE where technical tools enable human understanding.
Ux research to go ux day_sf 2013_userlens (User Lens)
UX Research To Go discusses mobile UX research methods that product teams can use anytime, anywhere. It introduces the idea of mobile researchers accessing users in their natural contexts using methods like in-field ethnography through home visits and intercept surveys using mobile tools. The goal is to understand user behaviors, motivations, and gather feedback on prototypes to test usability before product launch. The document provides an overview of different mobile research methods that can be applied during idea generation, prototyping, and product development stages.
The document provides an overview of social networks. It defines social networks and user generated content. It classifies social networks into content-based networks like YouTube and Flickr, user-based networks like Facebook and Orkut, and other network types. It discusses measures used to analyze social networks, including relevance of users based on connections, activities, and centrality measures. It provides examples of social network analysis metrics and measures of network structure.
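One of the simplest centrality measures mentioned above is degree centrality: a user's number of direct connections, normalised by the maximum possible. The sketch below is a minimal pure-Python illustration with an invented toy friendship graph; libraries such as NetworkX provide this and many richer measures.

```python
# Degree centrality for an undirected graph stored as adjacency sets.
def degree_centrality(graph):
    """Map each node to degree / (n - 1), the normalised connection count."""
    n = len(graph)
    if n < 2:
        return {node: 0.0 for node in graph}
    return {node: len(neigh) / (n - 1) for node, neigh in graph.items()}

# Toy network: "ann" is connected to everyone, the others only to "ann".
friends = {"ann":  {"bob", "cleo", "dee"},
           "bob":  {"ann"},
           "cleo": {"ann"},
           "dee":  {"ann"}}

print(degree_centrality(friends))
```

The hub user scores 1.0 (connected to every other node) while the peripheral users score 1/3, which is exactly the kind of "relevance of users based on connections" comparison the analysis measures support.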
Theo Mandel - "Designing Object-Oriented User Experiences" IUE2013 Conference (Theo Mandel, PhD)
Theo Mandel, Ph.D. was invited to give this presentation at the IUE 2013 conference, Phoenix, AZ on April 3, 2013.
Object-oriented design is a critical skill for today's user experience designers.
"Designing Object-Oriented User Experiences" is a new presentation based on 20 years of research and experience in object-oriented user interface design. Mandel was part of the IBM Common User Access (CUA) team that designed the object-oriented OS/2 operating system interface. The IBM team wrote and published the first industry object-oriented design guide, titled "Object-Oriented Interface Design: IBM Common User Access Guidelines" (Que, 1992).
OOUIs are described in Mandel's well-known book, "The Elements of User Interface Design" (John Wiley & Sons, 1997). The presentation describes the history of OOUIs, what it isn't, what it is, design models, and the OO UX process. Resources are provided.
For more information, contact:
Theo Mandel, Ph.D.
theo (at) theomandel.com
www.theomandel.com
On the perception of software quality requirements during the project lifecycle (Neil Ernst)
The document summarizes research into understanding software quality requirements throughout a project's lifecycle by analyzing data from eight open source Gnome projects. It aims to test Lehman's law that quality appears to decline over time, examine if high-level quality requirements are treated equally across projects, and determine if the methodology is viable. The researchers used word lists and taxonomies to identify quality-related concepts in project artifacts and analyzed occurrences over time to address the research questions. The results provided limited support for Lehman's law and showed quality requirement treatment varies between projects. Threats to the approach's validity and future work were also discussed.
Engelman.2011.exploring interaction modes for image retrieval (mrgazer)
This document discusses exploring different interaction modes for image retrieval. It describes developing a framework that allows multimodal interaction using techniques like eye tracking, voice recognition, and multi-touch. An experiment was conducted to compare the usability of different interaction methods for query by example image retrieval. Nine participants used four methods - anchor, gaze, mouse, and touch - to select regions in images. Metrics like accuracy, precision and time were measured. Preliminary results showed touch interaction had the most consistent performance and shortest completion times.
This document discusses varieties of self-awareness and their uses in natural and artificial systems. It proposes a conceptual framework for metacognition and natural cognition. The document contains slides for presentations on this topic, including:
- Discussing how to analyze requirements by examining natural and artificial systems to understand design discontinuities.
- Explaining how environments can have agent-relative structure that produces varied information processing demands.
- Outlining a conceptual framework that includes reactive and deliberative architectures in natural systems, with different layers providing varieties of self-awareness.
CATS4ML Data Challenge: Crowdsourcing Adverse Test Sets for Machine Learning (Lora Aroyo)
The document introduces CATS4ML, a crowdsourcing challenge to discover blindspots in machine learning models by having participants label images in the Open Images Dataset that are incorrectly labeled by AI. The goal is to crowdsource adverse test sets that can capture biases and improve evaluation of AI. The challenge runs through April 2021 and invites individuals and teams to discover interesting mislabeled images and contribute them for review and inclusion in the test sets. Winning contributions will be promoted at the next CrowdCamp conference.
Similar to ESWC2011 Summer School: Front-end to the Semantic Web
- Discussing how to analyze requirements by examining natural and artificial systems to understand design discontinuities.
- Explaining how environments can have agent-relative structure that produces varied information processing demands.
- Outlining a conceptual framework that includes reactive and deliberative architectures in natural systems, with different layers providing varieties of self-awareness.
Similar to ESWC2011 Summer School: Front-end to the Semantic Web (20)
CATS4ML Data Challenge: Crowdsourcing Adverse Test Sets for Machine LearningLora Aroyo
The document introduces CATS4ML, a crowdsourcing challenge to discover blindspots in machine learning models by having participants label images in the Open Images Dataset that are incorrectly labeled by AI. The goal is to crowdsource adverse test sets that can capture biases and improve evaluation of AI. The challenge runs through April 2021 and invites individuals and teams to discover interesting mislabeled images and contribute them for review and inclusion in the test sets. Winning contributions will be promoted at the next CrowdCamp conference.
Harnessing Human Semantics at Scale (updated)Lora Aroyo
The document appears to be a series of tweets and posts by Lora Aroyo discussing data science and crowdsourcing techniques. Some key points discussed include harnessing human semantics at scale through crowdsourcing and nichesourcing, measuring quality and reproducibility of crowdsourced results, and experimenting with different task designs and payment models to assess their impact. Specific examples mentioned include using crowdsourcing to add detailed annotations to museum collections and to find "blindspots" in AI models through a data challenge.
Data excellence: Better data for better AILora Aroyo
The document discusses the importance of data quality and a data lifecycle approach for artificial intelligence. Some key points made include:
- A data lifecycle is needed to guide best practices for data research and development, similar to how a software lifecycle guides software engineering.
- Data quality must be addressed through practices and standards to help avoid unintended AI behaviors that can result from low quality data.
- Disagreement in annotation tasks can provide valuable signals about ambiguity and diversity rather than just being considered noise.
- Achieving high quality, reliable data requires consideration of aspects like validity, fidelity, reproducibility and maintaining data over time - an approach toward "data excellence".
This document summarizes the CHIP project, which aims to use semantic metadata about cultural heritage objects to improve personalized access and recommendations for museum visitors. The CHIP approach involves making metadata and vocabularies available as RDF/OWL, aligning and enriching the data, and using it to build a combined user model for generating virtual and physical museum tours. Experiments show semantic relations can enhance content-based recommendations for novices and experts. Follow-up projects include Agora, deploying the techniques at the Rijksmuseum in Amsterdam.
The Rijksmuseum Collection as Linked DataLora Aroyo
Presentation at ISWC2018: http://iswc2018.semanticweb.org/sessions/the-rijksmuseum-collection-as-linked-data/ of our paper published originally in the Semantic Web Journal: http://www.semantic-web-journal.net/content/rijksmuseum-collection-linked-data-2
Many museums are currently providing online access to their collections. The state of the art research in the last decade shows that it is beneficial for institutions to provide their datasets as Linked Data in order to achieve easy cross-referencing, interlinking and integration. In this paper, we present the Rijksmuseum linked dataset (accessible at http://datahub.io/dataset/rijksmuseum), along with collection and vocabulary statistics, as well as lessons learned from the process of converting the collection to Linked Data. The version of March 2016 contains over 350,000 objects, including detailed descriptions and high-quality images released under a public domain license.
Keynote at International Conference of Art Libraries 2018 @RijksmuseumLora Aroyo
Lora Aroyo presents on data science for smart cultural heritage. Some key points:
- Cultural heritage organizations are traditionally seen as inventories but aim to engage people.
- Bringing collections online increased access but interpretation was still needed for engagement.
- Data should be at the center of processes to evolve with users. There is a spectrum of truth, not just one view.
FAIRview: Responsible Video Summarization @NYCML'18Lora Aroyo
Presentation at the NYC Media Lab (NYCML2018). There is a growing demand for news videos online, with more consumers preferring to watch the news than read or listen to it. On the publisher side, there is a growing effort to use video summarization technology in order to create easy-to-consume previews (trailers) for different types of broadcast programs. How can we measure the quality of video summaries and their potential to misinform? This workshop will inform participants about automatic video summarization algorithms and how to produce more “representative” video summaries. The research presented is from the FAIRview project and is supported by the Digital News Innovation Fund (DNI Fund), which is part of the Google News Initiative.
StorySourcing: Telling Stories with Humans & MachinesLora Aroyo
This document discusses Lora Aroyo's work on using events and narratives to enhance access to cultural heritage collections. It describes early projects that linked cultural objects to events and entities to provide more context and engagement for online users. This led to work modeling historical events and extracting event properties and relationships to generate "proto-narratives". Later projects like DIVE and DIVE+ developed event-centric exploratory search tools and media suites. More recent efforts focus on crowdsourcing event tagging and curating to further engage audiences and remix archival stories. A key challenge discussed is the lack of standardized event vocabularies across cultural heritage communities.
Digital Humanities Benelux 2017: Keynote Lora AroyoLora Aroyo
This document discusses harnessing human semantics at scale through crowdsourcing and nichesourcing. It addresses making crowdsourcing efforts measurable, reproducible, engaging and sustainable. Some key points discussed are identifying crowdsourcing goals, assessing the impact of task and result designs, measuring quality and progress over time, and running continuous campaigns to reproduce and sustain results at scale.
DH Benelux 2017 Panel: A Pragmatic Approach to Understanding and Utilising Ev...Lora Aroyo
Lora Aroyo, Chiel van den Akker, Marnix van Berchum, Lodewijk
Petram, Gerard Kuys, Tommaso Caselli, Jacco van Ossenbruggen, Victor de Boer, Sabrina Sauer, Berber Hagedoorn
Crowdsourcing ambiguity aware ground truth - collective intelligence 2017Lora Aroyo
The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods. Crowdsourcing-based approaches are gaining popularity in the attempt to solve the issues related to the volume of data and lack of annotators. Typically these practices use inter-annotator agreement as a measure of quality. However, this assumption often creates issues in practice. Previous experiments we performed found that inter-annotator disagreement is usually never captured, either because the number of annotators is too small to capture the full diversity of opinion, or because the crowd data is aggregated with metrics that enforce consensus, such as majority vote. These practices create artificial data that is neither general nor reflects the ambiguity inherent in the data.
To address these issues, we proposed the method for crowdsourcing ground truth by harnessing inter-annotator disagreement. We present an alternative approach for crowdsourcing ground truth data that, instead of enforcing an agreement between annotators, captures the ambiguity inherent in semantic annotation through the use of disagreement-aware metrics for aggregating crowdsourcing responses. Based on this principle, we have implemented the CrowdTruth framework for machine-human computation, that first introduced the disagreement-aware metrics and built a pipeline to process crowdsourcing data with these metrics.
In this paper, we apply the CrowdTruth methodology to collect data over a set of diverse tasks: medical relation extraction, Twitter event identification, news event extraction and sound interpretation. We prove that capturing disagreement is essential for acquiring a high-quality ground truth. We achieve this by comparing the quality of the data aggregated with CrowdTruth metrics with a majority vote, a method which enforces consensus among annotators. By applying our analysis over a set of diverse tasks we show that, even though ambiguity manifests differently depending on the task, our theory of inter-annotator disagreement as a property of ambiguity is generalizable.
My ESWC 2017 keynote: Disrupting the Semantic Comfort ZoneLora Aroyo
Ambiguity in interpreting signs is not a new idea, yet the vast majority of research in machine interpretation of signals such as speech, language, images, video, audio, etc., tend to ignore ambiguity. This is evidenced by the fact that metrics for quality of machine understanding rely on a ground truth, in which each instance (a sentence, a photo, a sound clip, etc) is assigned a discrete label, or set of labels, and the machine’s prediction for that instance is compared to the label to determine if it is correct. This determination yields the familiar precision, recall, accuracy, and f-measure metrics, but clearly presupposes that this determination can be made. CrowdTruth is a form of collective intelligence based on a vector representation that accommodates diverse interpretation perspectives and encourages human annotators to disagree with each other, in order to expose latent elements such as ambiguity and worker quality. In other words, CrowdTruth assumes that when annotators disagree on how to label an example, it is because the example is ambiguous, the worker isn’t doing the right thing, or the task itself is not clear. In previous work on CrowdTruth, the focus was on how the disagreement signals from low quality workers and from unclear tasks can be isolated. Recently, we observed that disagreement can also signal ambiguity. The basic hypothesis is that, if workers disagree on the correct label for an example, then it will be more difficult for a machine to classify that example. The elaborate data analysis to determine if the source of the disagreement is ambiguity supports our intuition that low clarity signals ambiguity, while high clarity sentences quite obviously express one or more of the target relations. In this talk I will share the experiences and lessons learned on the path to understanding diversity in human interpretation and the ways to capture it as ground truth to enable machines to deal with such diversity.
Data Science with Human in the Loop @Faculty of Science #Leiden UniversityLora Aroyo
Software systems are becoming ever more intelligent and more useful, but the way we interact with these machines too often reveals that they don’t actually understand people. Knowledge Representation and Semantic Web focus on the scientific challenges involved in providing human knowledge in machine-readable form. However, we observe that various types of human knowledge cannot yet be captured by machines, especially when dealing with wide ranges of real-world tasks and contexts. The key scientific challenge is to provide an approach to capturing human knowledge in a way that is scalable and adequate to real-world needs. Human Computation has begun to scientifically study how human intelligence at scale can be used to methodologically improve machine-based knowledge and data management. My research is focusing on understanding human computation for improving how machine-based systems can acquire, capture and harness human knowledge and thus become even more intelligent. In this talk I will show how the CrowdTruth framework (http://crowdtruth.org) facilitates data collection, processing and analytics of human computation knowledge.
Some project links:
- http://controcurator.org/
- http://crowdtruth.org/
- http://diveproject.beeldengeluid.nl/
- http://vu-amsterdam-web-media-group.github.io/linkflows/
Europeana GA 2016: Harnessing Crowds, Niches & Professionals in the Digital AgeLora Aroyo
The document discusses harnessing crowds, niches, and professionals in the digital age. The key points are:
- Software is becoming less important as data takes center stage; cultural institutions must know their data and crowds.
- Different crowds have different expertise and abilities; nichesourcing can access specialized knowledge.
- Crowdsourcing initiatives should be part of an overall strategy and integrated into existing systems.
- Novel interactions and user-driven augmentations can empower users and align the digital and physical.
"Video Killed the Radio Star": From MTV to SnapchatLora Aroyo
The document discusses bridging the gap between people and the massive amount of online multimedia content. It proposes decomposing videos and images into smaller fragments and building a media graph to link these fragments based on semantic relationships. Both machine learning and crowdsourcing are used to analyze and enrich media with metadata at scale. The goal is to turn "mute" images and context-free videos into relationship-aware media that allows nonlinear exploration. This would provide a more engaging experience for online audiences.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Introduction of Cybersecurity with OSS at Code Europe 2024
ESWC2011 Summer School: Front-end to the Semantic Web
1. “interface is the message”
on the path to a usable & personal Semantic Web
Lora Aroyo
VU University Amsterdam
@laroyo
Wednesday, June 1, 2011 1
2. outline
front-end to semantics: how do we interact with SemWeb Apps?
personalization: what do we need to adapt to users?
example applications: what good & bad is out there?
evaluation: why is continuous evaluation so important?
3. why interfaces?
invisible computers
multitude of interaction modes
context-sensitive apps
networked devices: bridges between virtual & physical worlds
GUIs become central
constantly increasing competition
4. take home message
combine content semantics with user context
seamlessly integrate physical & web worlds
identify relevance to the user to rank & select information to present
continuous feedback cycle: to and from the user
you need to deal with the GUI at configuration level
perform continuous user testing
use real-world data
5. “interface is the message”
Aaron Koblin: Artfully visualizing our humanity, TED Talk, 2011
6. FRONT-END TO SEMANTICS
how do we interact with the SemWeb Apps?
8. semantics: what’s special?
explicit semantics (often from open sources, e.g. LOD) used for system decisions and results
use faceted presentation, searching and browsing of information
typically use classifications, typologies or other structures of concepts
integrate data from different sources
aggregate data
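The bullets above can be made concrete with a minimal sketch (all data and function names invented for illustration): items aggregated from different sources are typed with concepts from a shared classification, and that structure drives faceted presentation and browsing.

```python
# Minimal sketch (invented data) of faceted browsing over aggregated,
# semantically typed items.

items = [
    {"title": "Sunflowers", "creator": "Van Gogh",     "type": "painting"},
    {"title": "The Kiss",   "creator": "Klimt",        "type": "painting"},
    {"title": "David",      "creator": "Michelangelo", "type": "sculpture"},
]

def facet_values(items: list, facet: str) -> list:
    # Distinct values of one facet, e.g. to render as filter links.
    return sorted({item[facet] for item in items})

def filter_by(items: list, facet: str, value: str) -> list:
    # Keep only the items matching the selected facet value.
    return [item for item in items if item[facet] == value]

print(facet_values(items, "type"))   # ['painting', 'sculpture']
paintings = filter_by(items, "type", "painting")
```

The same filtering logic applies to any facet ("creator", "type", …), which is what makes classification-driven interfaces cheap to extend.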
15. PERSONALIZATION
what do we need to adapt to users?
16. the user matters
when we consider interaction & interfaces, the user plays a key role
for good interface design, a good characterization of the user is needed
first, some concepts from theory and literature
17. user profile
Definition: A ‘user profile’ is a data structure that represents a characterization of a user (u) at a particular moment in time (t).
So, a user profile represents what (from a given (system) perspective) there is to know about a user.
The data in a user profile can be explicitly given by the user or have been derived.
18. user characteristics
Personal data
Friends and relations
Experience
System access
Browsing history
Knowledge (learning)
Device data
Location data
Preferences
19. user model
Definition: The ‘user model’ contains the definitions and rules
for the interpretation of observations about the user and about
the translation of that interpretation into the characteristics in a
user profile.
So, a user model is the recipe for obtaining and interpreting user
profiles.
20. user modeling
Definition: ‘user modeling’ is the process of creating user
profiles following the definitions and rules of the user model.
This includes the derivation of new user profile characteristics
from observations about the user and the old user profile based
on the user model.
So, user modeling is the process of representing the user.
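One modeling step, i.e. deriving a new profile from observations and the old profile under the rules of a user model, could look like this. The decay-and-reinforce rule and the weights are illustrative assumptions, not the slides' method.

```python
# Hypothetical user-model rule: interest scores decay over time and are
# reinforced by fresh observations (e.g. topics of pages the user viewed).
def apply_user_model(old_profile, observations, decay=0.9, weight=0.1):
    """One user-modeling step: decay old scores, reinforce observed topics."""
    new_profile = {topic: score * decay for topic, score in old_profile.items()}
    for topic in observations:
        new_profile[topic] = new_profile.get(topic, 0.0) + weight
    return new_profile

old = {"art": 0.5}
new = apply_user_model(old, ["art", "history"])
# "art" is decayed then reinforced: 0.5 * 0.9 + 0.1 = 0.55
```

The function itself plays the role of the user model (the recipe); each call is one act of user modeling that produces a new user profile.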
21. stereotyping
Stereotyping is one example of user modeling.
A user is considered to be part of a group of similar people, the
stereotype.
Question: What could be stereotypes for conference participants
(when we design the conference website)?
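Stereotyping can be sketched as a set of trigger rules that place a user in one or more groups. The conference-participant stereotypes and their trigger conditions below are assumptions, made up to answer the slide's question.

```python
# Hypothetical stereotypes for conference participants, each with a trigger
# rule over the user's known attributes (attribute names are made up).
STEREOTYPES = {
    "student":  lambda u: u.get("occupation") == "student",
    "speaker":  lambda u: "talk_accepted" in u,
    "industry": lambda u: u.get("affiliation_type") == "company",
}

def classify(user):
    """Return every stereotype the user matches (a user may match several)."""
    return [name for name, rule in STEREOTYPES.items() if rule(user)]

classify({"occupation": "student"})   # ["student"]
```

The conference website could then adapt per stereotype, e.g. highlighting reduced registration fees for students or the speaker-upload page for speakers.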
22. user-adaptive system
Definition: A ‘user-adaptive system’ is a system that adapts itself to a
specific user.
Often, a user-adaptive system (or adaptive system, in short) uses user
profiles to base its adaptation on.
So, designing an adaptive system implies designing the user modeling.
23. user adaptation
User-adaptation is often used for personalization, i.e. making a
system appear to function in a personalized way.
Question: What user profile characteristics would be useful in
personalizing the conference’s registration site?
Question: How would you obtain those characteristics?
24. examples: user adaptation
Device-dependence
Accessibility (disabilities)
Location-dependence
Adaptive workflow
Question: Can you give concrete examples of interface adaptation, covering
both the adaptation effect and the prior user modeling required?
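Device-dependence, the first example above, can be sketched as follows: the user-modeling step records the device in the profile, and the adaptation effect picks a layout. The profile keys and layout values are illustrative assumptions.

```python
# Hypothetical device-dependent adaptation: the adaptation effect (a layout)
# is chosen from a user-profile characteristic (the device) recorded earlier.
def adapt_layout(profile):
    device = profile.get("device", "desktop")  # user modeling filled this in
    if device == "mobile":
        return {"columns": 1, "image_size": "small"}
    if device == "tablet":
        return {"columns": 2, "image_size": "medium"}
    return {"columns": 3, "image_size": "large"}

adapt_layout({"device": "mobile"})   # single-column layout
```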
25. adaptive hypermedia
Well-studied example of adaptation is ‘adaptive hypermedia’: a
hypertext’s content and navigation are then adapted to the user’s
browsing of the hypertext.
27. dialog principles [Grice]
Be cooperative
Be informative
Be truthful
Be relevant
Be perspicuous (be clear)
28. UI principles [Shneiderman]
Strive for consistency
Enable frequent users to use shortcuts
Offer informative feedback
Design dialog to yield closure
Offer simple error handling
Permit easy reversal of actions
Support internal locus of control
Reduce short-term memory load
29. usability heuristics [Nielsen]
Visibility of system status
Match between system and real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help users recognize, diagnose and recover from errors
Help and documentation
30. all about the user's perspective
modeling the user: what are user’s preferences, interests, history,
activities, etc.
modeling the user’s context: e.g. location, time, device
which of all the data available is relevant
for this user in this context
also called context-aware
31. user's context distributed
switching between one context and another
doing things not only for themselves, e.g. buying a present for a
girlfriend
33. interaction modes
search, e.g. keyword, faceted
browse, story lines, narratives through collections
annotations of multimedia, e.g. (collaborative) tagging, professional
annotation of text, images and video, tagging games
explanations, hints, user feedback, e.g. explanation of
recommendation results, explanation of autocompletion suggestions
34. typical examples
recommendation systems, e.g. movies, music, art
user statistics and analysis, e.g. user usage data, profile, group
profiles, etc.
social networking
35. recommender systems
Definition: A ‘recommender system’ is a system that recommends to
a user, based on her individual interests, items that the user could find
interesting.
Examples: music, movies, people, restaurants
Types: collaborative (reason about similar users), content-based
(reason about similar items)
Problems: new users, new items, sparsity, gray sheep
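The content-based variant (reasoning about similar items) can be sketched in a few lines: items and the user's interests are sets of features, and unseen items are ranked by overlap. The items, features, and scoring rule are illustrative assumptions, not a real recommender.

```python
# Minimal content-based recommender sketch: score unseen items by feature
# overlap with the user's interests (features here are made-up art tags).
def recommend(user_interests, items, seen, top_n=2):
    def score(features):
        return len(set(features) & set(user_interests))
    candidates = [(name, score(f)) for name, f in items.items() if name not in seen]
    candidates.sort(key=lambda pair: (-pair[1], pair[0]))  # best score first
    return [name for name, s in candidates[:top_n] if s > 0]

items = {
    "Starry Night": ["van-gogh", "post-impressionism"],
    "Sunflowers":   ["van-gogh", "still-life"],
    "Guernica":     ["picasso", "cubism"],
}
recommend(["van-gogh"], items, seen={"Starry Night"})   # ["Sunflowers"]
```

The listed problems show up directly in this sketch: a new user has an empty interest set (cold start), and a new item with rare features scores zero against everyone (sparsity).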
36. recommender systems
movies & TV programs, e.g. Netflix, MovieLens, TiVo, personalized TV
guides
music, e.g. LastFM, Pandora, iTunes Genius
food & tourism, e.g. guides adapted to location, current time, preferences
news, e.g. Google Reader, news filters
e-shopping, e.g. Amazon’s recommendations
advertisement, e.g. Facebook personalized ads
art, museums, e.g. personalized search, personalized museum guides
37. consi derations
Collection of activities/context/attention data
Derive interests from this data
Recommender-specific problems, e.g. cold start, over-specialization
Surface items of interest in the ‘long tail’
Cross-domain recommendations
Multi-person recommending
Granular control for users
38. user profiles & stats
overview of user preferences, e.g. settings, privacy
overview of user interests, e.g. ranking of interests, links to content
overview of user/group activities, e.g. per topics, per activity, per
date, over a period, overall
comparative views between users, e.g. LastFM, LivingSocial movies
user similarity, e.g. Twitter's similar users to you
different views/visualization over the same set of user data
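The "similar users to you" view above rests on a user-similarity measure. A common choice, used here as an illustrative assumption, is cosine similarity over interest vectors keyed by topic (the users and scores are made up).

```python
import math

# Cosine similarity between two sparse interest vectors (topic -> score).
def cosine(u, v):
    topics = set(u) | set(v)
    dot = sum(u.get(t, 0.0) * v.get(t, 0.0) for t in topics)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

alice = {"jazz": 1.0, "rock": 0.5}
bob   = {"jazz": 0.8, "folk": 0.2}
cosine(alice, bob)   # high: shared interest in jazz
```

The same measure supports both comparative views between two users and ranking "users similar to you" across the whole user base.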
41. social networking
professional networks & events, e.g. LinkedIn, Mendeley
people, organizations, e.g. Facebook, MySpace
Twitter
social bookmarking, e.g. Delicious, StumbleUpon, Digg
GetGlue
books, e.g. LibraryThing
42. EXAMPLE APPLICATIONS
Interfaces & Personalization on SemWeb
58. personalized experience
Personalized Web Access
Online Tour Wizard
Personalized Mobile Tour
Interactive tours
Semantic Search
Interactive user modeling
On-the-fly adaptation
Museum tour maps
Recommendations of artworks & art topics
Synchronized user profile
Historic timeline
69. dynamic adaptation
For each artwork in the museum:
Related works
Include in the tour (& recalculate the map/tour)
Indicate relevance in terms of e.g. personal interest, position, recommended by friends, by Rijks, on view
Rate to indicate interest
At any point of the tour:
Include/exclude artworks
Adjust tour length
Change navigation in and outside of the tour
Save for other tours
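The "indicate relevance" step above combines several signals per artwork. A minimal sketch, assuming a weighted sum with made-up weights (the slides do not specify how the signals are combined):

```python
# Hypothetical relevance score for an artwork, combining the signals the
# slide names: personal interest, position, friends' recommendations,
# whether the work is on view. Weights are illustrative assumptions.
WEIGHTS = {"interest": 0.4, "proximity": 0.2, "friends": 0.2, "on_view": 0.2}

def relevance(signals):
    """Each signal is a value in [0, 1]; missing signals count as 0."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

relevance({"interest": 1.0, "on_view": 1.0})   # 0.6
```

Recomputing this score after each rating or position change is what makes the tour adapt on the fly.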
70. EXAMPLE 2
professionals vs. lay users on Web 2.0
semantic annotation of Rijksmuseum prints
http://e-culture.multimedian.nl/pk/annotate?
semantic tagging: http://waisda.nl
71. autocompletion with multiple vocabularies
http://slashfacet.semanticweb.org/wordnet/search
http://slashfacet.semanticweb.org/autocomplete/demos/
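Autocompletion over multiple vocabularies can be sketched as a prefix match that keeps track of each suggestion's source vocabulary, so the interface can disambiguate similar labels. The vocabularies and terms below are made-up examples, not the demos' actual data.

```python
# Illustrative vocabularies (terms are assumptions, not the demo's data).
VOCABULARIES = {
    "WordNet": ["painting", "painter", "paint"],
    "AAT":     ["paintings (visual works)", "painters (artists)"],
}

def autocomplete(prefix, vocabularies, limit=5):
    """Case-insensitive prefix match across all vocabularies; each hit is
    returned together with its source vocabulary."""
    prefix = prefix.lower()
    hits = [(term, source)
            for source, terms in vocabularies.items()
            for term in terms
            if term.lower().startswith(prefix)]
    return sorted(hits)[:limit]

autocomplete("paint", VOCABULARIES)
```

Tagging each suggestion with its vocabulary is what lets the user pick the intended sense, e.g. the WordNet concept versus the AAT concept behind the same string.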
87. watching TV in a group
Environment
Interact with the second screen as a group
Friend interaction at home
Watching as a group
Synchronization TV & Second Screen
between second screens
between second screens & TV show content provider
Age
15 - 35 years old
Type of Activities
quiz and betting games
change camera view
information regarding the content of the program
textual captions
Type of Program
Sports
88. observations
for more details check out our blog at http://notube.tv
90. second screen & TV functionalities
shared virtual space
voice dubbing
subtitles
related information
quizzes
voting & betting
scene-grab & share
social interaction
live-chat
parental advisory
uncensored version
different camera views
synchronization with second screen
“overlay” on top of the main TV-picture
censoring
group alerts
92. CHIP users
Target users’ characteristics
small groups of 2-4 persons, with a male taking the leading role (67%)
middle-aged people, 30-60 years old (75%)
higher-educated (62%)
no prior knowledge about the Rijksmuseum collection (62%)
visit the museum for education (98%)
94. contextual analysis
[Diagram: contextual observations, user interviews and models of the user's tasks feed into, and validate, the analysis]
Define familiarity with the domain
Define familiarity with collections/vocabularies
Identify use cases
Identify navigation patterns
Identify requirements for user groups
105. take home message
combine content semantics with user context
integrate seamlessly physical & web worlds
identify relevance to user to rank & select information to present
continuous feedback cycle: to and from user
you need to deal with the GUI at the configuration level
perform continuous user testing
use real world data