Slide deck used to present and defend my master's thesis project. The project is detailed in a paper published in the Proceedings of the 2009 Conference on Intelligent User Interfaces (http://doi.acm.org/10.1145/1502650.1502696).
This document discusses validity and validation in qualitative and quantitative research. It explores different epistemological perspectives on validity including positivism, post-positivism, and constructivism. The origins and emergence of validity concepts are traced from early statistical works to more recent qualitative frameworks. Different validity types are examined, including construct validity, internal validity, statistical conclusion validity, and external validity. Both quantitative and qualitative validation strategies are considered for each type of validity. The document argues that the concept of validity depends on the type of inferences being made, not just the data type.
Unification Of Randomized Anomaly In Deception Detection Using Fuzzy Logic Un... - IJORCS
In the current era of electronic communication, deception critically undermines efficient information sharing. Identifying deception in any mode of communication is tedious without proper tools for detecting such vulnerabilities. This paper focuses on combining efficient deception-detection tools rather than applying each in isolation. We propose a research model comprising fuzzy logic, uncertainty, and randomization, and describe an experiment implementing this combined application along with its results, discussing the performance of the combined approach rather than of each technique individually.
This document discusses various machine learning classifiers that have been used for emotion recognition from speech, including neural networks, Gaussian mixture models, linear regression, and decision trees. Neural networks are identified as the most suitable classifier for this complex problem due to their ability to learn patterns from data and model complex nonlinear relationships. The document provides details on different neural network architectures and training methods that have been employed for emotion recognition from speech in previous studies.
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Action research for_librarians_carl2012 - srosenblatt
This document provides an overview of an action research workshop for librarians. The workshop aims to teach participants how to incorporate evidence-based research into their practice through action research. It discusses the action research cycle of planning, acting, observing, and reflecting. Participants will learn about generating research questions based on problems in their work, collecting and analyzing both quantitative and qualitative data, and sharing and applying the results to make changes and ask new questions. The workshop involves hands-on activities for participants to analyze sample datasets and plan their own action research projects to investigate issues in their own practice.
Action research for_librarians_carl2012 - srosenblatt
This document provides an overview of an action research workshop for librarians. The workshop aims to teach participants how to incorporate evidence-based research into their practice. It covers the basics of the action research process, including identifying a problem or question, collecting and analyzing data, reflecting on findings, and planning changes. The document outlines the learning outcomes, introduces the action research cycle, and discusses different research methodologies and tools for data collection and analysis that can be used, such as interviews, surveys, and Excel. Participants are guided through practicing these steps by analyzing sample datasets and are encouraged to begin planning their own action research projects.
Research Methods for Identifying and Analysing Virtual Learning Communities - Richard Schwier
Presentation at the University of Otago in Dunedin, New Zealand, on research methods we have employed at the Virtual Learning Communities Research Laboratory at the University of Saskatchewan.
Reshaping Scientific Knowledge Dissemination and Evaluation in the Age of the... - Aliaksandr Birukou
This talk tries to unveil some of the problems inherent in current knowledge creation, dissemination, and evaluation practices, drawing on models and quantitative analyses of the effectiveness of peer review as a gatekeeping/assessment method and of citations as a measure of impact. The speaker will present recent research and development threads that aim to make the knowledge generation and dissemination process efficient, and the evaluation process (more) fair and accurate. In particular, he will present the models and tools being developed to this end, which are essentially based on applying to knowledge dissemination the lessons learned from open-source development and the social web. The presentation will be interactive and discussion-oriented.
Messy Research: How to Make Qualitative Data Quantifiable and Make Messy Data... - Gigi Johnson
This document discusses qualitative research methods for business. It addresses challenges in making qualitative data understandable for real decisions. It discusses why businesses conduct research, how to determine what data and analysis is needed, and issues with determining "truth" in business contexts. Finally, it discusses four types of qualitative data and focuses on making qualitative data more quantitative by addressing issues like validity, sample sizes, and coding consistency.
This document summarizes Leonardo Leite's thesis defense presented at the University of São Paulo, Brazil on organizational structures for development and infrastructure professionals in software organizations. The thesis presented a grounded theory on organizational structures developed through three phases of research involving interviews with software professionals. The research developed a taxonomy of four common organizational structures and analyzed why different structures are adopted, how their drawbacks are addressed, and implications for practitioners and scholars.
Recommender Systems Fairness Evaluation via Generalized Cross Entropy - Vito Walter Anelli
Fairness in recommender systems has been considered with respect to sensitive attributes of users (e.g., gender, race) or items (e.g., revenue in a multistakeholder setting). Regardless, the concept has been commonly interpreted as some form of equality – i.e., the degree to which the system is meeting the information needs of all its users in an equal sense. In this paper, we argue that fairness in recommender systems does not necessarily imply equality, but instead it should consider a distribution of resources based on merits and needs. We present a probabilistic framework based on generalized cross entropy to evaluate fairness of recommender systems under this perspective, where we show that the proposed framework is flexible and explanatory by allowing to incorporate domain knowledge (through an ideal fair distribution) that can help to understand which item or user aspects a recommendation algorithm is over- or under-representing. Results on two real-world datasets show the merits of the proposed evaluation framework both in terms of user and item fairness.
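The evaluation idea in this abstract can be sketched in a few lines: compare the exposure a system actually gives each group against an ideal fair distribution using a generalized cross-entropy style divergence. This is a toy illustration, not the paper's exact parametrization; the beta value, function names, and example numbers are all assumptions.

```python
# Toy illustration of fairness evaluation via a generalized cross-entropy
# (GCE) style divergence between the exposure a recommender actually gives
# each group and an ideal fair distribution. The beta value, names, and
# numbers are assumptions, not the paper's exact formulation.

def gce(ideal, observed, beta=0.5):
    """Divergence in the Cressie-Read family: 0 iff observed == ideal."""
    s = sum(p ** beta * q ** (1 - beta) for p, q in zip(ideal, observed))
    return (1 - s) / (beta * (1 - beta))

def exposure_distribution(counts):
    """Normalize raw recommendation counts per group into a distribution."""
    total = sum(counts)
    return [c / total for c in counts]

# Two provider groups received 90 and 10 recommendation slots; the ideal
# fair distribution here is uniform, so the score is strictly positive.
observed = exposure_distribution([90, 10])
ideal = [0.5, 0.5]
unfairness = gce(ideal, observed)  # > 0: group 0 is over-represented
```

Note that the domain knowledge the abstract mentions enters through `ideal`: setting it to something other than uniform encodes a merit- or need-based distribution rather than strict equality, which is the paper's central point.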
Building a Game for a Assessment Nursing Game - Brock Dubbels
In this presentation, issues of planning game design for transfer and assessment are discussed, and the role of play is reviewed in relation to game design. Play can be part of the problem because of the lack of certainty in learning transfer. Serious games are developed to deliver learning outcomes, and when there are specific learning outcomes, the game must ensure that learning that happens in games does not stay in games; this is described here as the Vegas Effect. A simple methodological recommendation, with examples, is provided for improving validity and reliability in the independent variable (the game intervention), known as inter-rater reliability.
Extending Recommendation Systems With Semantics And Context Awareness - Victor Codina
This document proposes extending recommendation systems with semantics and context-awareness. It discusses limitations of traditional recommendation models and how semantics and context could help overcome those limitations. The authors propose a model that uses domain concepts with implicit semantics relationships and contextual concepts without semantics. An offline experiment on a pruned MovieLens dataset compares the proposed model to baselines. Results show the proposed contextual-semantic model improves prediction accuracy overall and for cold-start users compared to static and non-semantic models.
Human-centered AI: how can we support lay users to understand AI? - Katrien Verbert
The document summarizes research on human-centered AI and how to support lay users in understanding AI. It discusses various research projects that aim to explain model outcomes to increase user trust and acceptance. It explores how personal characteristics like need for cognition can impact the effectiveness of explanations. The research also looks at different application domains for AI like healthcare, education, agriculture and recommendations. It emphasizes the importance of user involvement, personalization and domain expertise in developing AI systems that non-experts can understand and trust.
Designing an effective information architecture - optimalworkshop
It’s such a waste when stuff is hard to find. In the book Ambient Findability, Peter Morville quotes a study that estimates that in a medium-sized hospital, 8,000 hours a year of staff time are spent explaining signs and redirecting people. That’s four person-years!
Finding stuff online is even worse. According to IBM’s chairman, it’s estimated that there will be 44 times as much data and content coming over the next decade, reaching 35 zettabytes by 2020. That’s 35 followed by 21 zeros.
There is one thing you can do to help the madness. You can create an effective information architecture (IA) to connect people with the content that they’re looking for. In this practical workshop you’ll learn how to create an effective IA which will help ensure that your stuff is easy to find and provide your visitors with a great experience. You’ll leave with an armload of practical insights and tips, and with the inspiration to refine and test your own IA.
This document provides an overview of qualitative analysis methods for coding interview and document data. It begins with an agenda for covering two main qualitative approaches, coding exercises, slides on qualitative analysis, and potential brainstorming and affinity diagramming exercises if time allows. It then discusses common features of qualitative analytic methods including affixing codes, noting reflections, sorting materials to identify patterns, and gradually developing generalizations. Finally, it provides details on coding and categorization procedures, the iterative nature of qualitative analysis, and ensuring the credibility and rigor of qualitative findings.
Validation and mechanism: exploring the limits of evaluation - Alan Dix
Talk at Evaluation, SummerPIT 2019, Aarhus University, 15th August 2019
https://alandix.com/academic/talks/PIT-2019-validation-and-mechanism/
Sometimes evaluation is straightforward. Perhaps our goal is to create a system in a well-understood environment that is fastest to use or has the fewest errors. In this case, and if we believe design choices are effectively independent, we can run a lab or in-situ study to compare design alternatives. However, many things do not fit into this easy-to-evaluate category. Sometimes our goals are more diffuse or long-term: sustainability, behavioural change, improving education. Sometimes the thing we wish to 'evaluate' is 'generative', such as toolkits or frameworks used by developers or designers to create systems that are then used by others. In these cases, simple post-hoc 'try it and measure it' approaches to evaluation fail, or at best give partial results. However, post-hoc evaluation is only one way to validate work: data (quantitative or qualitative) should be combined with an understanding of mechanism, how things work, in order to justify, generalise and innovate.
The study reviewed literature on ICT for governance and policy modelling to identify gaps in the research area. It found 4 relevant background references and listed 64 related projects with about 80M euros in total grant funding. However, it did not include any primary research or analysis. The study concluded that more work is needed to address gaps in using ICT for governance and policy modelling, but it did not specify what particular gaps need to be addressed.
This chapter discusses exploratory research and qualitative analysis. It defines exploratory research as research that is used to gain insights, screen alternatives, and discover new ideas with little prior knowledge. Qualitative research methods that are commonly used in exploratory research include surveys of experts, analysis of secondary data, pilot studies, and depth interviews/focus groups. The key advantages of exploratory research are that it provides a rich understanding of problems and motivations, while the main disadvantage is that results cannot be generalized.
Ontology quality, ontology design patterns, and competency questions - Nicola Guarino
This document discusses ontology quality and the role of ontology design patterns (ODPs) in improving quality. It addresses three dimensions of ontology quality: correctness, precision, and accuracy. While ODPs aim to improve reusability, their simplicity may decrease interoperability if connections between patterns are overlooked. The original intent of competency questions was for more complex queries than simple lookups. Properly defining terms and examples/counter-examples for a target community helps improve an ontology's quality.
Metaphors as design points for collaboration 2012 - KM Chicago
The document discusses optimization cycles for improving collaboration and search practices, noting that multiple factors should be maintained in constant ratios to achieve predictable outcomes in key metrics, and that experiments allow these figures of merit to be measured to identify trade-offs and potential improvements. It provides examples of how aspects of metadata and measures of effective precision can facilitate collaboration.
Current Approaches in Search Result Diversification - Mario Sangiorgio
The document discusses current approaches to search result diversification. It defines diversity as providing both relevant and diverse results to ambiguous queries. Diversification aims to optimize relevance and diversity through measures like semantic distance, categorical distance, and novel information. The tradeoff between relevance and diversity makes the problem NP-hard. Common objectives include maximizing sum or product. Evaluation benchmarks adapt existing metrics or use datasets with ground truths. Open issues include defining new diversity types and integrating diversity earlier in the ranking process.
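The relevance/diversity trade-off described above is commonly approximated with greedy re-ranking; the sketch below is a generic MMR-style illustration (the lambda weight, data, and names are hypothetical, not drawn from any specific system surveyed in the talk).

```python
# Generic greedy MMR-style re-ranking: trade off an item's relevance
# against its maximum similarity to results already selected. The lambda
# weight and all data are hypothetical, for illustration only.

def diversify(candidates, relevance, similarity, k, lam=0.5):
    """Greedily pick k items maximizing lam*rel - (1-lam)*max_sim."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def score(d):
            penalty = max((similarity[d][s] for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * penalty
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

relevance = {"A": 1.0, "B": 0.9, "C": 0.7}
similarity = {"A": {"B": 0.95, "C": 0.1},
              "B": {"A": 0.95, "C": 0.1},
              "C": {"A": 0.1, "B": 0.1}}
# B is a near-duplicate of A, so with lam=0.5 the less relevant but
# dissimilar C is promoted ahead of B.
result = diversify(["A", "B", "C"], relevance, similarity, k=2)
```

Setting `lam=1.0` recovers pure relevance ranking; the greedy step is a standard way to sidestep the NP-hard exact optimization the summary mentions.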
This document discusses brainstorming ideas for experimentation approaches in AI/ML. It covers various topics such as the vision and mission for using AI, challenges and opportunities of AI, different types of human and machine reasoning, biases and fairness in AI, how to conceive experimentation ideas, how to onboard AI into practice, different types of data features, visualization methods, statistical and machine learning methodological approaches, and how XOPs can bridge humans and AI to build a better future.
Human-Machine Collaboration in Organizations: Impact of Algorithm Bias on De... - Anh Luong
This document summarizes a research study on the impact of algorithmic bias on decision-making in human-machine collaboration. The study finds that when human decision-makers collaborate with an artificially intelligent system that provides biased predictions, it leads the human-machine teams to make more biased decisions and lower organizational profits over time compared to teams using an unbiased AI system. However, the researchers also find that human decision-makers are able to implicitly recognize the AI's bias over repeated interactions and adapt their decision-making to improve organizational profits and reduce decision bias, especially in later periods. The researchers conclude that exposing human decision-makers to algorithmic bias mitigation over time can help decrease bias in human-machine collaborative decision-making.
XPLODIV: An Exploitation-Exploration Aware Diversification Approach for Recom... - Andrea Barraza-Urbina
Recommender Systems (RS) have emerged to guide users in the task of efficiently browsing/exploring a large product space, helping users to quickly identify interesting products. However, suggestions generated with traditional RS usually do not produce diverse results, even though it has been argued that diversity is a desirable feature. The study of diversity-aware RS has become an important research challenge in recent years, drawing inspiration from diversification solutions for Information Retrieval (IR). However, we argue it is not enough to adapt IR techniques to RS, as they do not place the necessary importance on factors such as serendipity, novelty, and discovery, which are imperative to RS. In this work, we propose a diversification technique for RS that generates a diversified list of results which not only balances the trade-off between quality (in terms of accuracy) and diversity, but also considers the trade-off between exploitation of the user profile and exploration of novel products. Our experimental evaluation shows that the proposed approach has results comparable to state-of-the-art approaches. Moreover, through control parameters, our approach can be tuned towards more explorative or exploitative recommendations.
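As a rough illustration of the tunable exploitation/exploration balance this abstract describes (this is not XPLODIV's actual scoring function; the control parameter `e` and all data are invented for the example):

```python
# Toy sketch of a tunable exploitation/exploration balance (NOT XPLODIV's
# actual scoring function): control parameter e shifts weight between
# closeness to the user profile (exploitation) and novelty (exploration).

def rank(items, profile_sim, novelty, e):
    """Order items by e*profile_sim + (1-e)*novelty, best first."""
    return sorted(items,
                  key=lambda i: e * profile_sim[i] + (1 - e) * novelty[i],
                  reverse=True)

profile_sim = {"familiar": 0.9, "novel": 0.2}
novelty = {"familiar": 0.1, "novel": 0.95}
exploitative = rank(["familiar", "novel"], profile_sim, novelty, e=0.9)
explorative = rank(["familiar", "novel"], profile_sim, novelty, e=0.1)
# exploitative puts "familiar" first; explorative puts "novel" first
```

Sliding `e` from 1.0 towards 0.0 is the kind of tuning knob the abstract refers to: the same candidate set yields more exploitative or more explorative recommendation lists.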
Brasília, Brazil's capital city, has been called "the only true hope for the nation [of Brazil]," the "brain of all high-minded national decisions", and "a ceremonial slum infested with Volkswagens." We'll discuss Brasília's famous architecture, its utopian city planning, and how it relates to the rest of Brazil and to its citizens.
From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Oct. 17, 2008.
Overview of Coltheart's Dual-Route Model and Seidenberg & McClelland's neural network models of word recognition.
Course presentation for PSYC365*, Fall 2004, Dr. Butler, Queen's University.
Images used without permission.
Similar to A Multimedia Interface For Facilitating Comparisons Of Opinions (Thesis Presentation)
Current Approaches in Search Result DiversificationMario Sangiorgio
The document discusses current approaches to search result diversification. It defines diversity as providing both relevant and diverse results to ambiguous queries. Diversification aims to optimize relevance and diversity through measures like semantic distance, categorical distance, and novel information. The tradeoff between relevance and diversity makes the problem NP-hard. Common objectives include maximizing sum or product. Evaluation benchmarks adapt existing metrics or use datasets with ground truths. Open issues include defining new diversity types and integrating diversity earlier in the ranking process.
This document discusses brainstorming ideas for experimentation approaches in AI/ML. It covers various topics such as the vision and mission for using AI, challenges and opportunities of AI, different types of human and machine reasoning, biases and fairness in AI, how to conceive experimentation ideas, how to onboard AI into practice, different types of data features, visualization methods, statistical and machine learning methodological approaches, and how XOPs can bridge humans and AI to build a better future.
Human-Machine Collaboration in Organizations: Impact of Algorithm Bias on De...Anh Luong
This document summarizes a research study on the impact of algorithmic bias on decision-making in human-machine collaboration. The study finds that when human decision-makers collaborate with an artificially intelligent system that provides biased predictions, it leads the human-machine teams to make more biased decisions and lower organizational profits over time compared to teams using an unbiased AI system. However, the researchers also find that human decision-makers are able to implicitly recognize the AI's bias over repeated interactions and adapt their decision-making to improve organizational profits and reduce decision bias, especially in later periods. The researchers conclude that exposing human decision-makers to algorithmic bias mitigation over time can help decrease bias in human-machine collaborative decision-making.
XPLODIV: An Exploitation-Exploration Aware Diversification Approach for Recom...Andrea Barraza-Urbina
Recommender Systems (RS) have emerged to guide users in the task of efficiently browsing/exploring a large product space, helping users to quickly identify interesting products. However, suggestions generated with traditional RS usually do not produce diverse results though it has been argued that diversity is a desirable feature. The study of diversity-aware RS has become an important research challenge in recent years, drawing inspiration from diversification solutions for Information Retrieval (IR). However, we argue it is not enough to adapt IR techniques to RS as they do not place the necessary importance to factors such as serendipity, novelty and discovery which are imperative to RS. In this work, we propose a diversification technique for RS that generates a diversified list of results which not only balances the tradeoff between quality (in terms of accuracy) and diversity, but also considers the trade-off between exploitation of the user profile and exploration of novel products. Our experimental evaluation shows that the proposed approach has comparable results to state of the art approaches. Moreover, through control parameters, our approach can be tuned towards more explorative or exploitative recommendations.
Similar to A Multimedia Interface For Facilitating Comparisons Of Opinions (Thesis Presentation) (20)
Brasília, Brazil's capital city, has been called "the only true hope for the nation [of Brazil]," the "brain of all high-minded national decisions", and "a ceremonial slum infested with Volkswagens." We'll discuss Brasília's famous architecture, its utopian city planning, and how it relates to the rest of Brazil and to its citizens.
From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Oct. 17, 2008.
Overview of Coltheart's Dual-Route Model and Seidenberg & McClelland's neural network models of word recognition.
Course presentation for PSYC365*, Fall 2004, Dr. Butler, Queen's University.
Images used without permission.
Thoughts on the use of Analogies in Understanding and Solving Complex Problem...Lucas Rizoli
We need to be able to solve a great number of problems that are very hard to understand. One way in which we can build an understanding of these is through analogy. There are situations in which analogies can be a powerful way of building an accurate conceptual model of a problem or system. They can help provide the ingenuity necessary to solve today's difficult problems. However, analogies can be themselves problematic, even harmful. There may also be things so intangible that they are beyond human understanding.
From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Jun. 13, 2008.
What is the World Bank? What does it do? How did it come to be? Why do some people dislike it so much?
From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Feb. 15, 2008.
Recognizing Strong and Weak Opinion ClausesLucas Rizoli
This document summarizes research on recognizing the intensity of subjective opinion clauses, ranging from strongly negative to strongly positive. It discusses how opinion intensity can vary beyond being simply positive or negative. The researchers annotated over 10,000 sentences with subjective opinion clauses and their intensities. Several machine learning classifiers were trained on these annotations to predict the intensity of new clauses, with support vector machines performing best. Key factors included using bag-of-words features along with intensity or type groups of subjective clues.
Modeling and Adapting to Cognitive LoadLucas Rizoli
A summary of three papers on assessing users' cognitive load and adapting interfaces to it, used as a starting point for class discussion.
Presented on Nov. 20, 2007 for CPSC 532B (http://www.cs.ubc.ca/~conati/532b-2007/532-description.html)
The theory behind Fitts' well-known pointing law, commonly used in human-computer interaction. Also, some recent work in modelling users' pointing performance.
Presented in the Fall of 2006 for CPSC 544 (http://www.cs.ubc.ca/~cs544/Fall2006/)
This document provides a list of image sources from various websites to accompany an un-distinguished lecture on Victorian era from 1819-1901. The images include maps, portraits, photographs and illustrations related to topics like telegraph networks, Charles Darwin, steam power, Queen Victoria, and British colonialism during that period.
We'll take a look at one of the most successful post-bubble internet companies: Google. It's a major success as a brand and a business, as a director of traffic and as a nearly ubiquitous middle-man. How does Google create so much wealth? Why does it continue to grow and reap massive profits? And what of its editorial and political policies?
From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Jul. 13, 2007.
Communication can spread like a virus through populations in three main ways:
1) Ideas and information spread from person to person through social contact, maintaining and propagating themselves, just as viruses do.
2) Memes are informational and cultural phenomena that behave like viruses by spreading widely and affecting how people think and act.
3) New forms of communication like viral marketing and internet memes are able to propagate and maintain themselves on a large scale through social networks, demonstrating how communication itself can spread virally.
Practical eLearning Makeovers for EveryoneBianca Woods
Welcome to Practical eLearning Makeovers for Everyone. In this presentation, we’ll take a look at a bunch of easy-to-use visual design tips and tricks. And we’ll do this by using them to spruce up some eLearning screens that are in dire need of a new look.
A Multimedia Interface For Facilitating Comparisons Of Opinions (Thesis Presentation)
1. A Multimedia Interface for Facilitating
Comparisons of Opinions
Lucas Rizoli
Supervisor Giuseppe Carenini
University of British Columbia
February 11, 2009
2. Opinion data is
abundant and useful,
but analysis is expensive and difficult
2
3. Our interface
supports analysis of opinion data,
particularly comparison across entities
3
4. It supports analysis by
visualizing the data and
summarizing notable comparisons
4
5. Our user study shows
the visualization is usable,
the summarizer’s choices are human-like
5
40. Study goals
● Evaluate content selection strategy
– Matches human selections?
● Better than baseline?
– Humans like selections?
● Usability of visualization
40
43. Baseline selection strategies
● Naïve
– Randomly select which and how many
● Semi-informed
– Select how many to match subjects' likely choices
– Randomly select which
43
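The two baseline strategies above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code; the function and parameter names (`naive_baseline`, `semi_informed_baseline`, `subject_counts`) are my own.

```python
import random

def naive_baseline(comparisons):
    """Naive baseline: randomly pick how many comparisons
    to include, then randomly pick which ones."""
    how_many = random.randint(0, len(comparisons))
    return random.sample(comparisons, how_many)

def semi_informed_baseline(comparisons, subject_counts):
    """Semi-informed baseline: draw 'how many' from the counts
    human subjects actually chose, then randomly pick which."""
    how_many = min(random.choice(subject_counts), len(comparisons))
    return random.sample(comparisons, how_many)
```

Both baselines leave "which" to chance; the semi-informed one only borrows the human distribution over "how many".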
48. Opinion data is
abundant and useful,
but analysis is expensive and difficult
Our interface
supports analysis of opinion data,
particularly comparison across entities, by
visualizing the data and
summarizing notable comparisons
Our user study shows
the visualization is usable,
the summarizer’s choices are human-like
48
50. Future work
● Tune thresholds and aspects
● Analysis of human reasoning
● Machine learning for selection
50
51. Future work
● More evaluation of visualization
– Interaction
– Deeper hierarchies
– Different data
– Insight
● Multiple entities
● Improved summarization
– Visual cues
51
52. Future work
● Machine learning–based selection
– Trained on study data
– Which
● Regression on comparison selection scores
– How many
● Max # of comparisons (2)
52
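The proposed learned selector on this slide (regress on comparison selection scores, then cap how many are kept) could look something like the sketch below. All names here are hypothetical; the slide does not specify a model, so a one-feature least-squares fit stands in for the regression.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one feature: y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = cov / var
    return a, my - a * mx

def select_comparisons(features, scores, candidates, max_k=2):
    """Train on study data (feature -> human selection score),
    score each candidate comparison, keep the top max_k."""
    a, b = fit_linear(features, scores)
    ranked = sorted(candidates, key=lambda c: a * c[1] + b, reverse=True)
    return [name for name, _ in ranked[:max_k]]
```

The `max_k=2` default mirrors the slide's cap on the number of comparisons; the regression answers "which", the cap answers "how many".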
53. Sentence construction
● Always main claim: counts
● All other aspects relate to counts
– Support: same (dis)similarity as counts
– Contrast: different
● Always mention means
● Mention contros, dists when they are extreme
53
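The sentence-construction rules above are a small decision procedure: always mention counts and means, add contros or dists only when extreme, and split the mentioned aspects into support (same (dis)similarity as counts) and contrast (different). A minimal sketch, with an assumed input format of my own devising:

```python
def aspects_to_mention(sim):
    """Apply the slide's rules. `sim` maps each aspect to
    {'similar': bool, 'extreme': bool} for the entity pair."""
    mentioned = ["counts", "means"]      # always mention main claim and means
    for aspect in ("contros", "dists"):  # contros/dists only when extreme
        if sim[aspect]["extreme"]:
            mentioned.append(aspect)
    # Support shares counts' (dis)similarity; contrast differs from it.
    support = [a for a in mentioned[1:]
               if sim[a]["similar"] == sim["counts"]["similar"]]
    contrast = [a for a in mentioned[1:]
                if sim[a]["similar"] != sim["counts"]["similar"]]
    return mentioned, support, contrast
```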
54. [Diagram: counts linked by support and contrast relations to means, contros, and dists]
54
55. [Diagram: counts linked by support and contrast relations to means, contros, and dists]
55
56. [Diagram: counts linked by support and contrast relations to contros and means]
56
57. Study
● 36 subjects
– 24 female
– 19–43 years old
● 22 different pairs of entities
– Subjects saw ~4 each
57
58. Data generation
● No existing dataset
● Generated to resemble existing datasets
– Distribution, modality
● Explore space of possible data
– Full space too large to cover
– Generated cases representative of larger space
58
59. Generated data
● Generic camera features
– Consistent with scenario
● Simple hierarchy
– Reduce visual and task complication
59
60. Generated data
● 9 comparison types
– Constraints on aspects
– Range of support/contrast
● 22 summary cases
– Summaries by type
– Range of
● overall,
● how many,
● which,
● others.
60