Designing Human-AI Partnerships to Combat Misinformation Matthew Lease
The document discusses designing human-AI partnerships to combat misinformation. It describes a prototype partnership where a human and AI work together to fact-check claims. The partnership aims to make the AI more transparent and address user bias by allowing the user to adjust the perceived reliability of news sources, which then changes the AI's political leaning analysis and fact checking results. The discussion wraps up by noting challenges like avoiding echo chambers and assessing potential harms, as well as opportunities for AI to reduce bias and increase trust through explainable, interactive systems.
Presentation given at the Linguistic Data Consortium (LDC), University of Pennsylvania, April 2019. Based on presentations at the 6th ACM Collective Intelligence Conference, 2018 and the 6th AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2018. Blog post: https://blog.humancomputation.com/?p=9932.
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno... Matthew Lease
This document summarizes a presentation about designing human-AI partnerships for fact-checking misinformation. It discusses using crowdsourced rationales to improve the accuracy and cost-efficiency of annotation tasks. It also addresses challenges in designing interfaces for automatic fact-checking models, such as integrating human knowledge and reasoning to correct errors and account for bias. The goal is to develop mixed-initiative systems where humans and AI can jointly reason and personalize fact-checking.
AI & Work, with Transparency & the Crowd Matthew Lease
The document discusses designing human-AI partnerships and the role of crowdsourcing in AI systems. It summarizes work on designing AI assistants to work with humans, using crowds to help fact-check information, and explores challenges around protecting crowd workers who review harmful content or do "dirty jobs". It advocates for more research on ethics in AI and using crowds to help check work for ethical issues.
Talk given at Delft University speaker series on "Crowd Computing & Human-Centered AI" (https://www.academicfringe.org/). November 23, 2020. Covers two 2020 works:
(1) Anubrata Das, Brandon Dang, and Matthew Lease. Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content. In Proceedings of the 8th AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2020.
(2) Alexander Braylan and Matthew Lease. Modeling and Aggregation of Complex Annotations via Annotation Distances. In Proceedings of the Web Conference, pages 1807--1818, 2020.
Explainable Fact Checking with Humans in-the-loop Matthew Lease
Invited Keynote at KDD 2021 TrueFact Workshop: Making a Credible Web for Tomorrow, August 15, 2021.
https://www.microsoft.com/en-us/research/event/kdd-2021-truefact-workshop-making-a-credible-web-for-tomorrow/#!program-schedule
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact... Matthew Lease
Presented at the 31st ACM User Interface Software and Technology Symposium (UIST), 2018. Paper: https://www.ischool.utexas.edu/~ml/papers/nguyen-uist18.pdf
Social Machines - 2017 Update (University of Iowa)James Hendler
This is an update to the talk entitled "Social Machines: the coming collision of artificial intelligence, social networks and humanity." It was presented as an ACM Distinguished Speaker lecture at the "University of Iowa Computing Conference" 2017-02-24
Machine Learning for Non-technical People indico data
Machine learning is one of the most promising and most difficult to understand fields of the modern age. Here are the slides from Slater Victoroff's (CEO of indico) talk at General Assembly Boston for non-technical folks on how to separate the signal from the noise -- stay tuned for the next time he speaks:
https://generalassemb.ly/education/machine-learning-for-non-technical-people
Roger Hoerl SAY Award Presentation 2013 Roger Hoerl
This document discusses how statistical engineering principles can help address challenges with "Big Data" projects. It argues that simply having powerful algorithms and large datasets does not guarantee good models or results. The leadership challenge for statisticians is to ensure Big Data projects are built on sound modeling foundations rather than hype. Statistical engineering principles like understanding data quality, using sequential approaches, and integrating subject matter knowledge can help improve the success of Big Data analyses and provide the statistical profession an opportunity for leadership in this area. Statistical engineering provides a framework to structure Big Data projects and incorporate fundamentals of good science that are sometimes overlooked.
Towards Contested Collective Intelligence
Simon Buckingham Shum, Director Connected Intelligence Centre, University of Technology Sydney
This talk is to open up a dialogue with the important work of the SWARM project. I’ll introduce the key ideas that have shaped my work on interactive software tools to make thinking visible, shareable and contestable, some of the design prototypes, and some of the lessons we’ve learnt en route.
Lecture on ethical issues taught as part of Heriot-Watt's course on Conversational Agents (2021). Topics covered:
- General Research Ethics with Human Subjects
- Bias and fairness in Machine Learning
- Specific Issues for ConvAI
Crowdsourcing for Search Evaluation and Social-Algorithmic Search Matthew Lease
The document discusses using crowdsourcing for search evaluation and social-algorithmic search. It covers topics like using crowds to collect data for search relevance judging, training machine learning models, and answering queries. It also discusses different crowdsourcing platforms, designing tasks for crowds, and quality control. Examples are given of using crowds for tasks in natural language processing, computer vision, information retrieval and more. The social aspects of search are also discussed, like integrating social networks and allowing community question answering.
Crowd computing utilizes both crowdsourcing and human computation to solve problems. Crowdsourcing enables more efficient and scalable data collection and processing by outsourcing tasks to a large, undefined group of people. Human computation allows software developers to incorporate human intelligence and judgment into applications to provide capabilities beyond current artificial intelligence. Examples discussed include Amazon Mechanical Turk, various crowd-powered applications, and how crowdsourcing has helped label large datasets to train machine learning models.
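As a toy illustration of the label-aggregation idea described above (a sketch of the general technique, not code from any of the listed talks), simple majority voting over per-item worker labels might look like this; the worker judgments here are entirely hypothetical:

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate crowd labels per item by simple plurality."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}

# Hypothetical worker judgments for two fact-checking items
judgments = {
    "claim-1": ["true", "true", "false"],
    "claim-2": ["false", "false", "true"],
}
consensus = majority_vote(judgments)
# consensus: {"claim-1": "true", "claim-2": "false"}
```

Majority voting is the simplest consensus method; weighted schemes that model per-worker reliability (e.g. Dawid-Skene-style estimators) are the usual next step.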
Teaching, Assessment and Learning Analytics: Time to Question AssumptionsSimon Buckingham Shum
Presented by the Assessment Research Centre
and the Melbourne Centre for the Study of Higher Education
Teaching, Assessment and Learning Analytics: Time to Question Assumptions
Simon Buckingham Shum
Professor of Learning Informatics, and Director of the Connected Intelligence Centre (CIC)
University of Technology Sydney
When: 11.30-12.30 pm, Wed. 13 Sep 2017
Where: Frank Tate Room, Level 9, 100 Leicester St, Carlton
This will be a non-technical talk accessible to a broad range of educational practitioners and researchers, designed to provoke a conversation that provides time to question assumptions. The field of Learning Analytics sits at the convergence of two fields: Learning (including learning technology, educational research and learning/assessment sciences) and Analytics (statistics; visualisation; computer science; data science; AI). Many would add Human-Computer Interaction (e.g. participatory design; user experience; usability evaluation) as a differentiator from related fields such as Educational Data Mining, since the Learning Analytics community attracts many with a concern for the sociotechnical implications of designing and embedding analytics in educational organisations.
Learning Analytics is viewed by many educators with the same suspicion they reserve for AI or “learning management systems”. While in some cases this is justified, I will question other assumptions with some learning analytics examples which can serve as objects for us to think with. I am curious to know what connections/questions arise when these are shared.
Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, where he was appointed in August 2014 to direct the new Connected Intelligence Centre. Previously he was Professor of Learning Informatics and an Associate Director at The UK Open University’s Knowledge Media Institute. He is active in the field of Learning Analytics as a co-founder and former Vice President of the Society for Learning Analytics Research, and Program Co-Chair of LAK18, the International Learning Analytics and Knowledge Conference. Previously he co-founded the Compendium Institute and Learning Emergence networks. Simon brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI Design Argumentation (PhD, York). He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin, wrote Constructing Knowledge Art (2015). He was recently appointed as a Fellow of The RSA. http://Simon.BuckinghamShum.net
Data science involves transforming data into valuable insights, products, and stories. It utilizes elements like coding, statistics, machine learning, domain expertise, and visualization. The goals are to avoid issues like overfitting, make causal inferences, and go beyond accuracy to consider speed, simplicity and cost of obtaining data. Data scientists work with big, messy data and aim to measure the right things and explore data characteristics through visualization.
Crowdsourcing: From Aggregation to Search Engine Evaluation Matthew Lease
This document provides an overview of statistical crowdsourcing and its applications. It discusses crowdsourcing platforms like Amazon Mechanical Turk and how they have enabled large-scale data labeling for tasks in areas like natural language processing. It also summarizes research on using crowdsourcing to evaluate search engines and benchmarks different statistical consensus methods for aggregating judgments from crowds. Finally, it presents work on using psychometrics and crowdsourcing to model multidimensional relevance through structured surveys and factor analysis.
The document discusses the Vienna Data Science Group (VDSG), a nonprofit organization that aims to promote data science. It has diverse members from various academic and professional fields. VDSG brings data science to life through talks, conferences, workshops, and networking events. It also discusses the impact of data science on society through applications like autonomous vehicles, smart home devices, and predictive analytics. Data science is changing areas like mobility, sports, finance, and advertising. Emerging technologies like the Internet of Things and predictive modeling raise important questions for society regarding privacy, ethics, and the limits of data-driven decisions.
The document discusses challenges in analytics for big data. It notes that big data refers to data that exceeds the capabilities of conventional algorithms and techniques to derive useful value. Some key challenges discussed include handling the large volume, high velocity, and variety of data types from different sources. Additional challenges include scalability for hierarchical and temporal data, representing uncertainty, and making the results understandable to users. The document advocates for distributed analytics from the edge to the cloud to help address issues of scale.
Data Science For Social Scientists WorkshopIan Hopkinson
The slides from a Workshop presentation on Data Science and Big Data given to academic social scientists. Lots of links to sources, should be interesting to those outside the original target field.
The document discusses learning analytics and cognitive automation, and their implications for education. It begins by outlining how cognitive automation is automating routine cognitive work. This will impact learning analytics, as analytics aggregate lower-level data and AI automates routine cognitive tasks. As a result, humans must focus on higher-order skills like creativity, ethics, resilience and curiosity. The document then provides examples of learning analytics research focusing on dispositions, teamwork and learning beyond the classroom. It argues analytics could assess holistic development if they evaluate integration of knowledge, skills and dispositions over time.
The Internet of Things (IoT) is a vision of a ubiquitous society in which people and “Things” are connected in an immersively networked computing environment, with the connected “Things” providing utility to people/enterprises and their digital shadows through intelligent social and commercial services. However, translating this vision into reality has been a work in progress for close to two decades, largely due to assumptions that favour a “Things”-centric rather than a “Human”-centric approach, coupled with the evolution/deployment ecosystem of IoT technologies.
Estimates of the spread and economic impact of IoT over the next few years run to 50 billion or more connected “Things”, with a market exceeding $350 billion through smarter cities and infrastructure, intelligent appliances, and healthier lifestyles. While many of these potential benefits are real and achievable, the road to achieving them may need a rethink.
In the last few years there has been a realization that an effective IoT architecture (particularly for emerging nations with limited technology penetration at the national scale) that is both affordable and sustainable should be based on tangible technology advances of the present, ubiquitous capabilities of the present and future, and practical application scenarios of social and entrepreneurial value. Hence there is revitalized interest in rethinking the above assumptions, an exercise that has led to a more plausible set of scenarios in which humans, along with data, communication, and devices, play key roles.
This presentation attempts to disaggregate these core problems and to offer a trajectory, with a set of design paradigms, for a renewed IoT ecosystem.
This document provides an overview of data science. It defines data science as using computer science, statistics, machine learning, visualization, and human-computer interaction to analyze and interact with data. The key topics covered include prerequisites for data science like computer science, statistics, machine learning and visualization. Common data science tasks are also outlined such as data analysis, modeling, engineering and prototyping. The document discusses what a data scientist does and how to tackle a data problem by consulting subject matter experts, identifying anomalies, and reducing risk and uncertainty in the data.
This document discusses big data and its characteristics. It provides examples of how companies like Walmart and Facebook handle large amounts of data. It defines big data and describes the types of data: structured, unstructured, and semi-structured. The key characteristics of big data are identified as volume, variety, velocity, and variability. The document concludes that with billions more people gaining internet access, the amount of data will continue growing exponentially and we have only begun to see the potential of big data.
How Machine Learning is Shaping Digital Marketing indico data
Dan Kuster held a workshop at General Assembly Boston on how machine learning is changing -- and improving -- the way digital marketers do their jobs.
Overview:
"Machine learning allows a marketer to target people based on an actual understanding of their interests, habits, and personality, rather than typical demographic data. To get more concrete here, machine learning lets you say: I want to target people that have posted a picture of a guitar in the last three months, or: I want to target people with the INTP personality type that posted something angry about Bernie Sanders recently.
It also allows marketers to look strategically at the content they use to engage their audience and reflect on what works and what doesn't work in a scientific way. If you make 30 posts with very different engagement rates, you can use your own intuition, but then also scientifically vet the wording of your message to get a sense ahead of time about how engaging it may be."
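The "scientifically vet" step described in the quote can be as simple as a two-proportion z-test on engagement rates. The sketch below is illustrative only, with made-up counts, and is not taken from the workshop itself:

```python
import math

def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
    """z statistic comparing two click-through (engagement) rates."""
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    pooled = (clicks_a + clicks_b) / (shown_a + shown_b)  # pooled rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    return (p_a - p_b) / se

# Hypothetical counts: two wordings of the same post, each shown 1000 times
z = two_proportion_z(120, 1000, 90, 1000)
significant = abs(z) > 1.96  # ~5% two-sided threshold
```

With these made-up numbers the gap (12% vs 9% engagement) clears the threshold, so the wording difference would be worth acting on rather than attributing to noise.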
Learning analytics: gaining good actionable insight Martin Hawksey
Presented as part of the University of Sussex's TEL Seminar Series
There is greater awareness of the use of data to make improvements in the world around us, including learning and teaching. From improvements in business processes to recommendations on what to buy on Amazon, all are driven by data. Data by itself does not make for a better learner experience; only analytics, the process of producing an actionable insight, can help identify gains. As an emerging area, Learning Analytics abounds with new opportunities, but these opportunities also raise new ethical and operational concerns. In this presentation we introduce some basic learning analytics concepts, identifying tools and workflows staff may wish to consider. As part of this we also consider the dangers of analytics, identifying areas which may lead to learner demotivation or misconception, and the questions we should all be asking ourselves to make sure we are always gaining *good* actionable insight.
http://www.sussex.ac.uk/tel/workshops/seminar/martin-hawksey
A short presentation about the challenges associated with balancing IT innovation and operation excellence - and how Katz IS research and education focus on these issues.
Machine Learning for Non-technical Peopleindico data
Machine learning is one of the most promising and most difficult to understand fields of the modern age. Here are the slides from Slater Victoroff's (CEO of indico) talk at General Assembly Boston for non-technical folks on how to separate the signal from the noise -- stay tuned for the next time he speaks:
https://generalassemb.ly/education/machine-learning-for-non-technical-people
Roger hoerl say award presentation 2013Roger Hoerl
This document discusses how statistical engineering principles can help address challenges with "Big Data" projects. It argues that simply having powerful algorithms and large datasets does not guarantee good models or results. The leadership challenge for statisticians is to ensure Big Data projects are built on sound modeling foundations rather than hype. Statistical engineering principles like understanding data quality, using sequential approaches, and integrating subject matter knowledge can help improve the success of Big Data analyses and provide the statistical profession an opportunity for leadership in this area. Statistical engineering provides a framework to structure Big Data projects and incorporate fundamentals of good science that are sometimes overlooked.
Towards Contested Collective Intelligence
Simon Buckingham Shum, Director Connected Intelligence Centre, University of Technology Sydney
This talk is to open up a dialogue with the important work of the SWARM project. I’ll introduce the key ideas that have shaped my work on interactive software tools to make thinking visible, shareable and contestable, some of the design prototypes, and some of the lessons we’ve learnt en route.
Lecture on ethical issues taught as part of Heriot-Watt's course on Conversational Agents (2021). Topics covered:
- General Research Ethics with Human Subjects
- Bias and fairness in Machine Learning
- Specific Issues for ConvAI
Crowdsourcing for Search Evaluation and Social-Algorithmic SearchMatthew Lease
The document discusses using crowdsourcing for search evaluation and social-algorithmic search. It covers topics like using crowds to collect data for search relevance judging, training machine learning models, and answering queries. It also discusses different crowdsourcing platforms, designing tasks for crowds, and quality control. Examples are given of using crowds for tasks in natural language processing, computer vision, information retrieval and more. The social aspects of search are also discussed, like integrating social networks and allowing community question answering.
Crowd computing utilizes both crowdsourcing and human computation to solve problems. Crowdsourcing enables more efficient and scalable data collection and processing by outsourcing tasks to a large, undefined group of people. Human computation allows software developers to incorporate human intelligence and judgment into applications to provide capabilities beyond current artificial intelligence. Examples discussed include Amazon Mechanical Turk, various crowd-powered applications, and how crowdsourcing has helped label large datasets to train machine learning models.
Teaching, Assessment and Learning Analytics: Time to Question AssumptionsSimon Buckingham Shum
Presented by the Assessment Research Centre
and the Melbourne Centre for the Study of Higher Education
Teaching, Assessment and Learning Analytics: Time to Question Assumptions
Simon Buckingham Shum
Professor of Learning Informatics, and Director of the Connected Intelligence Centre (CIC)
University of Technology Sydney
When: 11.30 -12.30 pm, Wed. 13 Sep 2017
Where: Frank Tate Room, Level 9, 100 Leicester St, Carlton
This will be a non-technical talk accessible to a broad range of educational practitioners and researchers, designed to provoke a conversation that provides time to question assumptions. The field of Learning Analytics sits at the convergence of two fields: Learning (including learning technology, educational research and learning/assessment sciences) and Analytics (statistics; visualisation; computer science; data science; AI). Many would add Human-Computer Interaction (e.g. participatory design; user experience; usability evaluation) as a differentiator from related fields such as Educational Data Mining, since the Learning Analytics community attracts many with a concern for the sociotechnical implications of designing and embedding analytics in educational organisations.
Learning Analytics is viewed by many educators with the same suspicion they reserve for AI or “learning management systems”. While in some cases this is justified, I will question other assumptions with some learning analytics examples which can serve as objects for us to think with. I am curious to know what connections/questions arise when these are shared..
Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, where he was appointed in August 2014 to direct the new Connected Intelligence Centre. Previously he was Professor of Learning Informatics and an Associate Director at The UK Open University’s Knowledge Media Institute. He is active in the field of Learning Analytics as a co-founder and former Vice President of the Society for Learning Analytics Research, and Program Co-Chair of LAK18, the International Learning Analytics and Knowledge Conference. Previously he co-founded the Compendium Institute and Learning Emergence networks. Simon brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI Design Argumentation (PhD, York). He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin, wrote Constructing Knowledge Art (2015). He was recently appointed as a Fellow of The RSA. http://Simon.BuckinghamShum.net
Data science involves transforming data into valuable insights, products, and stories. It utilizes elements like coding, statistics, machine learning, domain expertise, and visualization. The goals are to avoid issues like overfitting, make causal inferences, and go beyond accuracy to consider speed, simplicity and cost of obtaining data. Data scientists work with big, messy data and aim to measure the right things and explore data characteristics through visualization.
Crowdsourcing: From Aggregation to Search Engine EvaluationMatthew Lease
This document provides an overview of statistical crowdsourcing and its applications. It discusses crowdsourcing platforms like Amazon Mechanical Turk and how they have enabled large-scale data labeling for tasks in areas like natural language processing. It also summarizes research on using crowdsourcing to evaluate search engines and benchmarks different statistical consensus methods for aggregating judgments from crowds. Finally, it presents work on using psychometrics and crowdsourcing to model multidimensional relevance through structured surveys and factor analysis.
The document discusses the Vienna Data Science Group (VDSG), a nonprofit organization that aims to promote data science. It has diverse members from various academic and professional fields. VDSG brings data science to life through talks, conferences, workshops, and networking events. It also discusses the impact of data science on society through applications like autonomous vehicles, smart home devices, and predictive analytics. Data science is changing areas like mobility, sports, finance, and advertising. Emerging technologies like the Internet of Things and predictive modeling raise important questions for society regarding privacy, ethics, and the limits of data-driven decisions.
The document discusses challenges in analytics for big data. It notes that big data refers to data that exceeds the capabilities of conventional algorithms and techniques to derive useful value. Some key challenges discussed include handling the large volume, high velocity, and variety of data types from different sources. Additional challenges include scalability for hierarchical and temporal data, representing uncertainty, and making the results understandable to users. The document advocates for distributed analytics from the edge to the cloud to help address issues of scale.
Data Science For Social Scientists WorkshopIan Hopkinson
The slides from a Workshop presentation on Data Science and Big Data given to academic social scientists. Lots of links to sources, should be interesting to those outside the original target field.
The document discusses learning analytics and cognitive automation, and their implications for education. It begins by outlining how cognitive automation is automating routine cognitive work. This will impact learning analytics, as analytics aggregate lower-level data and AI automates routine cognitive tasks. As a result, humans must focus on higher-order skills like creativity, ethics, resilience and curiosity. The document then provides examples of learning analytics research focusing on dispositions, teamwork and learning beyond the classroom. It argues analytics could assess holistic development if they evaluate integration of knowledge, skills and dispositions over time.
The Internet of Things, or the IoT is a vision for a ubiquitous society wherein people and “Things” are connected in an immersively networked computing environment, with the connected “Things” providing utility to people/enterprises and their digital shadows, through intelligent social and commercial services. However, translating this idea to a conceivable reality is a work in progress for close to two decades; mostly, due to assumptions favoured more towards a “Things”-centric rather than a “Human”-centric approach coupled with the evolution/deployment ecosystem of IoT technologies.
Estimates on the spread and economic impact of IoT over the next few years are in the neighborhood of 50 billion or more connected “Things” with a market exceeding $350 billion through smarter cities and infrastructure, intelligent appliances, and healthier lifestyles. While many of these potential benefits from IoT are real and achievable, the road to accomplish these may need an rethink.
In the last few years, there has been a realization that an effective architecture for IoT (particularly, for emerging nations with limited technology penetration at the national scale) that is both affordable and sustainable should be based on tangible technology advances in the present, ubiquitous capabilities of the present/future, and practical application scenarios of social and entrepreneurial value. Hence, there is a revitalized interest to rethink the above assumptions, and this exercise has led to a more plausible set of scenarios wherein humans along with data, communication and devices play key roles.
In this presentation, an attempt is made to disaggregate these core problems and to offer a trajectory, with a set of design paradigms, for a renewed IoT ecosystem.
This document provides an overview of data science. It defines data science as using computer science, statistics, machine learning, visualization, and human-computer interaction to analyze and interact with data. The key topics covered include prerequisites for data science like computer science, statistics, machine learning and visualization. Common data science tasks are also outlined such as data analysis, modeling, engineering and prototyping. The document discusses what a data scientist does and how to tackle a data problem by consulting subject matter experts, identifying anomalies, and reducing risk and uncertainty in the data.
This document discusses big data and its characteristics. It provides examples of how companies like Walmart and Facebook handle large amounts of data. It defines big data and describes the types of data: structured, unstructured, and semi-structured. The key characteristics of big data are identified as volume, variety, velocity, and variability. The document concludes that with billions more people gaining internet access, the amount of data will continue growing exponentially and we have only begun to see the potential of big data.
How Machine Learning is Shaping Digital Marketing - indico data
Dan Kuster held a workshop at General Assembly Boston on how machine learning is changing -- and improving -- the way digital marketers do their jobs.
Overview:
"Machine learning allows a marketer to target people based on an actual understanding of their interests, habits, and personality, rather than typical demographic data. To get more concrete here, machine learning lets you say: I want to target people that have posted a picture of a guitar in the last three months, or: I want to target people with the INTP personality type that posted something angry about Bernie Sanders recently.
It also allows marketers to look strategically at the content they use to engage their audience and reflect on what works and what doesn't work in a scientific way. If you make 30 posts with very different engagement rates, you can use your own intuition, but then also scientifically vet the wording of your message to get a sense ahead of time about how engaging it may be."
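The “scientific vetting” idea in the quote above can be sketched in a few lines of Python. This is a toy illustration, not the workshop’s actual method: it scores each word by the average engagement rate of past posts containing it, then rates a draft message. All post texts and engagement rates below are invented.

```python
from collections import defaultdict

def word_engagement_scores(posts):
    """Average engagement rate of the posts containing each word —
    a toy proxy for vetting message wording against past performance."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, rate in posts:
        for word in set(text.lower().split()):
            totals[word] += rate
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict_engagement(draft, scores, default=0.0):
    """Score a draft message by averaging the historical scores of its known words."""
    known = [scores[w] for w in draft.lower().split() if w in scores]
    return sum(known) / len(known) if known else default

# Hypothetical history: (post text, engagement rate)
history = [
    ("free guitar lesson today", 0.12),
    ("our quarterly report is out", 0.02),
    ("win a free guitar", 0.15),
]
scores = word_engagement_scores(history)
print(predict_engagement("free guitar giveaway", scores))  # averages "free" and "guitar"
```

A real system would use a trained model over richer features, but the workflow — learn from past engagement, score a draft before posting — is the same.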
Learning analytics gaining good actionable insight - Martin Hawksey
Presented as part of the University of Sussex's TEL Seminar Series
There is greater awareness of the use of data to make improvements in the world around us, including learning and teaching. From improvements in business processes to recommendations on what to buy on Amazon, all are driven by data. Data by itself does not make for a better learner experience; only analytics, the process of making an actionable insight, can help identify gains. As an emerging area, 'Learning Analytics' abounds with new opportunities, but these opportunities also raise new ethical and operational concerns. In this presentation we introduce some basic learning analytics concepts, identifying tools and workflows staff may wish to consider. As part of this we also consider the dangers of analytics, identifying areas which may lead to learner demotivation or misconception, and the questions we should all be asking ourselves to make sure we are always gaining *good* actionable insight.
http://www.sussex.ac.uk/tel/workshops/seminar/martin-hawksey
A short presentation about the challenges associated with balancing IT innovation and operation excellence - and how Katz IS research and education focus on these issues.
ABSTRACT: Computational social science (CSS) is an academic discipline that combines the traditional social sciences with computer science. While social scientists provide research questions, data sources, and acquisition methods, computer scientists contribute mathematical models and computational tools. CSS uses computational methods and statistical tools to analyze and model social phenomena, social structures, and human social behavior. The purpose of this paper is to provide a brief introduction to computational social science.
Key Words: computational social science, social-computational systems, social simulation models, agent-based models
The Rise of Crowd Computing (December 2015) - Matthew Lease
Crowd computing is rising with two waves - the first using crowds to label large amounts of data for artificial intelligence applications. The second wave delivers applications that go beyond AI abilities by incorporating human computation. Open problems remain around ensuring high quality outputs, task design, understanding the worker context and experience, and addressing ethics concerns around opaque platforms and working conditions. The future holds potential for empowering crowd work but also risks like digital sweatshops if worker freedoms and conditions are not considered.
Digital data is increasingly being used to track and analyze human activities like work, learning, and living. This document discusses how the "datafication" of these areas is redistributing responsibilities between humans and algorithms. It explores issues around accountability, control, and transparency when important decisions are made based on data. The author advocates developing new "literacies" to ensure data practices align with public interests and values, and calls for a posthuman perspective that sees humans and technology as deeply entangled.
Presentation given at the HEA Social Sciences learning and teaching summit 'Exploring the implications of ‘the era of big data’ for learning and teaching'.
A blog post outlining the issues discussed at the summit is available via: http://bit.ly/1lCBUIB
The challenges of the Digital Age create a sea of opportunities for technologists. Developing software transforms the economic, political, cultural, and social reality of countries.
On the one hand, a large part of the population does not know the downside of IT, which does not diminish our great responsibility. On the other hand, technologists do not always know how to make ethical decisions in day-to-day systems development. There is also a long-running discussion about the role of technology in the sustainability of the planet: after all, when is IT good, and when is it bad?
This lecture is an introduction to ethics and sustainability aimed at technologists who want to learn how to position themselves as professionals in the face of so many challenges and opportunities of the 21st century.
Opening/Framing Comments: John Behrens, Vice President, Center for Digital Data, Analytics, & Adaptive Learning, Pearson
Discussion of how the field of educational measurement is changing; how long-held assumptions may no longer be taken for granted, and how new terminology and language are coming into use.
Panel 1: Beyond the Construct: New Forms of Measurement
This panel presents new views of what assessment can be, and new species of big data that push our understanding of what can be used in evidentiary arguments.
Marcia Linn, Lydia Liu from UC Berkeley and ETS discuss continuous assessment of science and new kinds of constructs that relate to collaboration and student reasoning.
John Byrnes from SRI International discusses text and other semi-structured data sources and different methods of analysis.
Kristin Dicerbo from Pearson discusses hidden assessments and the different student interactions and events that can be used in inferential processes.
Panel 2: The Test is Just the Beginning: Assessments Meet Systems Context
This panel looks at how assessments are not the end game, but often the first step in larger big-data practices at districts/state/national levels.
Gerald Tindal from the University of Oregon discusses State data systems and special education, including curriculum-based measurement across geographic settings.
Jack Buckley, Commissioner of the National Center for Education Statistics, discusses national datasets where tests and other data connect.
Lindsay Page and Will Marinell from the Strategic Data Project at Harvard discuss state and district datasets used for evaluating teachers, colleges of education, and student progress.
Panel 3: Connecting the Dots: Research Agendas to Integrate Different Worlds
This panel will look at how research organizations are viewing the connections between the perspectives presented in Panels 1 and 2: what is known, and what is still to be discovered in order to achieve the promise of big connected data in education.
Andrea Conklin Bueschel Program Director at the Spencer Foundation
Ed Dieterle Senior Program Officer at the Bill and Melinda Gates Foundation
Edith Gummer Program Manager at National Science Foundation
Learning analytics: Threats and opportunities - Martin Hawksey
Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts in order to understand and optimize the learning environment. It involves techniques from computer science, statistics, programming and other disciplines. While learning analytics can provide opportunities to give feedback and improve learning, it also poses threats regarding privacy, ethics, and the misuse of visualizations and absence of educational theory. Overall, learning analytics should be used to start conversations to improve learning rather than make definitive decisions, and it is important that the needs and experiences of learners guide its application.
The machine in the ghost: a socio-technical perspective... - Cliff Lampe
This document discusses sociotechnical systems and the challenges of collaboration between researchers studying these systems and practitioners. It defines sociotechnical systems as the interrelation between technological and human systems. It argues that truly understanding these systems requires combining the theories and techniques of multiple fields including social science, computer science, and engaging with practitioners. However, bringing these different groups together is difficult due to differences in culture, goals, and incentives between academics and practitioners. It provides some strategies for encouraging collaboration, such as phenomena-based research, workshops, funding incentives, and mixed academic/practitioner events and project partnerships.
Guest presentation: SASUF Symposium: Digital Technologies, Big Data, and Cybersecurity, Vaal University of Technology, Vanderbijlpark, South Africa, 15 May 2018
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains. The potential wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they’re benefiting, and how they’re structuring our social, economic, and interpersonal lives.
Ethical and Legal Issues in Computational Social Science - Lecture 7 in Intro... - Lauri Eloranta
Seventh lecture of the course CSS01: Introduction to Computational Social Science at the University of Helsinki, Spring 2015.(http://blogs.helsinki.fi/computationalsocialscience/).
Lecturer: Lauri Eloranta
Questions & Comments: https://twitter.com/laurieloranta
The document summarizes crowd computing and crowdsourcing. It discusses how tasks traditionally performed by employees can now be outsourced to large online groups through platforms like Amazon Mechanical Turk. It provides examples of how crowds have been used for tasks like data labeling, content analysis, and computer vision. It also discusses some of the opportunities, challenges, and open questions around using crowds for human computation, including ensuring data quality, addressing fraud and ethics concerns, understanding the demographics of workers, and regulating the emerging field.
This document provides notes from a digital business workshop. It includes:
- An agenda covering introductions, reviewing previous topics, exercises and presentations, and next steps.
- An introduction to the workshop facilitator and information on how to connect with her online.
- A discussion of previous workshop topics including the scope of AI and jobs of the future.
- Key topics for the day including defining digital business, analyzing macro trends through PESTLE analysis, and discussing waves of digital disruption.
Biomedical Data Science: We Are Not Alone - Philip Bourne
This document discusses biomedical data science and the opportunities and challenges presented by new developments in data science. Some key points:
- We are at a tipping point where biomedical research is no longer the sole leader in data science due to advances in many other fields. Biomedical researchers need to become data scientists to stay relevant.
- Data science is being driven by the massive growth of digital data and requires an interdisciplinary approach. It is touching every field and attracting many students.
- Developing effective data systems and infrastructure is a major challenge to enable open sharing and analysis of data. Initiatives are underway but more collaboration is needed across sectors.
- Advances in machine learning, like Alpha
The document discusses solutions to overcoming the tragedy of the data commons through shared metadata. It describes how large scientific projects can share data at low cost by starting from overlapping common metadata terms and having their metadata teams work together. Reusing shared metadata leads to increased reusability of data across projects. The document advocates for developing metadata as evolving, linked resources rather than predefined standards, and provides examples of how this approach has helped scientific collaborations and government data sharing initiatives succeed.
Similar to Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and Safety (20)
Automated Models for Quantifying Centrality of Survey Responses - Matthew Lease
Research talk presented at "Innovations in Online Research" (October 1, 2021)
Event URL: https://web.cvent.com/event/d063e447-1f16-4f70-a375-5d6978b3feea/websitePage:b8d4ce12-3d02-4d24-897d-fd469ca4808a.
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio... - Matthew Lease
Presentation at the 1st Biannual Conference on Design of Experimental Search & Information Retrieval Systems (DESIRES 2018). August 30, 2018. Paper: https://www.ischool.utexas.edu/~ml/papers/kutlu-desires18.pdf
Talk given August 29, 2018 at the 1st Biannual Conference on Design of Experimental Search & Information Retrieval Systems (DESIRES 2018). Paper: https://www.ischool.utexas.edu/~ml/papers/lease-desires18.pdf
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E... - Matthew Lease
Presentation at the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP), July 7, 2018. Work by Tanya Goyal, Tyler McDonnell, Mucahid Kutlu, Tamer Elsayed, and Matthew Lease. Pages 41-49 in conference proceedings. Online version of paper includes corrections to official version in proceedings: https://www.ischool.utexas.edu/~ml/papers/goyal-hcomp18
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for... - Matthew Lease
Invited Talk at the ACM JCDL 2018 WORKSHOP ON CYBERINFRASTRUCTURE AND MACHINE LEARNING FOR DIGITAL LIBRARIES AND ARCHIVES. https://www.tacc.utexas.edu/conference/jcdl18
Deep Learning for Information Retrieval: Models, Progress, & Opportunities - Matthew Lease
Talk given at the 8th Forum for Information Retrieval Evaluation (FIRE, http://fire.irsi.res.in/fire/2016/), December 10, 2016, and at the Qatar Computing Research Institute (QCRI), December 15, 2016.
Systematic Review is e-Discovery in Doctor’s Clothing - Matthew Lease
This document discusses opportunities for collaboration between researchers working in systematic reviews and electronic discovery (e-discovery). It notes similarities in the challenges both fields face, including the need for high recall with bounded costs and reliance on multi-stage review pipelines. The document proposes that technologies developed for semi-automated citation screening and crowdsourcing could help address current limitations. It concludes by encouraging information retrieval researchers to investigate open problems in systematic reviews as opportunities to advance technologies beyond other tasks and help bring together interested parties through forums like the TREC Total Recall track.
Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms - Matthew Lease
The document summarizes a presentation about analyzing paid crowd work platforms beyond Mechanical Turk. It discusses how Mechanical Turk has dominated research on paid crowdsourcing due to its early popularity, but that it has limitations. The presentation conducts a qualitative study of 7 alternative crowd work platforms to identify distinguishing capabilities not found on MTurk, such as different payment models, richer worker profiles, and support for confidential tasks. It aims to increase awareness of other platforms to further inform practice and research on crowdsourcing.
Toward Effective and Sustainable Online Crowd Work - Matthew Lease
New forms of online crowd work enabled by technology present both opportunities for innovation and risks of harm that require careful consideration. This document discusses three main issues. First, some crowd work tasks may enable illegal or unethical goals. Second, the lack of regulation means crowd work practices sometimes exploit vulnerable workers by not ensuring informed consent. Third, multi-stakeholder discussions are needed to develop win-win solutions that balance costs, quality, and what is fair for all parties in a global context. The goal is to learn from each other and find ways to encourage ethical practices.
Talk at AAAI Human Computation 2013 Workshop on Scaling Speech, Language Understanding and Dialogue through Crowdsourcing (November 9, 2013): http://faculty.washington.edu/mtjalve/HCOMP2013.Workshop.html
Crowdsourcing & ethics: a few thoughts and references - Matthew Lease
Extracts and addendums from an earlier talk, for those interested in ethics and related issues in regard to crowdsourcing, particularly research uses. Slides updated Sept. 2, 2013.
Crowdsourcing & Human Computation: Labeling Data & Building Hybrid Systems - Matthew Lease
This document provides an overview of crowdsourcing and human computation. It begins with examples of using Amazon Mechanical Turk for basic tasks like labeling data. It then discusses how crowdsourcing can be used for more complex applications and discusses factors like incentive design, quality control, and platform selection. The document provides guidance on task design, experiment workflow, and usability considerations for effective crowdsourcing.
Talk presented at the ID360 Conference (http://identity.utexas.edu/id360), May 1, 2013. Paper: http://ssrn.com/abstract=2228728. Joint work with Jessica Hullman, Jeffrey P. Bigham, Michael S. Bernstein, Juho Kim, Walter S. Lasecki, Saeideh Bakhshi, Tanushree Mitra, and Robert C. Miller.
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
A Comprehensive Guide to DeFi Development Services in 2024 - Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Ocean Lotus Threat actors project by John Sitima 2024 (1).pptx - SitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
NUnit vs XUnit vs MSTest: Differences Between These Unit Testing Frameworks.pdf - flufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Best 20 SEO Techniques To Improve Website Visibility In SERP - Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... - Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
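As a flavor of what such a notebook might contain, here is a minimal, dependency-free anomaly detection sketch — a simple z-score test suited to a resource-constrained edge device. The tutorial's actual models and sensor data may differ; the readings below are invented.

```python
import statistics

def detect_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean.
    A minimal stand-in for a model deployed to an edge device."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # all readings identical: nothing to flag
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]

# Hypothetical sensor stream with one obvious spike
sensor = [20.1, 19.8, 20.3, 20.0, 19.9, 55.0, 20.2]
print(detect_anomalies(sensor, threshold=2.0))
```

In the architecture described above, flagged readings would be published to a Kafka topic for the data lake, while Prometheus scrapes counts of anomalies as an application metric.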
HCL Notes and Domino license cost reduction in the world of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove redundant or unused accounts to save money. There are also some practices that can lead to unnecessary spending, e.g. using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course, we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and redundant accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
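The serving side of that pipeline boils down to nearest-neighbor search over vectors. The pure-Python sketch below shows the idea with brute-force cosine similarity; Milvus performs the same search at scale using approximate nearest-neighbor indexes. The document IDs and vectors here are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Brute-force top-k retrieval; a vector database replaces this scan
    with an ANN index for production-scale serving."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy index of document embeddings (invented)
index = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], index, k=2))
```

In the talk's setting, Spark would compute the embeddings in batch and push them to Milvus; only the search step above runs at query time.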
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and Safety
1. MATT LEASE
Associate Professor
School of Information
The University of Texas at Austin
KEY CHALLENGES IN MODERATING
SOCIAL MEDIA: ACCURACY, COST,
SCALABILITY, & SAFETY
Lab: ir.ischool.utexas.edu
@mattlease
Slides: slideshare.net/mattlease
7. Content Moderation Challenges
• Internet scale (+ high cost and latency of manual human reviews)
• High accuracy requirements (high cost of mistakes)
• What is considered acceptable varies by platform & region (legal
& cultural), is continually evolving, and faces adversarial attacks
• Issues of free speech & due process in removal & remediation
17.
Content moderators work at a
Facebook office in Austin, Texas.
“A counselor in Austin, who is one of five on staff for
roughly 450 moderators spread across several
offices in the Texas capital, said the job could cause
a form of post-traumatic stress disorder known as
vicarious trauma.”
“Finding the right balance between content reviewer
well-being and resiliency, quality, and productivity
[and responsiveness] is very challenging at the
scale we operate in.” ~ Facebook
18.
“…so many people have written to me just to say that
they didn't know that human beings were actually
doing this work. They assumed it was all automated.”
19. The Great Irony
The tasks we most want AI to take over (reviewing emotionally
disturbing content) are exactly the ones people are still doing,
because AI isn’t good enough yet
20. • Improve accuracy of AI prediction models
• Develop effective human-in-the-loop systems
• Design HCI methods for safe & accurate work
• Promote social justice for human moderators
Information Retrieval &
Crowdsourcing Lab
http://ir.ischool.utexas.edu
21. with An Thanh Nguyen (UT), Byron Wallace (Northeastern), & more!
Believe it or not: Designing a
Human-AI Partnership for Mixed-
Initiative Fact-Checking
23. Anubrata Das, Brandon Dang and Matthew Lease
School of Information
The University of Texas at Austin
Fast, Accurate, and Healthier:
Interactive Blurring Helps Moderators
Reduce Exposure to Harmful Content
24. Research Question
By revealing less of an image, can we reduce the emotional
labor of image moderation without compromising
moderator accuracy and efficiency?
25. Design and Demo
http://ir.ischool.utexas.edu/CM/demo/
Dang, Brandon, Martin J. Riedl, and Matthew Lease. "But Who Protects the Moderators? The Case of Crowdsourced
Image Moderation." arXiv preprint arXiv:1804.10999 (2018).
Code: https://github.com/budang/content-moderation
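The core idea of the demo above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation (the real demo and code are linked above): it treats a grayscale image as a 2D list of pixel values and uses a simple box blur in place of the Gaussian blur a real interface would likely use. The names `box_blur` and `reveal_levels` are invented here for illustration.

```python
# Illustrative sketch only (not the demo's actual code): progressive
# "unblurring" for image moderation. A grayscale image is a 2D list of
# pixel values; a box blur stands in for an interface's Gaussian blur.

def box_blur(image, radius):
    """Blur a 2D grayscale image by averaging each pixel's neighborhood."""
    if radius == 0:
        return [row[:] for row in image]
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            total, count = 0, 0
            # Average over the (2*radius+1) x (2*radius+1) window,
            # clipped at the image borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            row.append(total // count)
        out.append(row)
    return out

def reveal_levels(image, radii=(8, 4, 2, 0)):
    """Yield successively sharper versions of the image, from heavily
    blurred down to the original (radius 0)."""
    for r in radii:
        yield r, box_blur(image, r)
```

A moderator interface built on this idea would display each level in turn, letting the reviewer stop and make a decision at the first level that is legible enough, so the fully explicit image is shown only when a confident judgment requires it.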
29. The Myth of Automation
Computer systems often embody
hidden human labor
• Gray and Suri (2019) “ghost work”
• Ekbia and Nardi (2014) “heteromation”
• Irani and Silberman (2013) “invisible work”
31.
As the coronavirus pandemic swept the world, social media giants like Facebook,
Google and Twitter did what other companies did. They sent workers home —
including the tens of thousands of people tasked with sifting through mountains of
online material and weeding out hateful, illegal and sexually explicit content.
In their place, the companies turned to algorithms to do the job. It did not go well.
The COVID-driven experiment represented a real-world baptism of fire for something
social media companies have long dreamed of: using machine-learning tools and
artificial intelligence — not humans — to police posts on their platforms.
35. Health Effects for Moderators
“The psychological effects of viewing harmful content
is well documented, with reports of moderators
experiencing posttraumatic stress disorder (PTSD)
symptoms and other mental health issues...”
(Cambridge Consultants, 2019)
“…many other employees develop long-lasting mental
health symptoms that stop short of full-blown PTSD,
including depression, anxiety, and insomnia.”
(Casey Newton, 2020)
Image Source: The Verge