Somewhat extended and tidied-up text of HB's keynote at the ALT winter summit on AI and Ethics, December 2023. The slides are draft quality, for navigation only; a better-quality set of slides is also available.
ALT Ethical AI summit, HB keynote, Dec 2023 (Helen Beetham)
The document discusses issues around whose ethics and values are embedded in generative AI tools. It notes that while ethics codes exist, users cannot easily verify what values are incorporated. It advocates for a relational approach that considers the dynamic contexts and relationships in which AI is developed and used. The document outlines how generative AI works by training on large datasets and being refined through user prompts, but this process can encode biases and privilege some voices over others. It raises questions about the environmental impact, risks to education and jobs, and how AI may define and value humanity. It argues we need an ecosystem that fosters agency, care, accountability and representation when developing and using generative AI technologies.
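To make the point about training data concrete, here is a deliberately simplified sketch (my own illustration, not part of the keynote): a toy bigram "language model" that learns only word-frequency statistics and therefore reproduces whatever skew its corpus contains.

```python
from collections import Counter, defaultdict

# Toy corpus: "doctor ... he" appears four times as often as "doctor ... she",
# purely as a property of the training text.
corpus = ("the doctor said he would help . " * 8 +
          "the doctor said she would help . " * 2).split()

# Learn bigram statistics: which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return the model's learned probabilities for the word after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(next_word_probabilities("said"))   # {'he': 0.8, 'she': 0.2} -- the skew is learned, not chosen
```

Real models are vastly more capable, but the same principle holds: the distribution of the training data becomes the distribution of the outputs unless it is deliberately corrected.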
Mike Sharples - Generative AI and Large Language Models in Digital Education... (EADTU)
The document discusses opportunities and challenges for using generative AI in digital education. It provides examples of how AI could be used as a "possibility engine" to generate responses to open questions, as a "Socratic opponent" in argumentative exercises, and as a "guide on the side" to navigate learning. However, it also notes risks like AI generating fake research or plagiarizing work. It suggests moving to more authentic assessments like projects, establishing guidelines for AI use, and developing students' critical thinking and AI literacy. Overall, the document advocates a cautious, strategic approach to integrating generative AI in a way that enhances learning.
This presentation discusses major applications of AI in healthcare, including medical diagnostics, personalized treatments and optimizing the US healthcare system. It also discusses some of the challenges of implementing AI in healthcare.
Artificial Intelligence in Education | Evolve Machine Learners (Mian Ashar)
This document discusses how artificial intelligence can help address issues in global education. It notes that millions of children are not learning basic skills despite years of schooling. AI has the potential to help teachers meet the diverse needs of all students by personalizing instruction. Intelligent tutoring systems already use data to provide feedback and work directly with students. AI systems can easily adapt to individual student needs and target instruction based on strengths and weaknesses. The document also discusses how AI can help improve courses by identifying gaps in materials that confuse students. Overall, AI offers a way to make trial and error learning less intimidating for students through judgment-free experimentation with intelligent tutors providing solutions for improvement.
The document discusses the "black box problem" in artificial intelligence and neural networks. Specifically, it notes that while these systems can perform complex tasks, the inner workings and decision-making processes are not fully understood. It argues that developing theoretical frameworks grounded in other domains, like physics, could help increase transparency and interpretability of these technologies. More work is needed to better understand and explain how artificial intelligence systems learn and operate.
Generative AI Use cases for Enterprise - Second Session (Gene Leybzon)
This document provides an overview of generative AI use cases for enterprises. It begins by addressing concerns that generative AI will replace jobs. The presentation then defines generative AI as AI that generates new content, such as text, images or code, based on patterns learned from training data.
Several examples of generative AI outputs are shown including code, text, images and advice. Potential use cases for enterprises are then outlined, including synthetic data generation, code generation, code quality checks, customer service, and data analysis. The presentation concludes by emphasizing that people will be "replaced by someone who knows how to use AI", not AI itself.
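As a concrete illustration of one of the named use cases, synthetic data generation, the sketch below uses scikit-learn's make_classification to stand in for a generative pipeline. This is an assumed, minimal example rather than anything described in the presentation; the column names are invented.

```python
# Minimal sketch: generate a synthetic tabular dataset for testing an analytics
# pipeline without exposing real customer records.
from sklearn.datasets import make_classification
import pandas as pd

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=2, random_state=42)
synthetic = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])
synthetic["churned"] = y
print(synthetic.head())
```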
The document provides an overview of the threats and opportunities of generative AI for businesses, outlining practical steps for adopting generative AI technologies including understanding the impacts on industries and business models, discovering opportunities to improve productivity and monetize assets, and starting the adoption journey with prioritized use cases and pilots.
This document discusses the impact of artificial intelligence and automation on the job market. It notes that while previous industrial revolutions initially caused job losses, they ultimately led to new jobs being created as technologies advanced. However, there is concern that this time the pace and scale of disruption may be different. Many existing jobs such as drivers, cashiers and fast food workers are at risk of automation. While some propose technologies like brain-computer interfaces and augmented reality as solutions, others argue more fundamental economic and policy changes may be needed to deal with potential widespread unemployment. The document cautions that productivity gains do not necessarily translate to new jobs and calls for rethinking economic theories and policies in light of technological disruption.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare (Gregory Nelson)
Gregory S. Nelson, VP, Analytics and Strategy, Vidant Health | Adjunct Faculty, Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
How can artificial intelligence be used in e-learning? (GlobalTechCouncil)
Artificial intelligence allows machines to learn from past experience, adjust to new inputs and perform human-like tasks. Research estimates that the artificial intelligence market will grow to a $190 billion industry by 2025, and that by 2021 uses of artificial intelligence in the education industry will grow by 47.5%.
Measures and mismeasures of algorithmic fairness (Manojit Nandi)
This document discusses various measures and challenges of achieving algorithmic fairness. It begins by defining algorithmic fairness and noting it is inherently a social concept. It then covers three main types of algorithmic biases: bias in allocation, representation, and weaponization. It outlines three families of fairness measures: anti-classification, classification parity, and calibration. It notes each approach has dangers and no single definition of fairness exists. The document concludes by discussing proposed standards for documenting datasets and models to improve algorithmic transparency and accountability.
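To ground two of the measure families named above, the sketch below (an illustrative example with simulated data, not drawn from the document) checks classification parity by comparing positive-prediction rates across groups, and spot-checks calibration within a score band.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                             # protected attribute (0 or 1)
score = np.clip(rng.normal(0.45 + 0.10 * group, 0.2), 0, 1)    # model scores, skewed by group
y_true = rng.binomial(1, 0.40 + 0.05 * group)                  # simulated outcomes
y_pred = (score > 0.5).astype(int)

# Classification parity (demographic parity): are positive-prediction rates equal?
rates = [y_pred[group == g].mean() for g in (0, 1)]
print("positive rate by group:", rates, "gap:", abs(rates[0] - rates[1]))

# Calibration: within a score band, does the observed outcome rate match the score?
band = (score > 0.4) & (score < 0.6)
for g in (0, 1):
    sel = band & (group == g)
    print(f"group {g}: mean score {score[sel].mean():.2f}, observed rate {y_true[sel].mean():.2f}")
```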
This document discusses artificial intelligence in education. It provides examples of AI education tools like Thinkster Math, a math tutoring app, and Brainly, a social question-and-answer site. The potential impacts of AI include personalized learning through tools like Thinkster Math, intelligent tutoring systems to supplement teachers, and adaptive grouping of students. While AI may improve education through personalized learning, it could also lower costs at the expense of personal connection and displace some educators over time.
Artificial Intelligence
History of Artificial Intelligence
Artificial Intelligence Use Cases
Artificial Intelligence Applications
Ways of Achieving AI
Machine Learning
Deep Learning
Supervised and Unsupervised Learning
Classification vs Prediction
TensorFlow
TensorFlow Graphs
History of TensorFlow
Companies using TensorFlow
Using Deep Q Networks to Learn Video Game Strategies
TensorFlow Use Cases
AI & Deep Learning with TensorFlow
How TensorFlow is used today
For more updates on Big Data, Cloud Computing, Data Analytics, Artificial Intelligence and IoT, subscribe to http://www.mybigdataanalytics.in
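As a quick illustration of the "TensorFlow Graphs" topic in the outline above, here is a minimal sketch assuming TensorFlow 2.x, where tf.function traces ordinary Python into a graph that TensorFlow can optimize and execute. It is my own example, not taken from the deck.

```python
import tensorflow as tf

@tf.function
def affine(x, w, b):
    # Traced once into a graph, then executed as compiled ops rather than eagerly.
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[0.5], [0.25]])
b = tf.constant([0.1])
print(affine(x, w, b).numpy())   # [[1.1]]
```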
This document provides an introduction to artificial intelligence (AI). It discusses definitions of intelligence and what AI aims to achieve, including acting humanly through techniques like the Turing Test. The document outlines key disciplines related to AI and provides a short history of the field from its origins in 1943 to modern successes. Challenges, conferences, courses and books relevant to AI are also listed. It concludes with questions and sources.
Introduction of Artificial Intelligence and Machine Learning (bigdata trunk)
A workshop to introduce Artificial Intelligence and Machine Learning for beginners. It starts with the basics, terminology and concepts of machine learning, and compares these with deep learning and artificial intelligence. It highlights ML and AI offerings like Jupyter Notebook, Azure ML, Amazon SageMaker, TensorFlow, etc.
Benchmark comparison of Large Language Models (Matej Varga)
The document summarizes the results of a benchmark comparison that tested several large language models across different skillsets and domains. It shows that GPT-4 performed the best overall based on metrics like logical robustness, correctness, efficiency, factuality, and common sense. Tables display the scores each model received for different skillsets and how they compare between open-sourced, proprietary, and oracle models. The source is listed as an unreviewed preprint paper and related GitHub page under a Creative Commons license.
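For readers unfamiliar with how such tables are turned into an overall ranking, the sketch below averages per-skillset scores per model. The numbers are placeholders for illustration only; they are not the scores reported in the preprint.

```python
# Placeholder scores (NOT the paper's results), one entry per skillset.
scores = {
    "GPT-4":   {"robustness": 4.6, "correctness": 4.5, "factuality": 4.4},
    "Model B": {"robustness": 3.9, "correctness": 4.1, "factuality": 3.8},
    "Model C": {"robustness": 3.5, "correctness": 3.7, "factuality": 3.6},
}
overall = {model: sum(s.values()) / len(s) for model, s in scores.items()}
for model, avg in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: average score {avg:.2f}")
```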
Artificial Intelligence Course | AI Tutorial For Beginners | Artificial Intel... (Simplilearn)
This Artificial Intelligence presentation will help you understand what Artificial Intelligence is, the types of Artificial Intelligence, ways of achieving Artificial Intelligence and applications of Artificial Intelligence. In the end, we will also implement a use case in TensorFlow in which we will predict whether a person has diabetes or not. Artificial Intelligence is a method of making a computer, a computer-controlled robot or software think intelligently, in a manner similar to the human mind. AI is accomplished by studying the patterns of the human brain and by analyzing the cognitive process. Artificial Intelligence is emerging as the next big thing in the technology field. Organizations are adopting AI and budgeting for certified professionals in the field, so the demand for trained and certified professionals in AI is increasing. As this new field continues to grow, it will have an impact on everyday life and lead to considerable implications for many industries. Now, let us dive into the AI tutorial video and understand what Artificial Intelligence is all about and how it can impact human life.
The topics covered in this Artificial Intelligence presentation are as follows:
1. What is Artificial intelligence?
2. Types of Artificial intelligence
3. Ways of achieving artificial intelligence
4. Applications of Artificial intelligence
5. Use case - Predicting if a person has diabetes or not
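A hedged sketch of what the diabetes-prediction use case might look like in TensorFlow/Keras follows. The presentation does not show its exact code or data loading, so random numbers stand in for the clinical features here.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.normal(size=(768, 8)).astype("float32")                            # 8 clinical features per patient (stand-in data)
y = (X[:, 1] + X[:, 5] + rng.normal(0, 0.5, 768) > 0).astype("float32")    # stand-in diabetes label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of diabetes
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```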
Simplilearn’s Artificial Intelligence course provides training in the skills required for a career in AI. You will master TensorFlow, Machine Learning and other AI concepts, plus the programming languages needed to design intelligent agents, deep learning algorithms & advanced artificial neural networks that use predictive analytics to solve real-time decision-making problems without explicit programming.
Why learn Artificial Intelligence?
The current and future demand for AI engineers is staggering. The New York Times reports a candidate shortage for certified AI Engineers, with fewer than 10,000 qualified people in the world to fill these jobs, which according to Paysa earn an average salary of $172,000 per year in the U.S. (or Rs.17 lakhs to Rs. 25 lakhs in India) for engineers with the required skills.
Those who complete the course will be able to:
1. Master the concepts of supervised and unsupervised learning
2. Gain practical mastery over principles, algorithms, and applications of machine learning through a hands-on approach which includes working on 28 projects and one capstone project.
3. Acquire thorough knowledge of the mathematical and heuristic aspects of machine learning.
4. Understand the concepts and operation of support vector machines, kernel SVM, Naive Bayes, decision tree classifier, random forest classifier, logistic regression, K-nearest neighbors, K-means clustering and more.
Comprehend the theoretic
Learn more at: https://www.simplilearn.com
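As a small, self-contained illustration of the classifiers named in point 4, assuming scikit-learn (the course itself may use different tools and data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# A public tabular dataset stands in for whatever the course actually uses.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=5000)),
                  ("random forest", RandomForestClassifier(random_state=0)),
                  ("k-nearest neighbors", KNeighborsClassifier()),
                  ("SVM (RBF kernel)", SVC())]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.3f}")
```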
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone (James Anderson)
If Artificial Intelligence (AI) is a black box, how can a human comprehend and trust the results of Machine Learning (ML) algorithms? Explainable AI (XAI) tries to shed light on that AI black box so humans can trust what is going on. Our speaker Meg Dickey-Kurdziolek is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. Recording of the presentation: https://youtu.be/6N2DNN_HDWU
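One generic way to peer into a black-box model is permutation importance. The sketch below uses scikit-learn and is my own illustration, not the approach or tooling presented in the talk.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure how much held-out accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```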
AI, Machine Learning, and Data Science Concepts (Dan O'Leary)
An overview of AI, Machine Learning, and Data Science concepts, contrasting popular conceptions of AI to state-of-the-art methods in Data Science. An introduction to Machine Learning will compare supervised and unsupervised methods, give high-level descriptions of key methods, and discuss current use cases and trends.
Web version of presentation given to the Data Science Society of Auburn, a mix of undergraduate and graduate students interested in Data Science.
This document provides an overview of artificial intelligence (AI) including definitions of different types of AI, a brief history of AI, potential application fields and use cases, and the future outlook for AI. It defines AI as ranging from everyday applications to self-driving cars. It discusses narrow AI, general AI, and superintelligence. The document also summarizes key milestones in the development of AI from 1955 to the present and potential opportunities and challenges of AI including automation, ethics, and politics. It provides examples of Austrian AI startups and their technologies. The outlook suggests that human-level AI may be achieved by 2040 and superintelligence by 2060 with impacts on robotics, climate change, human enhancement, and autonomous
Introduction To Artificial Intelligence PowerPoint Presentation Slides (SlideTeam)
Introduction to Artificial Intelligence is aimed at mid-level managers, giving information about what AI is, AI levels, types of AI, and where AI is used. It also explains the differences between AI, Machine Learning and Deep Learning, to help understand expert systems for business growth. https://bit.ly/3er7KWI
This document discusses the rise of intelligent control systems using techniques like artificial intelligence, neural networks, fuzzy logic and genetic algorithms. It lists several applications of intelligent control systems, such as in manufacturing, aerospace, healthcare and consumer products. Intelligent control systems are characterized by their ability to adapt, learn autonomously, and form hierarchical structures. The document also briefly mentions uses of intelligent control in areas like truss construction, spacecraft assembly, and health monitoring.
Presenting this set of slides with the name Artificial Intelligence Overview PowerPoint Presentation Slides. This complete deck is designed to make sure you do not lag in your presentations. Our creatively crafted slides come with apt research and planning. This exclusive deck of thirty-seven slides is here to help you strategize, plan, analyse or segment the topic with clear understanding. Utilize ready-to-use presentation slides on Artificial Intelligence Overview with editable templates, charts and graphs, overviews and analysis templates. It is usable for marking important decisions and covering critical issues. Display and present all possible underlying nuances and progress factors for an all-inclusive presentation for the teams. This presentation deck can be used by all professionals, managers, individuals, and internal and external teams in any company or organization.
Artificial intelligence - in need of an ethical layer? (Inge de Waard)
This document discusses the need for an ethical framework for artificial intelligence (AI) used in education. It notes that algorithms and AI systems are developed by humans and can reflect their biases, potentially limiting opportunities for some groups. It suggests establishing an ethics commission or requiring ethics reviews of AI systems to ensure they promote values like diversity, inclusion, and student well-being rather than just replicating existing social norms. The document also questions whether establishing ethical guidelines is even possible given that AI systems are complex and outcomes are hard to predict. It asks readers to consider what an ethical approach to AI in education might include.
From Morten Rand-Hendriksen's Smashing Conference Freiburg 2018 talk.
Every decision we make is one made on behalf of our users. How do we know the decisions we make are the right ones? It is time we initiate a conversation: about where we are and where we want to go, about how we define and measure goodness and rightness in the digital realm, about responsibility, about decisions and consequences, about building something bigger than our own apps. It is time we talk about the ethics of design.
This talk introduces a method for ethical decision making in design and tech. Rather than a wet moralistic blanket covering the fires of creativity, ethics can be the hearth that makes our creative fires burn brighter without burning down the house.
https://smashingconf.com/speakers/morten-rand-hendriksen
NCompass Live - March 13, 2019
http://nlc.nebraska.gov/NCompassLive/
In the presentation on February 13, I covered what emerging technology is and how it relates to libraries. Now it’s time to dive into what that means on a larger scale. What makes technology good or bad? Who is really qualified to make that determination? Anyone who tracks emerging technology will start to think about how this technology will affect the future of our communities and the world. What will make a piece of technology influential and powerful in the world? Tune in if you want to learn more about the following topics:
•Which ethics matter most in technology?
•What makes technology good or bad?
•What potential dangers should we be aware of with new technology?
•What factors might affect society in the long-run?
There are no easy answers to any of these questions. The concept of 'ethics' tends to be a gray area for many, and understandably so. This presentation won't give you a definitive right or wrong stance on all technology. But it will provide you with the tools to make the decision for yourself.
Presenter: Amanda Sweet, Technology Innovation Librarian, Nebraska Library Commission.
The AI Revolution – Are Tech Titans Racing to the Bottom.pdf (Anshulsharma874284)
In the midst of the AI revolution, a profound and pressing question emerges: Are the tech titans of our age engaged in a race to the bottom? As artificial intelligence continues its relentless march into our lives, transforming industries, and reshaping the very fabric of society, it’s imperative that we pause to consider the ethical implications of this unceasing advancement.
The world’s largest technology companies, the likes of Google, Amazon, Facebook, and others, undeniably wield colossal influence over the development and deployment of AI technologies. Their innovation and progress have brought us astonishing breakthroughs, from voice-activated virtual assistants to self-driving cars. Yet this extraordinary power also raises ethical concerns, as the rapid pace of AI development may inadvertently compromise the ethical standards that should underpin such transformative technology.
This document discusses how networked nonprofits can transform communities through social media. It defines a networked nonprofit as one that works collaboratively through open information sharing rather than operating independently. It emphasizes developing a social culture where social media is a cultural norm, transparency by sharing information openly, and simplicity through leveraging networks to accomplish more with less. The document provides examples of how some nonprofits have successfully adopted these principles and cautions against potential challenges in making the transition to becoming a networked nonprofit.
The value of being human - finding balance between the artificial and nature ... (Salema Veliu)
A short opinion piece based upon a panel discussion I gave at the International Symposium on Technology and Society (ISTAS20), exploring the societal and individual implications of technology. It proposes that revisiting and embodying certain eastern philosophies that ground us in the natural world can provide balance to the artificial world we are creating. Understanding our previous, present and future relationships and behaviours with a higher intelligence may yet help us create a more accountable and holistic framework for AI, as echoed by the WEF.
Exploring AI Ethics: Challenges, Solutions, and Significance (Bluebash)
Artificial Intelligence, or AI, is not just a science fiction idea anymore. It's a strong and ever-present influence in our everyday lives. It helps us make decisions, molds our experiences, and impacts our future.
Artificial Intelligence AI in Libraries Training for Innovation Webinar (Said Ali Said)
Objectives
The objectives of the webinar are to:
• introduce AI in libraries
• describe the IDEA Institute on AI and its contribution to providing professional, innovative training in AI to library and other information professionals
• understand challenges and opportunities in implementing AI in libraries based on real-world experiences of the first cohort of Institute Fellows
• consider equity, diversity, inclusion and accessibility issues, and ethical questions, in AI implementation.
Speakers
Prof. Dr. Dania Bilal
Professor, School of Information Sciences at the University of Tennessee in Knoxville, TN.
Researcher, scholar and educator in Human Information Behavior, Human–Computer Interaction (HCI), User Experience and Design (UXD), Human–AI Interaction, and Information Science Theory.
Research focus is on user information interaction and behavior (children, teenagers and adults) with information systems, products and interfaces; and on user-centered design for better user engagement and experiences.
Principal Investigator and co-developer, IDEA Institute on Artificial Intelligence.
Clara M. Chu
Director and Mortenson Distinguished Professor, Mortenson Center for International Library Programs, University of Illinois at Urbana-Champaign, IL.
• Expert in developing appropriate and strategic solutions to deliver equitable and relevant library services in culturally diverse and dynamic libraries.
• Studies the information needs of culturally diverse communities in a globalized and technological society.
• Co-developer, IDEA Institute on Artificial Intelligence.
Target Audience
• Staff in any type of library and information center or information environment.
• Library and information science students, educators and researchers.
Dan Faggella - TEDx Slides 2015 - Artificial intelligence and Consciousness (Daniel Faggella)
URL of the original TEDx Talk: https://www.youtube.com/watch?v=PjiZbMhqqTM
Notes from my 2015 TEDx presentation, titled: "We Should Wake Up Before The Machines Do," on the topic of artificial intelligence and consciousness.
Speaker: Daniel Faggella
Location: Southern New Hampshire University
Similar to Ethical AI summit Dec 2023 notes from HB keynote (17)
This document discusses writing as an academic practice in light of generative AI technologies. It notes that while generative AI could enhance productivity, it may also narrow information access and gather user data. In contrast, purposes of student writing include expressing understanding, connecting to experience, developing voice and identity. The document argues that generative AI models are normative, extractive, unaccountable and lack human qualities like intention. It advocates for accountable assignments focusing on human aspects of writing that AI cannot replicate, like writing from a standpoint or to make a real difference. The document also discusses guidelines around disclosing AI assistance in academic work and encouraging critical use and understanding of generative AI technologies.
Helen Beetham discusses the need for universities to rethink how knowledge and thinking are practiced in their curriculums in a post-pandemic world. She argues that curriculums should value sustainability thinking, decolonization, digital practices like design thinking and coding, and data literacy. Universities also need to foster critical thinking about technology and its impacts. To prepare students for uncertain futures, curriculums should incorporate futures thinking exercises to imagine alternative futures and the knowledge needed to thrive in them.
This document discusses findings from a study on students' digital experiences and how they can inform the future of universities. Some key points:
1. Students focus more on transactional digital tasks, like accessing information, than on transformational skills. Their digital skills are often not developed for future careers.
2. Not all students thrive equally in digital spaces, and digital practices don't always transfer from personal to academic settings. Inequalities are amplified.
3. When done well, digital tools can engage students through flexibility, specialized applications, and up-to-date resources. But some students lack skills, connectivity, or are disengaged.
4. Lectures remain important but are changing, with students relying
The document summarizes a presentation on open education and critical digital literacies. It discusses:
1. The need for open education to develop critically resourceful learners who question aspects of their learning and see themselves as critical subjects.
2. How open education requires critical educators who challenge power dynamics and develop critical pedagogies.
3. Various ways learners can develop critical thinking skills through digital technologies, such as through situated practices, developing technical skills, and forming their identity as learners.
Education technology - a feminist space? (Helen Beetham)
This document discusses whether education technology can be considered a feminist space. It notes that while some see the field as supportive of women, issues around unequal opportunities for women in tech careers and algorithmic bias persist. The document advocates for applying feminist concepts and critical frameworks around power, social justice, and the "male gaze" to research and practice in digital education. It argues that developing students' critical thinking around technology's social impacts and biases could help address these issues.
Student digital experience tracker experts (Helen Beetham)
Slides from Jisc Student Experience Experts' meeting June 2016 introducing data from the Jisc Digital Student Experience Tracker pilot and findings about the Tracker process
The future is now: changes and challenges in the world of work (Helen Beetham)
The document discusses how digital technologies are changing the world of academic work. It notes that academic work is becoming more fragmented, uncertainly located, reputation-centered, monitored and quantified. It also discusses how work is becoming more entrepreneurial and distributed between human and machine tasks. The document proposes a digital capabilities framework to help university staff develop the skills needed to adapt to these changes in the digital university. It emphasizes the importance of developing digital skills for all staff roles.
Digital identities: resources for uncertain futures (Helen Beetham)
The document discusses digital identity and how it relates to students. It notes that digital identity involves a person's digital traces, personal data, and online presence. While eportfolios can support identity checks and reflection, identity work occurs across many digital platforms and systems. The conclusion emphasizes that learners need secure environments to explore emergent identities, and institutions should focus on developing students' long-term digital identity skills through playful identity work, a repertoire of skills rather than perfection, and progressively more open engagement online.
La Trobe Uni Innovation Showcase keynote (Helen Beetham)
This document discusses how digital technologies have changed education and innovation in the field. It notes that technology alone does not drive change, but how it is incorporated into social and educational practices can change values, goals, methods and tools. New knowledge areas and ways of knowing have emerged from digital technologies, including new data analysis methods, modes of representation, and theories of learning. Digital technologies also define new contexts for learning as universities and students increasingly use digital systems and practices. The future is uncertain but emphasizes students developing capabilities to thrive in rapid change, including through innovative teaching approaches that develop digital literacy. Barriers to innovation include organizational culture and infrastructure, but can be addressed through strategic planning and leadership support.
This document discusses the relationship between physical and virtual academic spaces. It makes three key points:
1. Academic campuses have become highly virtualized, with student status and learning achieved through digital systems and online interactions. However, virtual spaces cannot replace the value of in-person interactions.
2. Virtual spaces are designed environments that shape the meanings and uses that are possible within them. They also leave some students feeling exposed or vulnerable.
3. While the body seems excluded from virtual spaces, bodies are still present through digital traces, avatars, and the real-world labor that powers virtual systems. Virtual spaces both enable and challenge expressions of identity.
Outline of features of an educational organisation that might usefully be audited or assessed to determine its capacity to respond to digital opportunities and threats.
Wellbeing and responsibility: a new ethics for digital educators (Helen Beetham)
Slides for Jisc Learning and Teaching Experts' group June 2015 summarising work of Jisc Digital Student project and 'Framing digital capabilities' project. Summarises findings and draws out implications for 'digital wellbeing' as an emerging concern for staff and students.
Flipped learning is an arrangement where students complete independent study tasks before a taught session. This allows class time to be used for discussion, problem-solving, and other active learning activities led by the teacher. Both the independent and classroom portions can utilize technology like online videos and collaborative tools. Effective flipped class design includes allowing students to learn material before class, assessing understanding at the start of class, teaching responsively based on student needs, making pre-class work essential to in-class activities, and using class time for collaborative work and application of concepts.
Neutral version (university references removed) of a webinar designed and run for the University of Newcastle, April 2015. It deals with outcomes from the Jisc-funded Digital Student project and my own findings from interviews with students and consultation with sector bodies.
Neutral version (university references removed) of a workshop designed and run for the University of Bristol, March 2015. Deals with issues of blended, flipped and borderless learning and tries to distil some key principles.
Third of three slide decks for a flipped keynote presentation at the SEDA UK conference, November 2014. This looks at how we might 'recover' from the impacts of digital technology in education, and in particular what our responsibilities are as educational developers.
Second of three slide decks for a flipped keynote presentation at the SEDA UK conference, November 2014. This looks at two kinds of response to the digital revolution, a critical/intellectual response and a felt response.
Executive Directors Chat: Leveraging AI for Diversity, Equity, and Inclusion (TechSoup)
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
1. It is a great privilege and also I feel a great responsibility to be talking
about these issues with the learning technology community today. There
has been a lot of talk about ethics in relation to what is called 'AI', and in
particular 'generative AI': the new computational models that can
synthesise text and other media. We are all supposed to be making
‘responsible and ethical’ use of these new capabilities, and supporting
students to do the same.
2. What I want to do in this talk is to question whether we are really able to
be responsible, ethical actors in the way that guidelines like the Russell
Group’s ‘Principles on the Use of AI’ imagine us to be. I should say that I
am very grateful to the Russell Group for producing their guidance, and
to all the other people working hard in universities to respond to the
challenges and changes that are emerging at such speed. I think all
these guides are struggling with the same issues we are all struggling
with, so I’m using them as an available example and not as a particularly
bad one.
I’ve selected this warning from the Russell Group Principles to illustrate
what I think the problems are. An ethics code ‘may not be something that
users can easily verify’. I call this ‘Schroedinger’s ethics’. You probably
know the thought experiment in quantum physics. In the experiment, a
radioactive particle either does or does not decay, according to quantum
probability. If it decays, a flask of poison is broken and the cat that is
inside the box for some reason dies. But as you can’t see inside the box,
when the experiment is over the cat is, in some senses, both alive and
dead.
This is really a story about the nature of probability. Generative AI models
are probabilistic models. They have a certain, deliberate randomness in
the way they generate media. They are also black boxes. We don’t know
what data they were trained on, what patterns they are responding to,
how they were designed by model engineers or redesigned by hundreds
of human data annotators. So we can’t know if there is a piece of ethical
code in there or not. A bit like the live/dead cat.
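To make that 'deliberate randomness' concrete, here is a minimal sketch in Python - not any vendor's actual code - of how a temperature setting turns a model's scores for possible next words into a weighted random choice. The words and scores are invented for illustration.

```python
# A minimal sketch of the 'deliberate randomness' in generative models.
# Invented for illustration: a real model scores tens of thousands of
# tokens at every step, not three made-up words.
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Softmax the raw scores, then sample one word at random.

    Low temperature -> the most likely word is chosen almost every time.
    High temperature -> flatter probabilities, more surprising output.
    """
    scaled = {w: s / temperature for w, s in scores.items()}
    top = max(scaled.values())                      # for numerical stability
    weights = {w: math.exp(s - top) for w, s in scaled.items()}
    total = sum(weights.values())
    probs = {w: v / total for w, v in weights.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

scores = {"alive": 2.1, "dead": 2.0, "purring": 0.3}
print([sample_next_word(scores, temperature=0.8) for _ in range(5)])
print([sample_next_word(scores, temperature=0.1) for _ in range(5)])
```

Run it twice with the same scores and the first line will usually differ: that is the sense in which the same question can return a live cat one time and a dead one the next.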
3. I think this also shows how wrong it is to go looking for ethics inside a
black box. Even if we knew the cat was alive, would we know what kind
of ethical cat it was? Would we like its ethics? Is a cat, or a piece of code,
really an ethical actor?
The cat as code, if it exists, clearly belongs to the black box model. And
the black boxes belong to some of the most powerful corporations on the
planet. On the right you can see some of the current language models
and interfaces - you can always try matching them to their corporate
owners in the chat window. There are a few wild cards. On the left are the
organisations that are building and hoping to profit from them. What can
we expect from these corporations in terms of an ethical code? Their
track record is not good. The major player here eliminated its entire AI
Ethics team earlier this year as it deepened its partnership with its ‘not for
profit’ partner. The second player sacked ethics advisers Timnit Gebru
and Margaret Mitchell when they raised issues of bias and safety. xAI,
established earlier this year ‘to understand the true nature of the
universe’, learns that true nature by scraping X for content - the X from
which the entire ethics team was also sacked when its current proprietor
took it over. Lobbyists from these corporations have been busy in
Washington and Brussels watering down legislation that might provide
some external scrutiny of their designs and business models. Even if it
was a good idea to put an ethical cat in a box, the cat does not seem to
be very well.
4. So if we can’t rely on embedded ethical code, how about the actors at
the very opposite end of the AI stack, the end users, all of us? Well, the
Russell Group principles respect our personal agency and encourage us
to develop AI literacy - and as an educator of course I approve of this.
But to have agency you must have information, you must have the time
and opportunity to reflect, and you must have real choices. This is
problematic when the new capabilities are being integrated so rapidly
into core platforms such as VLEs, grading environments, search engines
and productivity tools. Not to mention all the thousands of apps and APIs
that are springing up between the models and the users, offering
everything from lesson plans to off-the-peg assignments, as diverse
actors seek a piece of the profitable pie.
The EU guidelines for educators are more circumspect about the issue of
agency. They don’t demand that we are fully formed ethical agents in
these novel situations, but that educators are able to ask questions, to
engage in dialogue with providers, to make demands on responsible
public bodies. Well, hang on there. Which responsible public bodies?
Universities? Colleges? Governments? What ethical rules have they come
up with, in the last year? What agency do they have to enforce them, on
our behalf?
We will come back to the question of who is ethically responsible at the
end, because there is no point me banging on about harms if it leaves all
of us feeling helpless or demoralised. If we are not, individually, very
empowered, we are certainly able to ask our universities and colleges
and the sector overall to provide us with a better environment for ethical
action.
5. What we see in the ALT framework, which I really like, are signs of
something I would call a relational approach to ethics. There is no fixed
code here. There are thoughts and explanations. We are asked to look
beyond our immediate context as users and recognise broader
responsibilities.
6. There are a great many things on the checklist of ethical concerns about
generative AI. Bias, privacy concerns, environmental issues, inequity,
disinformation, surveillance, copyright theft. These can seem random,
overwhelming, and disconnected. But if we take a relational approach, I
think we can better understand where the risks and harms arise. What is
a relational approach? It looks at how technology can reframe
relationships. It seeks to understand contexts and ecosystems rather
than focusing on individual users. It asks questions rather than ticking
boxes. I have taken these points from UNICEF and the Centre for
Techno-moral Futures, but you can find references to relational ethics in
healthcare, law and other professions if you search online.
7. Perhaps the most important feature of relationality to me as a feminist and
anti-racist is that I should recognise my own position. There is no ethics
from nowhere. This is precisely what AI in general, and synthetic models
in particular, propose. They offer a view from nowhere, a completely
unaccountable account of how the world is. They sound plausible, but
they have no real stake in the words or images they produce. “I’m sorry,
that isn’t what I meant,” they can always say. Again, and again.
So here is my own position story, which explains something of why I’m
here.
At 17 I turned down a place to study philosophy and psychology at
Oxford. I went to Sussex university to study AI and cognitive science
instead. It seemed to me a more plausible and certainly a more
contemporary and sexier promise of knowledge about the mind. By 19 I
had parted company with that degree, in a way that does not reflect very
well on my quest for knowledge. But I do know that from that time I was
and have remained convinced that the claims AI was making about
minds and ‘intelligence’ did not stack up. I was, you may conclude, a
very odd teenager - AI is full of odd teenagers - and I say that as
someone who remains good friends with people in the academic AI
community.
I kept my thoughts about AI to myself when I went on to work in
education, because it seemed to me that AI would always be the victim
of its own hype cycles, that it would never attract much attention or
credibility from the education community. And yet here we are.
So, where am I speaking from, besides my own weird preoccupations? I
am a researcher, particularly in digital literacy or how we become agents
- ethical agents - in relation to digital technologies. I am a teacher, some
of the time. I have a stake in the experiences and practices of students.
But, while I may be a marginal voice on this issue, I have a voice. I have
that privilege. I’m white and well educated.
8. The boundaries I find myself on the wrong side of are not going to mean my
credit is stopped, or members of my family locked up, or I am going to be denied treatment or
a border crossing. All these can happen to people who find themselves
on the wrong side of categories that are determined by the use of AI.
9. So these issues of power, privilege and position open out into a question:
whose AI is this? There is no definition of artificial intelligence that is not
also a definition of intelligence itself: who has it, and whose intelligence
matters. There is no ‘the human mind’, obviously, when you think about it.
There are only human body-minds in specific cultures, societies and
systems of thinking. The abstractions required to define ‘AI’ serve
particular positionalities and purposes.
‘Artificial intelligence’ is and always has been a project. It is a project in
computer science, about what kind of models can be built with enough
power, speed and scale. It is a project in big tech, about how data power
can shape platforms, interfaces, and operating systems, and therefore
work and workplaces. It is becoming a project in education. It is a project
to elevate some things that some human beings do, and neglect others
as undeserving and unimportant.
10. The idea that intelligence should be defined by the people in charge of
the machines - this has been with us for a long time. The 1956 Dartmouth
Conference that launched the term ‘AI’ took place in a particular culture:
in the US, at the peak of its economic and global hegemony. Intelligence
had won the war, thanks in part to the code-breaking Colossus computer.
Intelligence testing was being used to shape the school curriculum in the
US, and in the UK - both countries in which education was being
integrated for the first time. The men who coined the term AI were sure
they knew what intelligence meant. It meant global power. It meant
playing chess and solving mathematical problems. Or as Marvin Minsky
put it a couple of decades later it meant “the ability to solve hard
problems” (Minsky, 1985a). Of course it takes a certain kind of man to
know what’s hard - but as it turned out the simple problems like vision
and natural language were much, much harder to solve with computation
than the hard ones, like playing chess.
11. But the use of intelligence to divide people up, and assign them to
different categories, especially to different categories of work, this goes
all the way back through eugenics, intelligence testing, all the way back
to Charles Babbage, who in his time was not celebrated for his failed
difference engine, but for his much more successful work on making the
factory and plantation systems more efficient. Intelligence was taken from
the weaver and assigned to the punch card system and the factory
overseer. Plantations were also managed like machines, no worker
having oversight of the whole process. In exactly the same way, the
difference engine broke down the work of calculating into discrete
operations, so that the simplest could be done mechanically. If it had
been successfully made and used, the first people put out of work would
have been the women and children who did these basic calculations, to
produce the nautical almanacs that were essential to the colonial trade.
These workers were called ‘calculators’, just as the women workers on
the Colossus at Bletchley Park were called ‘computers’. It’s never the
most powerful people whose work can be taken over by a machine.
12. In the present day, writers like Meredith Whittaker, Simone Browne, Edward
Ongweso Jr., Joy Buolamwini of the Algorithmic Justice League and
many more are exposing the consequences of ‘AI’ for people who fall into
the wrong categories - whether it’s facial recognition in policing (this
diagram is from the Gender Shades project) or AI surveillance of borders
and conflict zones.
13. Talking of conflict zones, DARPA is the US military agency that has
funded AI since the early 60s. Much of this work has gone into
developing autonomous weapons, and supporting battlefield decisions. I
have removed the image originally included under this recent headline,
about DARPA funding a multi-million dollar project on battlefield AI. I’ve
removed it because in September this year, when the announcement was
published, an image of drones dropping bombs on their AI-designated
targets did not have the same capacity to distress as it would today. I
did not feel we would want to be confronted with that image. But I think it is
important to remember that the project of AI has always been one of
projecting global power.
14. You may think the link to military AI is a bit unfair. After all, general
technologies can be used in many different ways. DARPA funded the
development of speech recognition technologies that are now used to
support accessibility, for example. But the scale of military funding of AI,
at least until about 2014, was what kept the project alive. Inevitably it
skewed what that project was interested in. And it means that, for many
companies looking to sell systems to education today, those systems
have their roots in military and surveillance applications. Looking into the
‘Safer AI’ summit a few weeks ago, I found myself wondering why the
task of ‘unlocking the future of education’ had been given by the DfE to
an AI company called Faculty. Some of you may remember Faculty from
its role in the Vote Leave campaign and in helping to run logistics for
Number Ten during the Covid crisis. At the end of a blog post
summarising Faculty’s involvement with the future of AI in education was
a link inviting education leaders to ‘connect with Faculty about your AI
strategy’. Well, I clicked the link. I’m like that. And it goes straight to this
page of services to the military and law enforcement agencies. I would
call this a smoking gun if it wasn’t such a militaristic metaphor. But clearly
the same services that have been developed for law enforcement are
being sold directly to schools, with the endorsement of the DfE. This is
kind of unavoidable in an industry that only survived thanks to decades
of military funding.
15. I want to focus for the rest of my talk on what is called generative AI,
though from a statistical and computational point of view this is an
entirely different technology to the ones being developed in the 1950s
and the ones underpinning most surveillance technologies I just
described. I prefer to call this new technology synthetic media. This is
my definition (on the slide), but Emily Bender also called generative AI
‘synthetic media machines’, and the writer Ted Chiang is content with
‘applied statistics’. This is Naomi Klein’s more
politically positional definition:
16. So it isn’t quite as simple as ‘seizing’. This is where I want to talk more
particularly about how relationships are being reframed through these
technologies. So…
For example, LLaMA-2 was trained using more than 1 million human
annotations. The model in this diagram, taken from the open source project
DeciLM, is retuned with new data every week. I am really grateful to this
project for its somewhat untypical openness about the training process.
What you see at each of these four points is human work, human
knowledge work being done. But only one kind of work - the work of the
model engineers - is acknowledged and rewarded. The rest are forms of
‘intelligence’, if you like, that the ‘intelligent’ system does not want to
acknowledge or pay very much for. Like the human chess player hiding
in the Mechanical Turk, we are not supposed to see these people if we
want the magic to happen.
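To give a flavour of how those annotations become 'intelligence', here is a toy sketch - invented for illustration, not any lab's actual pipeline - of one common form of this work: annotators compare pairs of model answers, and their judgements are turned into a numeric reward signal used to retune the model.

```python
# A toy sketch of the work of the 'data engine'. Everything below is invented
# for illustration; real pipelines involve millions of such human judgements.
import math

# (answer_a, answer_b, which answer the human annotator preferred)
annotations = [
    ("a short, sourced answer", "a long, made-up answer", "a"),
    ("a polite refusal",        "harmful instructions",   "a"),
    ("a vague platitude",       "a concrete example",     "b"),
]

# One learned score per answer text - a stand-in for a reward model.
scores = {text: 0.0 for a, b, _ in annotations for text in (a, b)}

def update(preferred, rejected, lr=0.5):
    """Bradley-Terry style update: raise the preferred answer's score."""
    p = 1 / (1 + math.exp(scores[rejected] - scores[preferred]))
    scores[preferred] += lr * (1 - p)   # the bigger the surprise, the bigger the step
    scores[rejected]  -= lr * (1 - p)

for a, b, choice in annotations:
    preferred, rejected = (a, b) if choice == "a" else (b, a)
    update(preferred, rejected)

for text, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{s:+.2f}  {text}")
```

Multiplied across millions of such comparisons, this is the human judgement that the finished product then presents as its own.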
Now, in practice, most of the data workers who make up the third part,
who are usually referred to as the ‘data engine’, are workers in the global
south, where they are paid around $1-3 an hour, depending on the kinds
of data enrichment they are doing. This work is precarious and stressful.
Workers in Kenya, for example, are suing for the trauma they suffered
labelling violent and harmful images so the model’s producers could
claim that it was safe for users.
Also, nobody paid for the original training data - the data that actually makes
up the model and that, in being re-synthesised, now threatens the
livelihoods of creative and productive workers, and in fact the whole
economy of creative and scholarly production that all our own livelihoods
here depend on in the long term.
And finally there are all of us - every time we interact with the model we are
contributing to future training with our own creative ideas, our prompts,
our materials. Who benefits from this? In the short term, perhaps we are
made a bit more productive. Productivity is always enjoyable when it is fresh out of the box.
17. But who benefits when these become the new global operating systems for
all knowledge? Unlike the internet, which for all its flaws is an open,
distributed, standards-based architecture, these are closed systems.
And even as closed systems, they are not closed like an
organisation is closed, with all its explicit and implicit know-how
distributed throughout its resources and its technologies and its
employees. Through these new relationships of data and labour, valued
knowledge is entirely captured and managed in one system, a system
that can be owned by someone else.
Once again, the idea of intelligence is being used to divide and classify
labour, in order to deskill and devalue it. It’s still human beings, doing
human work. This is not some kind of magic box, it is just good old
fashioned Taylorist - or perhaps Babbage-ist - division of labour.
18. We need to think about where our students fit in all this. All the guidelines
I’ve seen imagine students as end users. So, the worst we can say about
that is they will have to be a lot more productive. Because if your
employer is expecting 250 ChatGPT articles a week, as some recent job
advertisements have asked for, or expects you to code in half the time it
used to take, with GitHub Copilot, you are not going to be paid more, you
are just going to be given less time. Perhaps a very few students will be highly
paid system designers. But increasing numbers of them will end up as
part of the data engine, perhaps working for one of the burgeoning
annotation companies, or perhaps working inside an organisation, tuning
its data model so other workers can continue to ramp up their
productivity. The International Labour Organisation highlights that people
under 30 are far more likely to be employed in platform data work than
older workers. The EU estimated five years ago that 10% of students had
worked in the gig economy - that number is surely higher now. Our
students are implicated in the data engine at every level. We can’t just
think of them as consumers of its products.
19. But as educators, we also care about how students are being addressed
as consumers. Type the words ‘writing’ and ‘AI’ into any search engine -
while you still can - and you will find hundreds of promoted websites,
selling services that promise to take away the drudgery of reading and
writing and give students back their time. And we should not be
moralising about this - as teachers and researchers we are being sold
exactly the same promise, and we are lapping it up. Take away the
drudgery, focus on what really matters. Except, what if ‘what really
matters’ sometimes is the hard work of reading and writing? What if you
can’t ‘humanise’ your text, your images, or your code, just by clicking a
button, but you can develop as a human being by engaging in those
activities for yourself?
In fact the evidence points entirely the opposite way to the promise. More
automation actually makes work routines more standardised and more
boring, for the people still left to do them. In the case of learning in
particular, it is not easy to know what this enhanced productivity is buying
you, unless it is more time to earn the money you need to pay for your
learning - perhaps with your side hustle in the data economy. Isn’t time to
read, write, learn and think precisely what education is supposed to be
buying you?
20. There are inequalities baked into the models themselves, as I have
argued, but they are also baked into the commercialisation of the
models, with paid versions rapidly overtaking free models in terms of
their performance. A recent study, which I found on the Institute of
Student Employers' website (though actually carried out by …), showed that
results on standard graduate recruitment tests are now skewed in favour
of applicants who can pay for premium models - which mostly means
applicants from the wealthiest households. A small levelling-up effect for
neurodiverse applicants, for example, was dwarfed by this financial
inequity. So the ISE’s conclusion is that this will ‘set social mobility efforts
back years’. It seems likely that recruitment will centre on live tasks,
interviews and team activities with no access to generative models. So
we do students a disservice if we don’t expose them to these same
conditions in their studies and assessments. It would be strange if
universities were falling behind graduate recruiters on such a key issue of
equity. Major companies won’t be relying on generic models. They won’t
want generic prompt engineers. They will want critical thinkers who can
express their ideas and work in teams, and if they use in-house models
for productivity reasons, they will train their people to use them. That’s my
prediction anyway.
21. So there are inequities in the making of the models, inequities in the
using of the models, and it’s now well known that the data the models are
built on has all kinds of bias built in. This data reflects views from the
distant past, views from the fringes of the internet, and above all it
predominantly reflects the views of white, English speaking men whose
ideas have made it into the digital record. I mainly know about language
models but when it comes to bias the image models are just much more
vivid. So again, this is research done by Bloomberg, because the big
companies really want to know how these models are impacting on their
ability to attract the best talent. They are putting equity to the fore. They
generated thousands of images using the names of occupations, and
they found the skin colour of the people depicted in those images
matched the typical pay of the occupations. I don’t want to show
generated images and risk perpetuating those ideas, but I find this a
striking image from their research.
22. The same study found similar issues for gender and pay, though
obviously gender is a very contested issue when it comes to how digital
images are gendered by viewers. I am not commenting on Bloomberg’s
specific process here, only that clearly the generated images were
stereotyping occupations along conventional gender lines.
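For readers who want to picture the method behind audits like these, here is a sketch of the general pattern: generate many images per occupation prompt, classify a visible attribute, and compare the distributions. Both functions below are placeholders with invented names, standing in for a real text-to-image model and a carefully validated classifier.

```python
# A sketch of the audit pattern behind studies like Bloomberg's. The two
# functions are placeholders invented for illustration; a real audit would
# call an actual image model and a validated classifier.
import random
from collections import Counter

def generate_image(prompt):
    """Placeholder for a text-to-image model call."""
    return f"<image for '{prompt}' #{random.randint(0, 9999)}>"

def classify_skin_tone(image):
    """Placeholder for a classifier returning a coarse skin-tone bucket."""
    return random.choice(["lighter", "darker"])

occupations = ["judge", "engineer", "fast-food worker", "housekeeper"]

for occupation in occupations:
    tally = Counter(
        classify_skin_tone(generate_image(f"a portrait of a {occupation}"))
        for _ in range(100)
    )
    print(occupation, dict(tally))

# Bloomberg's finding, roughly: the share of darker-skinned faces in the
# generated images tracked the typical pay of the occupation.
```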
23. And if it were not enough that these models perpetuate some of the most
violent, unjust ideas from our own past, they are also a threat to our
planetary future. There are of course bigger polluters than big tech, but
as these statistics show, the massive computing power required to run
models, both in training and for every inference run, is non-negligible and
is growing every year. There is a massive demand on water to cool the
data centres where thousands of Nvidia chips are running these models.
We are told the industry wants to develop less power and water hungry
systems. But at the moment, a shortage of chips is the only thing holding
back the development of even more powerful models. And computing
power is the reason why the big tech companies have gained market
dominance so early on. The first truly marketable products from the whole
AI project have come from throwing power and data at it. This is a winner
takes all market, and power wins. Why would the winners make it any
easier for competitors to get on board?
24. And finally, there is what we in universities and colleges have a special
care about - what you might call the knowledge ecology, or how ideas
are developed, tested, represented and shared. In relation to our care of
students, of course we must care about issues such as deepfakes, the
flooding of social media with disinformation, and what Cory Doctorow
calls the enshittification of the internet at large. But we must have a
special concern for how synthetic text and data will insert themselves
into the research, teaching and learning practices that we, in the sector,
rely on. Research is difficult, and we are always under pressure to do it
faster and more efficiently. But what if difficulty is sometimes, actually, the
point? To discover something that isn’t in the written record and isn’t in a
model - which is to say, a summary of previous research - either?
Teaching in ways that are adaptive to students’ needs takes time and skill
and personal attention. But what if that time and attention is actually what
they need? This quote comes from Luke Munn’s recent article on
evaluating AI on Indigenous Māori principles - and thanks to Paul
Prinsloo for taking me to this piece. He points out that generative AI
doesn’t just categorise us, it requires us to think in its categories, and
those categories may not be what we need to imagine alternative futures,
and discover alternative realities. The speed and efficiency of …
25. So I want to talk briefly now about what a relational ethical response to
these developments might look like. And one solution I think we should
resist is to reframe everything we teach and assess around a definition of
human skills in relation to what is called ‘artificial intelligence’. We don’t
need people who can work in collaboration with artificial intelligences -
that concedes agency to what are simply systems for coordinating our
own and other people’s work. It’s delusional. We shouldn’t accept that
what hype and computing power and the concentration of capital can
produce today should define what it is valuable and useful for graduates
to do tomorrow. These systems are brittle, unreliable, contentious,
inequitable, a legal minefield. Big companies are already investing less in
them than they were a year ago. It is not inevitable that they will dominate
the workplace, and maybe we should be sharing with students that there
are choices and there are doubts.
26. When it comes to working with students, I fully agree that we want
students to be critical, but not only about the outcomes of these models
and not only in relation to their own work. We want them to be asking
questions that go wider than that, depending of course on the focus of
the subjects they are studying. The questions may look different in
engineering, in history, and in nursing. But I think these are questions that
young people are already asking. These are not moralising questions about
abstract things such as originality and academic integrity; they are very
concrete questions about technology and learning, that young people have a
stake in.
27. Finally, I think we need to be creating an ecosystem in which ethical
choices are actually available, and the time and resources for people to
think, consider, ask questions, negotiate understandings. The new EU
regulations on AI are actually rather good at defining different kinds of
ethical actors in the AI space. Mostly they have let the big, general
models off the hook, for reasons we can speculate about. But despite
that, they are not at all interested in end users. The responsibility for
providing an ethical environment in which systems are deployed lies
mainly with the organisations providing the systems, in our case with
universities and colleges, and their organisations and regulatory bodies.
The EU classifies the use of all AI systems in education as high risk. It
requires all of these things… Now, do we really believe that any of the
models we are using in further and higher education could meet these
requirements? And if not, how do we get there? I don’t see how we can
do that without, as a sector, deciding to build and maintain our own
models.
28. It will be challenging. This chart shows the huge brain drain there has
been from academic AI to the commercial sector. Although small, open
models are now being built that can run on a laptop, to serve our sector a
lot of computing power will undoubtedly be needed. We will have to
relate to the large scale commercial models at some level. But by having
a collective voice, the sector could negotiate that relationship more
effectively for all of us. We can only do this, therefore, collaboratively and
openly. Otherwise this will just be another source of inequity and
stratification across universities and colleges and their members. Wealthy
businesses like Bloomberg are already doing this. I have no doubt
wealthy universities and research institutes are doing it. But we need to
start joining up.
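For anyone unsure what 'a small, open model on a laptop' means in practice, here is a minimal sketch using the open source Hugging Face transformers library. The model named (gpt2) is only a small, openly licensed example, not a recommendation; a sector-run service would choose and govern its own models, data and terms.

```python
# A minimal sketch, assuming Python and the open source 'transformers'
# library are installed (pip install transformers torch). 'gpt2' is just one
# example of a small, openly licensed model that runs on a laptop CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Universities could collectively run language models that",
    max_new_tokens=40,
    do_sample=True,      # the same deliberate randomness discussed earlier
    temperature=0.8,
)
print(result[0]["generated_text"])
```

The point is not that a model this small is good enough, but that the machinery is openly available; what the sector lacks is not code so much as shared compute, coordination and governance.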
Collectively, universities and colleges are key ethical actors. Perhaps
uniquely as a sector we still have the know-how. We have reasons to
collaborate - we are not really a market, however many governments try
to make it so. We have a very particular stake in knowledge, knowledge
production, and values around knowledge. And we do build open
knowledge projects - Wikipedia rests very heavily on the work of students
and academics. We have contributed extensively to open standards
since the birth of the internet. We have thriving open source and open
education communities. But will the sector act collectively, in this case?
29. I want to leave you with the thought that we do all have a voice in this.
This audience is one of the most influential when it comes to how the
sector responds, collectively. I started with a black box that couldn’t be
opened; I want to finish with a box that perhaps shouldn’t have been
opened. When Pandora’s box was opened, all kinds of troubles came
out. But at the bottom there was still hope. And I have great hope in the
responses being made by members of the ALT community to the
challenges of these new technologies, founded on our FELT values. And
I look forward to hearing about more of them today.