The document discusses issues around whose ethics and values are embedded in generative AI tools. It notes that while ethics codes exist, users cannot easily verify what values are incorporated. It advocates for a relational approach that considers the dynamic contexts and relationships in which AI is developed and used. The document outlines how generative AI works by training on large datasets and being refined through user prompts, but this process can encode biases and privilege some voices over others. It raises questions about the environmental impact, risks to education and jobs, and how AI may define and value humanity. It argues we need an ecosystem that fosters agency, care, accountability and representation when developing and using generative AI technologies.
Ethical AI Summit, Dec 2023: notes from HB keynote, by Helen Beetham
Somewhat extended and tidied-up text of Helen Beetham's keynote at the ALT Winter Summit on AI and Ethics, December 2023. The slides here are draft quality, for navigation only; a better-quality set of slides is also available.
ALT Ethical AI summit, HB keynote, Dec 2023
1. Whose ethics? Whose AI? A relational approach to the challenge of ethical AI
ALT Winter Summit 2023: Ethics and AI
Helen Beetham
@helenbeetham
helenbeetham.substack.com
2. Whose ethics?
‘While ethics codes exist, they may not be embedded within all generative AI tools and their incorporation, or otherwise, may not be something that users can easily verify.’
Russell Group: ‘Principles on the Use of AI in Education’, July 2023
Image: CC0 Public Domain via Wikimedia Commons
4. Whose ethics? “Users”
All students and staff understand the opportunities, limitations and ethical issues associated with the use of these tools and can apply what they have learned as the capabilities of generative AI develop…
The school or educator is able to formulate some relevant questions and engage in a constructive dialogue with AI systems providers or with the responsible public bodies…
5. Whose ethics?
• Think critically and consider the wider environment
• Recognise responsibilities and influence beyond your institution
• Care of self and others
• Be accountable and prepared to explain decisions
• Recognise possibility of bias
Association for Learning Technology, 2022
6. Relational ethics
• Relationships at the core
• Understanding context and ecosystem
• Asking the right questions
“in order to understand how to be ethical we need to understand the dynamic, interwoven contexts and relationships within which [innovation] is designed and deployed”
From the Centre for Technomoral Futures and the UNICEF Data for Children Collaborative
7. The principle of positionality
“to pay attention to positionality, reflexivity, and how this shapes the production of knowledge…”
Farhana Sultana (2007) via the Equality Institute
Image: CC BY-SA 2.0, Rama, via Wikimedia Commons
8. Pei Wang (2019), review of ‘The Concept of Artificial Intelligence’, in the Journal of Artificial General Intelligence
“Every working definition of AI corresponds to an abstraction that describes the mind from a certain point of view… This abstraction guides the construction of a computer system that is [meant to be] similar to a human mind in that sense, while neglecting other aspects of the human mind as irrelevant.”
Whose AI?
9. Herbert Simon on the ‘Logic Theorist’ programme, 1956
‘We believe that we can start with some of the most advanced human activities—i.e. proving theorems—and work back to the “simplest”’
Image: Simon and Newell playing chess, unt.univ-cotedazur.fr
Whose AI?
14. Whose (generative) AI?
“The wealthiest companies in history unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products”
Naomi Klein, 2023
And why I prefer the term ‘synthetic media’: the statistical modelling and re-synthesis of language, images, music, video, data, and other digital records of human communications and cultural meanings
15. How synthesis works
1. Original ‘training’ data or corpus: human-authored text
2. Training process: model engineers continually adjust parameters over multiple training runs
3. Diverse forms of human refinement, from labelling to research and demonstrator texts
4. User prompts call and refine inferences; reused as training data
Central image from deciAI via Substack, annotations HB
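The loop on this slide can be sketched in miniature. The toy below (my illustration, not anything from the talk) trains a bigram word model on a tiny human-authored corpus and then answers a "prompt" by sampling statistically likely continuations; real generative systems differ enormously in scale and architecture, and step 3 (human refinement) is omitted here, but the shape of the cycle is the same.

```python
import random
from collections import defaultdict

# 1. Original 'training' data: a (tiny) human-authored corpus.
corpus = "the model learns the statistics of the training text".split()

# 2. 'Training': record which word follows which. These counts play
#    the role of the model's parameters.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# 3. (Human refinement such as labelling and feedback is skipped here.)

# 4. A user prompt calls inference: sample a plausible continuation
#    word by word from the learned statistics.
def generate(prompt_word, length=5, seed=0):
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:       # no observed continuation: stop
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits is re-synthesised from the corpus it was trained on, which is the point of preferring 'synthetic media' as a term: the output is a statistical recombination of the human-authored input.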
17. How synthesis makes us all more productive
What is “productivity” in learning (and in teaching, and in research)?
Who benefits? At the expense of what, or who?
21. Environmental impact
“generating an image using a powerful AI model takes as much energy as fully charging your smartphone”
MIT Technology Review, December 2023
Inference requires 4-10x the compute of indexed search (Stanford AI Index Report 2023)
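To get a feel for the scale these two claims imply, a rough back-of-envelope calculation helps. The battery figure below is my assumption (a typical smartphone holds roughly 12 Wh), and the traffic figure is purely illustrative; neither comes from the talk or the cited reports.

```python
# Scale check for the slide's figures (illustrative assumptions only).
WH_PER_IMAGE = 12       # assumed smartphone battery capacity in Wh,
                        # i.e. "one image = one full charge"
IMAGES = 1_000_000      # hypothetical daily volume for a popular service

kwh = WH_PER_IMAGE * IMAGES / 1000
print(f"{IMAGES:,} images/day is about {kwh:,.0f} kWh/day")  # 12,000 kWh

# The Stanford AI Index comparison: per query, model inference costs
# roughly 4-10 units of compute for every 1 unit of indexed search.
search_units = 1
inference_range = (4 * search_units, 10 * search_units)
print(f"inference per query: {inference_range[0]}-{inference_range[1]}x search")
```

Even under these generous simplifications, a single image service at that volume would draw on the order of a small town's daily household electricity use, which is why the compute multiplier for inference matters.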
22. The knowledge ecology
“The designer of the system holds the power to decide what the truth of the world will be… What will be left for higher education when ChatGPT and other emerging LLMs have become de facto arbiters of truth?”
Luke Munn, 2023
23. “Skills humans need”
• Concedes agency to probabilistic systems
• Creates new divisions of intellectual labour, value and reward among people
• Defines ‘human’ and ‘intelligence’ universally
• Defines ‘human’ as whatever ‘technology’ is not (yet)
• Invests in a particular version of ‘the future’…
• … that has no future
24. AI ‘literacy’ of questioning
• How are outputs synthesised (really)?
• Who profits? Who is exploited or excluded? Who is not represented?
• What is the environmental impact?
• How do models enhance bias, inequality, and privatisation, as well as improving access and productivity?
• What are the risks to human creative and intellectual work in different scenarios of widespread use?
• Whose work / knowledge should be valued and why?
(C) Dominika Zarzycka, used with photographer’s permission
25. Education systems categorised by the EU as ‘high risk’
• Adequate risk assessment and mitigation
• High quality datasets to minimise risks and biases
• Full record to ensure traceability and accountability
• Appropriate human oversight
• Robustness, security, accuracy
Building an ecosystem for agency and care
Image: Ada Lovelace Institute