It is a great privilege and also I feel a great responsibility to be talking
about these issues with the learning technology community today. There
has been a lot of talk about ethics in relation to what is called 'AI', and in
particular 'generative AI', the new computational models that can
synthesise text and other media. We are all supposed to be making
'responsible and ethical' use of these new capabilities, and supporting
students to do the same.
What I want to do in this talk is to question whether we are really able to
be responsible, ethical actors in the way that guidelines like the Russell
Group's 'Principles on the Use of AI' imagine us to be. I should say that I
am very grateful to the Russell Group for producing their guidance, and
to all the other people working hard in universities to respond to the
challenges and changes that are emerging at such speed. I think all
these guides are struggling with the same issues we are all struggling
with, so I'm using them as an available example and not as a particularly
bad one.
I've selected this warning from the Russell Group Principles to illustrate
what I think the problems are. An ethics code 'may not be something that
users can easily verify'. I call this 'Schroedinger's ethics'. You probably
know the thought experiment in quantum physics. In the experiment, a
radioactive particle either does or does not decay, according to quantum
probability. If it decays, a flask of poison is broken and the cat that is
inside the box for some reason dies. But as you can't see inside the box,
until it is opened the cat is, in some sense, both alive and dead.
This is really a story about the nature of probability. Generative AI models
are probabilistic models. They have a certain, deliberate randomness in
the way they generate media. They are also black boxes. We don't know
what data they were trained on, what patterns they are responding to,
how they were designed by model engineers or redesigned by hundreds
of human data annotators. So we can't know if there is a piece of ethical
code in there or not. A bit like the live/dead cat.
code in there or not. A bit like the live/dead cat.
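As an aside, for anyone who wants to see what that 'deliberate randomness' looks like in practice, here is a minimal Python sketch. It is not any vendor's actual code, and the candidate scores and temperature value are invented for illustration: a language model assigns a score to every possible next token, and one token is then drawn at random, with a 'temperature' setting controlling how adventurous the draw is.

import numpy as np

def sample_next_token(logits, temperature=0.8):
    # Turn the model's raw scores into probabilities (a softmax),
    # dividing by the temperature first: lower temperature = less random.
    rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Draw one token index at random according to those probabilities.
    return int(rng.choice(len(probs), p=probs))

# Three hypothetical candidate tokens with different scores. Run this
# repeatedly and the answer will vary - which is why the same prompt
# can produce different outputs each time.
print(sample_next_token([2.0, 1.5, 0.3]))

Run it a few times and you get different answers from identical inputs, which is the point: the randomness is a design choice, not a malfunction.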
I think this also shows how wrong it is to go looking for ethics inside a
black box. Even if we knew the cat was alive, would we know what kind
of ethical cat it was? Would we like its ethics? Is a cat, or a piece of code,
really an ethical actor?
The cat as code, if it exists, clearly belongs to the black box model. And
the black boxes belong to some of the most powerful corporations on the
planet. On the right you can see some of the current language models
and interfaces - you can always try matching them to their corporate
owners in the chat window. There are a few wild cards. On the left are the
organisations that are building and hoping to profit from them. What can
we expect from these corporations in terms of an ethical code? Their
track record is not good. The major player here eliminated its entire AI
Ethics team earlier this year as it deepened its partnership with its 'not for
profit' partner. The second player sacked ethics advisers Timnit Gebru
and Margaret Mitchell when they raised issues of bias and safety. xAI,
established earlier this year 'to understand the true nature of the
universe', learns that true nature by scraping X for content - the X from
which the entire ethics team was also sacked when its current proprietor
took it over. Lobbyists from these corporations have been busy in
Washington and Brussels watering down legislation that might provide
some external scrutiny of their designs and business models. Even if it
was a good idea to put an ethical cat in a box, the cat does not seem to
be very well.
So if we can't rely on embedded ethical code, how about the actors at
the very opposite end of the AI stack, the end users, all of us? Well, the
Russell Group principles respect our personal agency and encourage us
to develop AI literacy - and as an educator of course I approve of this.
But to have agency you must have information, you must have the time
and opportunity to reflect, and you must have real choices. This is
problematic when the new capabilities are being integrated so rapidly
into core platforms such as VLEs, grading environments, search engines
and productivity tools. Not to mention all the thousands of apps and APIs
that are springing up between the models and the users, offering
everything from lesson plans to off-the-peg assignments, as diverse
actors seek a piece of the profitable pie.
The EU guidelines for educators are more circumspect about the issue of
agency. They don't demand that we are fully formed ethical agents in
these novel situations, but that educators are able to ask questions, to
engage in dialogue with providers, to make demands on responsible
public bodies. Well, hang on there. Which responsible public bodies?
Universities? Colleges? Governments? What ethical rules have they come
up with, in the last year? What agency do they have to enforce them, on
our behalf?
We will come back to the question of who is ethically responsible at the
end, because there is no point me banging on about harms if it leaves all
of us feeling helpless or demoralised. If we are not, individually, very
empowered, we are certainly able to ask our universities and colleges
and the sector overall to provide us with a better environment for ethical
action.
What we see in the ALT framework, which I really like, are signs of
something I would call a relational approach to ethics. There is no fixed
code here. There are thoughts and explanations. We are asked to look
beyond our immediate context as users and recognise broader
responsibilities.
There are a great many things on the checklist of ethical concerns about
generative AI. Bias, privacy concerns, environmental issues, inequity,
disinformation, surveillance, copyright theft. These can seem random,
overwhelming, and disconnected. But if we take a relational approach, I
think we can better understand where the risks and harms arise. What is
a relational approach? It looks at how technology can reframe
relationships. It seeks to understand contexts and ecosystems rather
than focusing on individual users. It asks questions rather than ticking
boxes. I have taken these points from UNICEF and the Centre for
Techno-moral Futures, but you can find references to relational ethics in
healthcare, law and other professions if you search online.
Perhaps the most important feature of relationality to me as a feminist and
anti-racist is that I should recognise my own position. There is no ethics
from nowhere. This is precisely what AI in general, and synthetic models
in particular, propose. They offer a view from nowhere, a completely
unaccountable account of how the world is. They sound plausible, but
they have no real stake in the words or images they produce. "I'm sorry,
that isn't what I meant," they can always say. Again, and again.
So here is my own position story, which explains something of why I'm
here.
At 17 I turned down a place to study philosophy and psychology at
Oxford. I went to Sussex University to study AI and cognitive science
instead. It seemed to me a more plausible and certainly a more
contemporary and sexier promise of knowledge about the mind. By 19 I
had parted company with that degree, in a way that does not reflect very
well on my quest for knowledge. But I do know that from that time I was
and have remained convinced that the claims AI was making about
minds and 'intelligence' did not stack up. I was, you may conclude, a
very odd teenager - AI is full of odd teenagers - and I say that as
someone who remains good friends with people in the academic AI
community.
I kept my thoughts about AI to myself when I went on to work in
education, because it seemed to me that AI would always be the victim
of its own hype cycles, that it would never attract much attention or
credibility from the education community. And yet here we are.
So, where am I speaking from, besides my own weird preoccupations? I
am a researcher, particularly in digital literacy or how we become agents
- ethical agents - in relation to digital technologies. I am a teacher, some
of the time. I have a stake in the experiences and practices of students.
But, while I may be a marginal voice on this issue, I have a voice. I have
that privilege. I'm white and well educated. The boundaries I find myself
on the wrong side of are not going to mean my credit is stopped, or
members of my family locked up, or that I am denied treatment or a
border crossing. All these things can happen to people who find themselves
on the wrong side of categories that are determined by the use of AI.
So these issues of power, privilege and position open out into a question:
whose AI is this? There is no definition of artificial intelligence that is not
also a definition of intelligence itself: who has it, and whose intelligence
matters. There is no 'the human mind', obviously, when you think about it.
There are only human body-minds in specific cultures, societies and
systems of thinking. The abstractions required to define 'AI' serve
particular positionalities and purposes.
'Artificial intelligence' is and always has been a project. It is a project in
computer science, about what kind of models can be built with enough
power, speed and scale. It is a project in big tech, about how data power
can shape platforms, interfaces, and operating systems, and therefore
work and workplaces. It is becoming a project in education. It is a project
to elevate some things that some human beings do, and neglect others
as undeserving and unimportant.
The idea that intelligence should be defined by the people in charge of
the machines - this has been with us for a long time. The 1956 Dartmouth
Conference that launched the term 'AI' took place in a particular culture:
in the US, at the peak of its economic and global hegemony. Intelligence
had won the war, thanks in part to the code-breaking Colossus computer.
Intelligence testing was being used to shape the school curriculum in the
US, and in the UK - both countries in which education was being
integrated for the first time. The men who coined the term AI were sure
they knew what intelligence meant. It meant global power. It meant
playing chess and solving mathematical problems. Or as Marvin Minsky
put it a couple of decades later, it meant "the ability to solve hard
problems" (Minsky, 1985a). Of course it takes a certain kind of man to
know what's hard - but as it turned out, the simple problems like vision
and natural language were much, much harder to solve with computation
than the hard ones, like playing chess.
But the use of intelligence to divide people up, and assign them to
different categories, especially to different categories of work, this goes
all the way back through eugenics, intelligence testing, all the way back
to Charles Babbage, who in his time was not celebrated for his failed
difference engine, but for his much more successful work on making the
factory and plantation systems more efficient. Intelligence was taken from
the weaver and assigned to the punch card system and the factory
overseer. Plantations were also managed like machines, no worker
having oversight of the whole process. In exactly the same way, the
difference engine broke down the work of calculating into discrete
operations, so that the simplest could be done mechanically. If it had
been successfully made and used, the first people put out of work would
have been the women and children who did these basic calculations, to
produce the nautical almanacs that were essential to the colonial trade.
These workers were called 'calculators', just as the women workers on
the Colossus at Bletchley Park were called 'computers'. It's never the
most powerful people whose work can be taken over by a machine.
In the present day, writers like Meredith Whittaker, Simone Browne, Edward
Ongweso Jr, Joy Buolamwini of the Algorithmic Justice League and
many more are exposing the consequences of 'AI' for people who fall into
the wrong categories - whether it's facial recognition in policing (this
diagram is from the Gender Shades project) or AI surveillance of borders
and conflict zones.
Talking of conflict zones, DARPA is the US military agency that has
funded AI since the early 60s. Much of this work has gone into
developing autonomous weapons, and supporting battlefield decisions. I
have removed the image originally included under this recent headline,
about DARPA funding a multi-million dollar project on battlefield AI. I've
removed it because in September this year, when the announcement was
published, an image of drones dropping bombs on their AI-designated
targets did not have the same capacity to distress as it would today. I
did not feel we would want to be confronted with that image. But I think it is
important to remember that the project of AI has always been one of
projecting global power.
You may think the link to military AI is a bit unfair. After all, general
technologies can be used in many different ways. DARPA funded the
development of speech recognition technologies that are now used to
support accessibility, for example. But the scale of military funding of AI,
at least until about 2014, was what kept the project alive. Inevitably it
skewed what that project was interested in. And it means that many of the
systems companies are looking to sell to education today have their roots
in military and surveillance applications. Looking into the
'Safer AI' summit a few weeks ago, I found myself wondering why the
task of 'unlocking the future of education' had been given by the DfE to
an AI company called Faculty. Some of you may remember Faculty from
its role in the Vote Leave campaign and in helping to run logistics for
number ten during the covid crisis. At the end of a blog post
summarising Faculty's involvement with the future of AI in education was
a link inviting education leaders to 'connect with Faculty about your AI
strategy'. Well, I clicked the link. I'm like that. And it goes straight to this
page of services to the military and law enforcement agencies. I would
call this a smoking gun if it wasn't such a militaristic metaphor. But clearly
the same services that have been developed for law enforcement are
being sold directly to schools, with the endorsement of the DfE. This is
kind of unavoidable in an industry that only survived thanks to decades
of military funding.
I want to focus for the rest of my talk on what is called generative AI,
though from a statistical and computational point of view this is an
entirely different technology to the ones being developed in the 1950s
and the ones underpinning most surveillance technologies I just
described. I prefer to call this new technology synthetic media. This is
my definition (on the slide), but Emily Bender also called generative AI
'synthetic media machines', and Ted Chiang, another AI insider turned
critic, is content with 'applied statistics'. This is Naomi Klein's more
politically positional definition:
So it isn't quite as simple as 'seizing'. This is where I want to talk more
particularly about how relationships are being reframed through these
technologies. So...
For example, LLaMA-2 was trained using more than 1 million human
annotations. The model in this diagram, taken from open source project
DeCiLM, is retuned with new data every week. I am really grateful to this
project for its somewhat untypical openness about the training process.
What you see at each of these four points is human work, human
knowledge work being done. But only one kind of work - the work of the
model engineers - is acknowledged and rewarded. The rest are forms of
'intelligence', if you like, that the 'intelligent' system does not want to
acknowledge or pay very much for. Like the human chess player hiding
in the Mechanical Turk, we are not supposed to see these people if we
want the magic to happen.
Now, in practice, most of the data workers who make up the third part,
who are usually referred to as the 'data engine', are workers in the global
south, where they are paid around $1-3 an hour, depending on the kinds
of data enrichment they are doing. This work is precarious and stressful.
Workers in Kenya, for example, are suing for the trauma they suffered
labelling violent and harmful images so the model's producers could
claim that it was safe for users.
Also, nobody paid for the original training data that actually makes up
the data model - data which, in being re-synthesised, now threatens the
livelihoods of creative and productive workers, and in fact the whole
economy of creative and scholarly production that all our own livelihoods
here depend on in the long term.
And finally there are all of us - every time we interact with the model we are
contributing to future training with our own creative ideas, our prompts,
our materials. Who benefits from this? In the short term, perhaps we are
made a bit more productive. Productivity is always enjoyable when it is
fresh out of the box. But who benefits when these become the new global
operating systems for all knowledge? Unlike the internet, which for all its
flaws is an open, distributed, standards-based architecture, these are
closed systems. And even as closed systems, they are not closed like an
organisation is closed, with all its explicit and implicit know-how
distributed throughout its resources and its technologies and its
employees. Through these new relationships of data and labour, valued
knowledge is entirely captured and managed in one system, a system
that can be owned by someone else.
Once again, the idea of intelligence is being used to divide and classify
labour, in order to deskill and devalue it. It's still human beings, doing
human work. This is not some kind of magic box, it is just good old
fashioned Taylorist - or perhaps Babbage-ist - division of labour.
We need to think about where our students fit in all this. All the guidelines
I've seen imagine students as end users. So, the worst we can say about
that is they will have to be a lot more productive. Because if your
employer is expecting 250 ChatGPT articles a week, as some recent job
advertisements have asked for, or expects you to code in half the time it
used to take, with GitHub Copilot, you are not going to be paid more, you
are just going to be given less time. Perhaps a very few students will be highly
paid system designers. But increasing numbers of them will end up as
part of the data engine, perhaps working for one of the burgeoning
annotation companies, or perhaps working inside an organisation, tuning
its data model so other workers can continue to ramp up their
productivity. The International Labour Organisation highlights that people
under 30 are far more likely to be employed in platform data work than
older workers. The EU estimated five years ago that 10% of students had
worked in the gig economy - that number is surely higher now. Our
students are implicated in the data engine at every level. We can't just
think of them as consumers of its products.
But as educators, we also care about how students are being addressed
as consumers. Type the words 'writing' and 'AI' into any search engine -
while you still can - and you will find hundreds of promoted websites,
selling services that promise to take away the drudgery of reading and
writing and give students back their time. And we should not be
moralising about this - as teachers and researchers we are being sold
exactly the same promise, and we are lapping it up. Take away the
drudgery, focus on what really matters. Except, what if 'what really
matters' sometimes is the hard work of reading and writing? What if you
can't 'humanise' your text, your images, or your code, just by clicking a
button, but you can develop as a human being by engaging in those
activities for yourself?
In fact the evidence points entirely the opposite way to the promise. More
automation actually makes work routines more standardised and more
boring, for the people still left to do them. In the case of learning in
particular, it is not easy to know what this enhanced productivity is buying
you, unless it is more time to earn the money you need to pay for your
learning - perhaps with your side hustle in the data economy. Isn't time to
read, write, learn and think precisely what education is supposed to be
buying you?
There are inequalities baked into the models themselves, as I have
argued, but they are also baked into the commercialisation of the
models, with paid versions rapidly overtaking free models in terms of
their performance. A recent study, which I found on the Institute of
Student Employers website (though it was actually carried out by another
organisation), showed that results on standard graduate recruitment tests
are now skewed in favour of applicants who can pay for premium models -
which, in practice, means applicants from the wealthiest households. A small levelling up effect for
neurodiverse applicants, for example, was dwarfed by this financial
inequity. So the ISE's conclusion is that this will 'set social mobility efforts
back years'. It seems likely that recruitment will centre on live tasks,
interviews and team activities with no access to generative models. So
we do students a disservice if we don't expose them to these same
conditions in their studies and assessments. It would be strange if
universities were falling behind graduate recruiters on such a key issue of
equity. Major companies won't be relying on generic models. They won't
want generic prompt engineers. They will want critical thinkers who can
express their ideas and work in teams, and if they use in-house models
for productivity reasons, they will train their people to use them. That's my
prediction anyway.
So there are inequities in the making of the models, inequities in the
using of the models, and it's now well known that the data the models are
built on has all kinds of bias built in. This data reflects views from the
distant past, views from the fringes of the internet, and above all it
predominantly reflects the views of white, English speaking men whose
ideas have made it into the digital record. I mainly know about language
models but when it comes to bias the image models are just much more
vivid. So again, this is research done by Bloomberg, because the big
companies really want to know how these models are impacting on their
ability to attract the best talent. They are putting equity to the fore. They
generated thousands of images using the names of occupations, and
they found the skin colour of the people depicted in those images
matched the typical pay of the occupations. I don't want to show
generated images and risk perpetuating those ideas, but I find this a
striking image from their research.
The same study found similar issues for gender and pay, though
obviously gender is a very contested issue when it comes to how digital
images are gendered by viewers. I am not commenting on Bloomberg's
specific process here, only that clearly the generated images were
stereotyping occupations along conventional gender lines.
And if it were not enough that these models perpetuate some of the most
violent, unjust ideas from our own past, they are also a threat to our
planetary future. There are of course bigger polluters than big tech, but
as these statistics show, the massive computing power required to run
models, both in training and for every inference run, is non-negligible and
is growing every year. There is a massive demand on water to cool the
data centres where thousands of Nvidia chips are running these models.
We are told the industry wants to develop less power and water hungry
systems. But at the moment, a shortage of chips is the only thing holding
back the development of even more powerful models. And computing
power is the reason why the big tech companies have gained market
dominance so early on. The first truly marketable products from the whole
AI project have come from throwing power and data at it. This is a winner
takes all market, and power wins. Why would the winners make it any
easier for competitors to get on board?
And finally, there is what we in universities and colleges have a special
care about - what you might call the knowledge ecology, or how ideas
are developed, tested, represented and shared. In relation to our care of
students, of course we must care about issues such as deepfakes, the
flooding of social media with disinformation, and what Cory Doctorow
calls the enshittification of the internet at large. But we must have a
special concern for how synthetic text and data will insert themselves
into the research, teaching and learning practices that we, in the sector,
rely on. Research is difficult, and we are always under pressure to do it
faster and more efficiently. But what if difficulty is sometimes, actually, the
point? To discover something that isn't in the written record and isn't in a
model - which is to say, a summary of previous research - either?
Teaching in ways that are adaptive to studentsā€™ needs takes time and skill
and personal attention. But what if that time and attention is actually what
they need? This quote comes from Luke Munn's recent article on
evaluating AI on indigenous Māori principles - and thanks to Paul
Prinsloo for pointing me to this piece. He points out that generative AI
doesn't just categorise us, it requires us to think in its categories, and
those categories may not be what we need to imagine alternative futures,
and discover alternative realities. The speed and efficiency of
So I want to talk briefly now about what a relational ethical response to
these developments might look like. And one solution I think we should
resist is to reframe everything we teach and assess around a definition of
human skills in relation to what is called 'artificial intelligence'. We don't
need people who can work in collaboration with artificial intelligences -
that concedes agency to what are simply systems for coordinating our
own and other people's work. It's delusional. We shouldn't accept that
what hype and computing power and the concentration of capital can
produce today should define what it is valuable and useful for graduates
to do tomorrow. These systems are brittle, unreliable, contentious,
inequitable, a legal minefield. Big companies are already investing less in
them than they were a year ago. It is not inevitable that they will dominate
the workplace, and maybe we should be sharing with students that there
are choices and there are doubts.
When it comes to working with students, I fully agree that we want
students to be critical, but not only about the outcomes of these models
and not only in relation to their own work. We want them to be asking
questions that go wider than that, depending of course on the focus of
the subjects they are studying. The questions may look different in
engineering, in history, and in nursing. But I think these are questions that
young people are already asking. These are not moralising questions about
abstractions such as originality and academic integrity. They are very
concrete questions about technology and learning, questions that they have a
stake in.
Finally, I think we need to be creating an ecosystem in which ethical
choices are actually available, and the time and resources for people to
think, consider, ask questions, negotiate understandings. The new EU
regulations on AI are actually rather good at defining different kinds of
ethical actors in the AI space. Mostly they have let the big, general
models off the hook, for reasons we can speculate about. But, that aside,
they are not at all interested in end users. The responsibility for
providing an ethical environment in which systems are deployed lies
mainly with the organisations providing the systems, in our case with
universities and colleges, and their organisations and regulatory bodies.
The EU classifies the use of all AI systems in education as high risk. It
requires all of these things... Now, do we really believe that any of the
models we are using in further and higher education could meet these
requirements? And if not, how do we get there? I don't see how we can
do that without, as a sector, deciding to build and maintain our own
models.
It will be challenging. This chart shows the huge brain drain there has
been from academic AI to the commercial sector. Although small, open
models are now being built that can run on a laptop, to serve our sector a
lot of computing power will undoubtedly be needed. We will have to
relate to the large scale commercial models at some level. But by having
a collective voice, the sector could negotiate that relationship more
effectively for all of us. We can only do this, therefore, collaboratively and
openly. Otherwise this will just be another source of inequity and
stratification across universities and colleges and their members. Wealthy
businesses like Bloomberg are already doing this. I have no doubt
wealthy universities and research institutes are doing it. But we need to
start joining up.
Collectively, universities and colleges are key ethical actors. Perhaps
uniquely as a sector we still have the know-how. We have reasons to
collaborate - we are not really a market, however many governments try
to make it so. We have a very particular stake in knowledge, knowledge
production, and values around knowledge. And we do build open
knowledge projects - Wikipedia rests very heavily on the work of students
and academics. We have contributed extensively to open standards
since the birth of the internet. We have thriving open source and open
education communities. But will the sector act collectively, in this case?
I want to leave you with the thought that we do all have a voice in this.
This audience is one of the most influential when it comes to how the
sector responds, collectively. I started with a black box that couldn't be
opened, I want to finish with a box that perhaps shouldn't have been
opened. When Pandora's box was opened, all kinds of troubles came
out. But at the bottom there was still hope. And I have great hope in the
responses being made by members of the ALT community to the
challenges of these new technologies, founded on our FELT values. And
I look forward to hearing about more of them today.

  • 3. I think this also shows how wrong it is to go looking for ethics inside a black box. Even if we knew the cat was alive, would we know what kind of ethical cat it was? Would we like its ethics? Is a cat, or a piece of code, really an ethical actor? The cat as code, if it exists, clearly belongs to the black box model. And the black boxes belong to some of the most powerful corporations on the planet. On the right you can see some of the current language models and interfaces - you can always try matching them to their corporate owners in the chat window. There are a few wild cards. On the left are the organisations that are building and hoping to profit from them. What can we expect from these corporations in terms of an ethical code? Their track record is not good. The major player here eliminated its entire AI Ethics team earlier this year as it deepened its partnership with its ā€˜not for profitā€™ partner. The second player sacked ethics advisers Timnit Gebru and Margaret Mitchell when they raised issues of bias and safety. XAI, established earlier this year ā€˜to understand the true nature of the universeā€™, learns that true nature by scraping X for content - the X from which the entire ethics team was also sacked when its current proprietor took it over. Lobbyists from these corporations have been busy in Washington and Brussels watering down legislation that might provide some external scrutiny of their designs and business models. Even if it was a good idea to put an ethical cat in a box, the cat does not seem to be very well.
  • 4. So if we can't rely on embedded ethical code, how about the actors at the very opposite end of the AI stack, the end users, all of us? Well, the Russell Group principles respect our personal agency and encourage us to develop AI literacy - and as an educator of course I approve of this. But to have agency you must have information, you must have the time and opportunity to reflect, and you must have real choices. This is problematic when the new capabilities are being integrated so rapidly into core platforms such as VLEs, grading environments, search engines and productivity tools. Not to mention all the thousands of apps and APIs that are springing up between the models and the users, offering everything from lesson plans to off-the-peg assignments, as diverse actors seek a piece of the profitable pie. The EU guidelines for educators are more circumspect about the issue of agency. They don't demand that we are fully formed ethical agents in these novel situations, but that educators are able to ask questions, to engage in dialogue with providers, to make demands on responsible public bodies. Well, hang on there. Which responsible public bodies? Universities? Colleges? Governments? What ethical rules have they come up with, in the last year? What agency do they have to enforce them, on our behalf? We will come back to the question of who is ethically responsible at the end, because there is no point me banging on about harms if it leaves all of us feeling helpless or demoralised. If we are not, individually, very empowered, we are certainly able to ask our universities and colleges and the sector overall to provide us with a better environment for ethical action.
  • 5. What we see in the ALT framework, which I really like, are signs of something I would call a relational approach to ethics. There is no fixed code here. There are thoughts and explanations. We are asked to look beyond our immediate context as users and recognise broader responsibilities.
  • 6. There are a great many things on the checklist of ethical concerns about generative AI. Bias, privacy concerns, environmental issues, inequity, disinformation, surveillance, copyright theft. These can seem random, overwhelming, and disconnected. But if we take a relational approach, I think we can better understand where the risks and harms arise. What is a relational approach? It looks at how technology can reframe relationships. It seeks to understand contexts and ecosystems rather than focusing on individual users. It asks questions rather than ticking boxes. I have taken these points from UNICEF and the Centre for Techno-moral Futures, but you can find references to relational ethics in healthcare, law and other professions if you search online.
  • 7. Perhaps the most important feature of relationality, to me as a feminist and anti-racist, is that I should recognise my own position. There is no ethics from nowhere. This is precisely what AI in general, and synthetic models in particular, propose. They offer a view from nowhere, a completely unaccountable account of how the world is. They sound plausible, but they have no real stake in the words or images they produce. "I'm sorry, that isn't what I meant," they can always say. Again, and again. So here is my own position story, which explains something of why I'm here. At 17 I turned down a place to study philosophy and psychology at Oxford. I went to Sussex University to study AI and cognitive science instead. It seemed to me a more plausible and certainly a more contemporary and sexier promise of knowledge about the mind. By 19 I had parted company with that degree, in a way that does not reflect very well on my quest for knowledge. But I do know that from that time I was and have remained convinced that the claims AI was making about minds and 'intelligence' did not stack up. I was, you may conclude, a very odd teenager - AI is full of odd teenagers - and I say that as someone who remains good friends with people in the academic AI community. I kept my thoughts about AI to myself when I went on to work in education, because it seemed to me that AI would always be the victim of its own hype cycles, that it would never attract much attention or credibility from the education community. And yet here we are. So, where am I speaking from, besides my own weird preoccupations? I am a researcher, particularly in digital literacy, or how we become agents - ethical agents - in relation to digital technologies. I am a teacher, some of the time. I have a stake in the experiences and practices of students. But, while I may be a marginal voice on this issue, I have a voice. I have that privilege. I'm white and well educated. The boundaries I find myself
  • 8. on the wrong side of are not going to mean my credit is stopped, or members of my family locked up, or I am going to be denied treatment or a border crossing. All these can happen to people who find themselves on the wrong side of categories that are determined by the use of AI.
  • 9. So these issues of power, privilege and position open out into a question: whose AI is this? There is no definition of artificial intelligence that is not also a definition of intelligence itself: who has it, and whose intelligence matters. There is no 'the human mind', obviously, when you think about it. There are only human body-minds in specific cultures, societies and systems of thinking. The abstractions required to define 'AI' serve particular positionalities and purposes. 'Artificial intelligence' is and always has been a project. It is a project in computer science, about what kind of models can be built with enough power, speed and scale. It is a project in big tech, about how data power can shape platforms, interfaces, and operating systems, and therefore work and workplaces. It is becoming a project in education. It is a project to elevate some things that some human beings do, and neglect others as undeserving and unimportant.
  • 10. The idea that intelligence should be defined by the people in charge of the machines - this has been with us for a long time. The 1956 Dartmouth Conference that launched the term 'AI' took place in a particular culture: in the US, at the peak of its economic and global hegemony. Intelligence had won the war, thanks in part to the code-breaking Colossus computer. Intelligence testing was being used to shape the school curriculum in the US, and in the UK - both countries in which education was being integrated for the first time. The men who coined the term AI were sure they knew what intelligence meant. It meant global power. It meant playing chess and solving mathematical problems. Or as Marvin Minsky put it a couple of decades later, it meant "the ability to solve hard problems" (Minsky, 1985a). Of course it takes a certain kind of man to know what's hard - but as it turned out the simple problems like vision and natural language were much, much harder to solve with computation than the hard ones, like playing chess.
  • 11. But the use of intelligence to divide people up, and assign them to different categories, especially to different categories of work, this goes all the way back through eugenics, intelligence testing, all the way back to Charles Babbage, who in his time was not celebrated for his failed difference engine, but for his much more successful work on making the factory and plantation systems more efficient. Intelligence was taken from the weaver and assigned to the punch card system and the factory overseer. Plantations were also managed like machines, no worker having oversight of the whole process. In exactly the same way, the difference engine broke down the work of calculating into discrete operations, so that the simplest could be done mechanically. If it had been successfully made and used, the first people put out of work would have been the women and children who did these basic calculations, to produce the nautical almanacs that were essential to the colonial trade. These workers were called 'calculators', just as the women workers on the Colossus at Bletchley Park were called 'computers'. It's never the most powerful people whose work can be taken over by a machine.
  • 12. In the present day, writers like Meredith Whittaker, Simone Browne, Edward Ongweso Jr, Joy Buolamwini of the Algorithmic Justice League and many more are exposing the consequences of 'AI' for people who fall into the wrong categories - whether it's facial recognition in policing (this diagram is from the Gender Shades project) or AI surveillance of borders and conflict zones.
  • 13. Talking of conflict zones, DARPA is the US military agency that has funded AI since the early 60s. Much of this work has gone into developing autonomous weapons, and supporting battlefield decisions. I have removed the image originally included under this recent headline, about DARPA funding a multi-million dollar project on battlefield AI. I've removed it because in September this year, when the announcement was published, an image of drones dropping bombs on their AI-designated targets did not have the same capacity to distress as it would today. I did not feel we would want to be confronted with that image. But I think it is important to remember that the project of AI has always been one of projecting global power.
  • 14. You may think the link to military AI is a bit unfair. After all, general technologies can be used in many different ways. DARPA funded the development of speech recognition technologies that are now used to support accessibility, for example. But the scale of military funding of AI, at least until about 2014, was what kept the project alive. Inevitably it skewed what that project was interested in. And it means that for many companies looking to sell systems to education today, those systems have their roots in military and surveillance applications. Looking into the 'Safer AI' summit a few weeks ago, I found myself wondering why the task of 'unlocking the future of education' had been given by the DfE to an AI company called Faculty. Some of you may remember Faculty from its role in the Vote Leave campaign and in helping to run logistics for Number Ten during the Covid crisis. At the end of a blog post summarising Faculty's involvement with the future of AI in education was a link inviting education leaders to 'connect with Faculty about your AI strategy'. Well, I clicked the link. I'm like that. And it goes straight to this page of services to the military and law enforcement agencies. I would call this a smoking gun if it wasn't such a militaristic metaphor. But clearly the same services that have been developed for law enforcement are being sold directly to schools, with the endorsement of the DfE. This is kind of unavoidable in an industry that only survived thanks to decades of military funding.
  • 15. I want to focus for the rest of my talk on what is called generative AI, though from a statistical and computational point of view this is an entirely different technology to the ones being developed in the 1950s and the ones underpinning most of the surveillance technologies I just described. I prefer to call this new technology synthetic media. This is my definition (on the slide), but Emily Bender also called generative AI 'synthetic media machines', and Ted Chiang, another AI insider turned critic, is content with 'applied statistics'. This is Naomi Klein's more politically positional definition:
  • 16. So it isn't quite as simple as 'seizing'. This is where I want to talk more particularly about how relationships are being reframed through these technologies. So… For example, LLaMA-2 was trained using more than 1 million human annotations. The model in this diagram, taken from the open source DeciLM project, is retuned with new data every week. I am really grateful to this project for its somewhat untypical openness about the training process. What you see at each of these four points is human work, human knowledge work being done. But only one kind of work - the work of the model engineers - is acknowledged and rewarded. The rest are forms of 'intelligence', if you like, that the 'intelligent' system does not want to acknowledge or pay very much for. Like the human chess player hiding in the Mechanical Turk, we are not supposed to see these people if we want the magic to happen. Now, in practice, most of the data workers who make up the third part, who are usually referred to as the 'data engine', are workers in the global south, where they are paid around $1-3 an hour, depending on the kinds of data enrichment they are doing. This work is precarious and stressful. Workers in Kenya, for example, are suing for the trauma they suffered labelling violent and harmful images so the model's producers could claim that it was safe for users. Also, nobody paid for the original training data, which actually makes up the data model, and which, in being re-synthesised, now threatens the livelihoods of creative and productive workers - and in fact the whole economy of creative and scholarly production that all our own livelihoods here depend on in the long term. And finally there are all of us - every time we interact with the model we are contributing to future training with our own creative ideas, our prompts, our materials. Who benefits from this? In the short term, perhaps we are made a bit more productive. Productivity is always enjoyable when it is
  • 17. fresh out of the box. But who benefits when these become the new global operating systems for all knowledge? Unlike the internet, which for all its flaws is an open, distributed, standards-based architecture, these are closed systems. And even as closed systems, they are not closed like an organisation is closed, with all its explicit and implicit know-how distributed throughout its resources and its technologies and its employees. Through these new relationships of data and labour, valued knowledge is entirely captured and managed in one system, a system that can be owned by someone else. Once again, the idea of intelligence is being used to divide and classify labour, in order to deskill and devalue it. It's still human beings, doing human work. This is not some kind of magic box, it is just good old-fashioned Taylorist - or perhaps Babbage-ist - division of labour. (A deliberately rough sketch of where this human work enters a model's training loop follows below.)
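To make the shape of that pipeline a little more concrete, here is a deliberately toy sketch in Python. None of the names or functions below come from LLaMA-2 or DeciLM - we cannot see inside those systems - they are invented purely for illustration, to mark the four points at which human work enters the loop and to show which of them is treated as 'intelligence' worth paying for.

import random

def scraped_corpus():
    # (1) Unpaid and uncredited: text and images scraped from millions of authors.
    return ["web text", "digitised books", "forum posts"]

class ToyModel:
    # (2) Paid and credited: model engineers design the architecture and training runs.
    def __init__(self):
        self.memory = []

    def train(self, corpus):
        self.memory.extend(corpus)

    def generate(self, prompt):
        # stands in for probabilistic text synthesis
        return prompt + " -> " + random.choice(self.memory)

    def tune(self, ranked_outputs):
        # human preference judgements become the training signal
        self.memory.extend(out for out, score in ranked_outputs if score > 0)

def annotate(outputs):
    # (3) Low-paid: the 'data engine' ranks outputs and filters harmful material.
    return [(out, random.choice([-1, 1])) for out in outputs]

model = ToyModel()
model.train(scraped_corpus())

for week in range(3):  # models of this kind are retuned on a rolling basis
    prompts = ["a user prompt"]  # (4) Unpaid again: our prompts and materials feed future tuning
    outputs = [model.generate(p) for p in prompts]
    model.tune(annotate(outputs))

Only step (2) shows up in the story the industry tells about itself; steps (1), (3) and (4) are treated as free, or nearly free, raw material.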
  • 18. We need to think about where our students fit in all this. All the guidelines I've seen imagine students as end users. So, the worst we can say about that is they will have to be a lot more productive. Because if your employer is expecting 250 ChatGPT articles a week, as some recent job advertisements have asked for, or expects you to code in half the time it used to take, with GitHub Copilot, you are not going to be paid more, you are just going to be given less time. Perhaps a very few students will be highly paid system designers. But increasing numbers of them will end up as part of the data engine, perhaps working for one of the burgeoning annotation companies, or perhaps working inside an organisation, tuning its data model so other workers can continue to ramp up their productivity. The International Labour Organisation highlights that people under 30 are far more likely to be employed in platform data work than older workers. The EU estimated five years ago that 10% of students had worked in the gig economy - that number is surely higher now. Our students are implicated in the data engine at every level. We can't just think of them as consumers of its products.
  • 19. But as educators, we also care about how students are being addressed as consumers. Type the words 'writing' and 'AI' into any search engine - while you still can - and you will find hundreds of promoted websites, selling services that promise to take away the drudgery of reading and writing and give students back their time. And we should not be moralising about this - as teachers and researchers we are being sold exactly the same promise, and we are lapping it up. Take away the drudgery, focus on what really matters. Except, what if 'what really matters' sometimes is the hard work of reading and writing? What if you can't 'humanise' your text, your images, or your code just by clicking a button, but you can develop as a human being by engaging in those activities for yourself? In fact the evidence points entirely the opposite way to the promise. More automation actually makes work routines more standardised and more boring for the people still left to do them. In the case of learning in particular, it is not easy to know what this enhanced productivity is buying you, unless it is more time to earn the money you need to pay for your learning - perhaps with your side hustle in the data economy. Isn't time to read, write, learn and think precisely what education is supposed to be buying you?
  • 20. There are inequalities baked into the models themselves, as I have argued, but they are also baked into the commercialisation of the models, with paid versions rapidly overtaking free models in terms of their performance. A recent study, which I found on the Institute for Student Employers website (though it was actually carried out by another organisation), showed that results on standard graduate recruitment tests are now skewed in favour of applicants who can pay for premium models. And that is only the applicants from the wealthiest households. A small levelling-up effect for neurodiverse applicants, for example, was dwarfed by this financial inequity. So the ISE's conclusion is that this will 'set social mobility efforts back years'. It seems likely that recruitment will centre on live tasks, interviews and team activities with no access to generative models. So we do students a disservice if we don't expose them to these same conditions in their studies and assessments. It would be strange if universities were falling behind graduate recruiters on such a key issue of equity. Major companies won't be relying on generic models. They won't want generic prompt engineers. They will want critical thinkers who can express their ideas and work in teams, and if they use in-house models for productivity reasons, they will train their people to use them. That's my prediction anyway.
  • 21. So there are inequities in the making of the models, inequities in the using of the models, and it's now well known that the data the models are built on has all kinds of bias built in. This data reflects views from the distant past, views from the fringes of the internet, and above all it predominantly reflects the views of white, English-speaking men whose ideas have made it into the digital record. I mainly know about language models, but when it comes to bias the image models are just much more vivid. So again, this is research done by Bloomberg, because the big companies really want to know how these models are impacting on their ability to attract the best talent. They are putting equity to the fore. They generated thousands of images using the names of occupations, and they found that the skin colour of the people depicted in those images tracked the typical pay of the occupations. I don't want to show generated images and risk perpetuating those ideas, but I find this a striking image from their research.
  • 22. The same study found similar issues for gender and pay, though obviously gender is a very contested issue when it comes to how digital images are gendered by viewers. I am not commenting on Bloomberg's specific process here, only that clearly the generated images were stereotyping occupations along conventional gender lines.
  • 23. And if it were not enough that these models perpetuate some of the most violent, unjust ideas from our own past, they are also a threat to our planetary future. There are of course bigger polluters than big tech, but as these statistics show, the massive computing power required to run models, both in training and for every inference run, is non-negligible and is growing every year. There is a massive demand on water to cool the data centres where thousands of Nvidia chips are running these models. We are told the industry wants to develop less power- and water-hungry systems. But at the moment, a shortage of chips is the only thing holding back the development of even more powerful models. And computing power is the reason why the big tech companies have gained market dominance so early on. The first truly marketable products from the whole AI project have come from throwing power and data at it. This is a winner-takes-all market, and power wins. Why would the winners make it any easier for competitors to get on board? (Some back-of-envelope arithmetic on what those inference costs add up to follows below.)
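To see why 'non-negligible for every inference run' adds up, here is some back-of-envelope arithmetic. The figures are placeholder assumptions chosen only to show the scale of the multiplication, not measurements of any particular model or provider.

# Placeholder assumptions, not measured figures for any specific system.
wh_per_query = 3.0              # assume roughly 3 Wh of electricity per generated response
queries_per_day = 100_000_000   # assume a popular service answers 100 million queries a day

kwh_per_year = wh_per_query * queries_per_day * 365 / 1000
print(f"{kwh_per_year:,.0f} kWh per year")  # about 110 million kWh, for inference alone

# At a rough grid average of 0.4 kg of CO2 per kWh, that is on the order of
# 40,000 tonnes of CO2 a year - before counting training runs or cooling water.

Change the assumptions and the totals move, but the point stands: per-query costs that look trivial become sector-scale energy demands once the models are embedded in everyday tools.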
  • 24. And finally, there is what we in universities and colleges have a special care about - what you might call the knowledge ecology, or how ideas are developed, tested, represented and shared. In relation to our care of students, of course we must care about issues such as deepfakes, the flooding of social media with disinformation, and what Cory Doctorow calls the enshittification of the internet at large. But we must have a special concern for how synthetic text and data will insert themselves into the research, teaching and learning practices that we, in the sector, rely on. Research is difficult, and we are always under pressure to do it faster and more efficiently. But what if difficulty is sometimes, actually, the point? To discover something that isn't in the written record and isn't in a model - which is to say, a summary of previous research - either? Teaching in ways that are adaptive to students' needs takes time and skill and personal attention. But what if that time and attention is actually what they need? This quote comes from Luke Munn's recent article on evaluating AI against indigenous Māori principles - and thanks to Paul Prinsloo for pointing me to this piece. He points out that generative AI doesn't just categorise us, it requires us to think in its categories, and those categories may not be what we need to imagine alternative futures, and discover alternative realities. The speed and efficiency of
  • 25. So I want to talk briefly now about what a relational ethical response to these developments might look like. And one solution I think we should resist is to reframe everything we teach and assess around a definition of human skills in relation to what is called 'artificial intelligence'. We don't need people who can work in collaboration with artificial intelligences - that concedes agency to what are simply systems for coordinating our own and other people's work. It's delusional. We shouldn't accept that what hype and computing power and the concentration of capital can produce today should define what it is valuable and useful for graduates to do tomorrow. These systems are brittle, unreliable, contentious, inequitable, a legal minefield. Big companies are already investing less in them than they were a year ago. It is not inevitable that they will dominate the workplace, and maybe we should be sharing with students that there are choices and there are doubts.
  • 26. When it comes to working with students, I fully agree that we want students to be critical, but not only about the outcomes of these models and not only in relation to their own work. We want them to be asking questions that go wider than that, depending of course on the focus of the subjects they are studying. The questions may look different in engineering, in history, and in nursing. But I think these are questions that young people are already asking. They are not moralising about abstract things such as originality and academic integrity; they are asking very concrete questions about technology and learning that they have a stake in.
  • 27. Finally, I think we need to be creating an ecosystem in which ethical choices are actually available, and the time and resources for people to think, consider, ask questions, negotiate understandings. The new EU regulations on AI are actually rather good at defining different kinds of ethical actors in the AI space. Mostly they have let the big, general models off the hook, for reasons we can speculate about. But despite that, they are not at all interested in end users. The responsibility for providing an ethical environment in which systems are deployed lies mainly with the organisations providing the systems, in our case with universities and colleges, and their organisations and regulatory bodies. The EU classifies the use of all AI systems in education as high risk. It requires all of these things… Now, do we really believe that any of the models we are using in further and higher education could meet these requirements? And if not, how do we get there? I don't see how we can do that without, as a sector, deciding to build and maintain our own models.
  • 28. It will be challenging. This chart shows the huge brain drain there has been from academic AI to the commercial sector. Although small, open models are now being built that can run on a laptop, a lot of computing power will undoubtedly be needed to serve our sector. We will have to relate to the large-scale commercial models at some level. But by having a collective voice, the sector could negotiate that relationship more effectively for all of us. We can only do this, therefore, collaboratively and openly. Otherwise this will just be another source of inequity and stratification across universities and colleges and their members. Wealthy businesses like Bloomberg are already doing this. I have no doubt wealthy universities and research institutes are doing it. But we need to start joining up. Collectively, universities and colleges are key ethical actors. Perhaps uniquely as a sector, we still have the know-how. We have reasons to collaborate - we are not really a market, however many governments try to make it so. We have a very particular stake in knowledge, knowledge production, and values around knowledge. And we do build open knowledge projects - Wikipedia rests very heavily on the work of students and academics. We have contributed extensively to open standards since the birth of the internet. We have thriving open source and open education communities. But will the sector act collectively, in this case? (A minimal example of running a small open model locally follows below.)
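For anyone who wants to see what 'a small, open model on a laptop' means in practice, here is a minimal sketch. It assumes the Hugging Face transformers library with a local backend such as PyTorch installed; distilgpt2 is simply a convenient small example model, not one of the sector-built models I am arguing for, and the quality of its output is beside the point - the point is that running and inspecting open-weight models locally is already practical.

# Minimal local text generation with a small open-weight model.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Universities could collectively build", max_new_tokens=30)
print(result[0]["generated_text"])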
  • 29. I want to leave you with the thought that we do all have a voice in this. This audience is one of the most influential when it comes to how the sector responds, collectively. I started with a black box that couldn't be opened; I want to finish with a box that perhaps shouldn't have been opened. When Pandora's box was opened, all kinds of troubles came out. But at the bottom there was still hope. And I have great hope in the responses being made by members of the ALT community to the challenges of these new technologies, founded on our FELT values. And I look forward to hearing about more of them today.