A talk implies listening, and listening is linear. That is just one of the flaws of algorithms: linearity without complex dialogue.
Facebook curates news based on algorithmic choices (e.g. based on likes of friends)
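The like-based curation mentioned above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual ranking system; the function name, stories, and like counts are invented for the example.

```python
# Hypothetical sketch of like-based news curation: rank stories by how
# many of a user's friends liked them. Data and names are invented.

def curate_feed(stories, friend_likes):
    """Order stories by friend-like count, most-liked first."""
    return sorted(stories, key=lambda s: -friend_likes.get(s, 0))

feed = curate_feed(
    ["local news", "sports", "politics"],
    {"sports": 12, "politics": 3},
)
# Stories no friend liked sink to the bottom: the seed of a filter bubble.
```

Even this toy version shows the mechanism: what your friends already like is what you get shown next.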
Mostly young white males (yes, it matters). Is Artificial Intelligence a reverse-engineered brain?
Algorithms can be found in every piece of software: Turnitin, Twitter, Big Data from and for MOOCs, browser searches, online sales… They are pervasive and are now used in practically all programmed applications.
We are in part algorithms, due to the information we absorb coming from the internet and our apps.
Easy to prove: in online learning, English is used most frequently (simplified English, with English jargon added). Written texts are the most frequently used form of content delivery, and certainly of assessment. This leaves out societies whose languages are not written, and learn-by-mimicking behaviour. Do we ethically support the use of English and written assessments as proof of learning?
A product is always a mirror of its creator. See community bonding: https://medium.com/@joaomilho/fixing-the-filter-bubble-e360a2c9bfdc
A black box is a system with inputs and outputs, but whose internal workings are hidden from the public.
The IQ trap all over again, implying a simplistic reverse-engineering option for the brain.
AI is just one piece of information that fits with current technological and dominant power.
People make up humanity through diversity. No human is the same, nor are their brains.
Deep learning builds a “neural network”, loosely modelled on the human brain. This is composed of hundreds of thousands of neurons organised in different layers.
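The layered structure described above can be sketched with a toy forward pass. This is a minimal illustration with invented weights and only a handful of neurons, not hundreds of thousands; it shows only the layer-by-layer flow, not how a network is trained.

```python
import math

# Toy sketch of a layered "neural network" forward pass.
# Each layer is a list of neurons; each neuron is a row of weights.
# Weights and sizes here are invented for illustration.

def forward(x, layers):
    """Pass input x through each layer: weighted sum, then sigmoid."""
    for weights in layers:
        x = [1 / (1 + math.exp(-sum(w * v for w, v in zip(row, x))))
             for row in weights]  # one output value per neuron (row)
    return x

layers = [
    [[0.5, -0.2], [0.1, 0.4]],   # layer 1: 2 neurons, 2 inputs each
    [[0.3, 0.7]],                # layer 2: 1 output neuron
]
output = forward([1.0, 0.5], layers)
```

Real deep-learning frameworks stack many such layers and adjust the weights automatically from data, which is where the "learning" happens.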
Only a small part of AI in education takes into account the anguish/emotion/state of being of the student or adult learner. This is called affective computing, which combines computer science, psychology and cognitive science, using signals such as speech and facial expression.
Why are these teachers ‘good’?
Who cares about MOOC drop-outs? It just means the learner had better things to do. Focus on intrinsic interest and motivation to capture the learner, no matter where (informal and formal).
But morals are not ethics. Morality is a personal compass for right and wrong; it is therefore an internal, individual process, unlike ethics, which can be expressed as a set of external rules.
Ethics is about constructing or strengthening ideologies. It does not support one over the other, it just provides arguments for any ideological decision.
artificial intelligence - in need of an ethical layer?
AI in Education: in need of ethics?
Inge de Waard (at gmail dot com)
Discuss and compare ideas on implementing an
ethical layer within Artificial Intelligence for
education.
Artificial Intelligence: mathematical models that
enable communication, enhanced decision
making, semantic reasoning, responding and
learning between machines and humans.
Algorithms: a process or set of rules to be followed
in calculations or other problem-solving
operations. Algorithms are coded into software.
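The definition above can be made concrete with a classic example: Euclid's procedure for the greatest common divisor, a fixed set of rules coded into software.

```python
# A classic algorithm: Euclid's method for the greatest common divisor.
# A fixed, repeatable set of rules that a machine follows step by step.

def gcd(a, b):
    while b:                 # repeat until the remainder is zero
        a, b = b, a % b      # replace (a, b) with (b, remainder)
    return a

print(gcd(48, 18))  # → 6
```

The same idea (precise steps, no judgment) scales up to the curation and ranking algorithms discussed throughout this poster.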
Algorithms are all around us. We are a
product of the algorithms that surround us.
Algorithms enter our homes, work, schools,
institutes, habits… but in most cases they are
non-transparent. Being non-transparent results in
unexpected outcomes: filter bubbles, the
‘(un)professional hairstyle’ search results.
AI risks replicating the norm (filter bubbles prove
it). This is explained in part by the similar profiles
of the creators of these algorithms.
Frank Pasquale (law prof) argued, “authority is
increasingly expressed algorithmically.”
Audrey Watters (fab thinker, talking tomorrow)
wrote “Algorithms — their development and
implementation — are important expressions of
power and influence.”
Algocratic governance based on black boxes?
Information and software systems rule.
AI in formal education: semi-automated
assessments, gamification, learning analytics,
predictive analytics, scientific apps, automated
student assistants, identity confirmation …
AI in informal learning: browser searches, personal
apps, quantified self, learning locker based
learning, course suggestions …
In short, AI in Ed can go both ways, and all the
ways in between…
• Primary school assessment reveals a student who will never thrive in formal
learning but enthusiastically yells out poems => gets a one-on-one tutor for
language, and ultimately learns poetry.
• Pre-school reveals personal skills compatible with satisfaction through
skilled labor. Learning trajectory is provided, mentorship is arranged.
• Humans are enhanced with technology => post-human is evolving and
emotions are supported to lead to satisfied lives.
Personal learning paths, enhancing strengths and intrinsic motivation based
on enthusiasm and emotions and personal learning goals …
• AI looks only for those profiles that are deemed able to
contribute to society. The other humans are second-class citizens with
fewer opportunities. Emotions are screened for violent potential.
• AI evolves and regards humans as an inefficient species (based on
existing human-built algorithms encoding efficiency and moral codes
such as ‘peace must be achieved’). Humans are put into reservations to
protect them against themselves. AI develops into space exploring…
Transparency to learn what is happening with AI in
e.g. learning analytics and why => ethical rules.
Maybe it is just natural to reinforce the dominant
norm, and no ethics are needed? History has
mostly kept the words of those in power.
Does power always win, or do those win whom we
want to remember?
What would be part of your ethical layer, which
outcomes of AI in education would you like to see?