Talk for MK Geek Night, 23 Sep 2021
AI means more hype, more technology, more future - and more money! But what actually is it? In this talk, Doug will explain what people mean by artificial intelligence and machine learning, what sort of problems they can solve, and how they do it. We'll see a range of examples where they're being used, and look at how it goes well and how it goes wrong, from entertaining AI weirdness to serious algorithmic bias. You won't end up being able to implement techniques like Support Vector Machines or Generative Adversarial Networks (unless you already could) but you should end up with a better idea of what the people who can are up to.
World Usability Day, 2018
AI is becoming a greater part of the systems and products we design, yet algorithms have been shown time and time again to be imbued with unintentional racism, sexism, and other -isms. As the design and AI fields converge, how can researchers, designers, and developers work together to ensure that our powers are used for good, and not for accidental evil?
Using your data to influence your environment - Ian Forrester
With the internet of things all around us, it is now possible for your personal data to influence your environment. Soon, your personal data could be used to influence how a movie is shown to you! Let's talk about the implications and ethics of using data this way.
Gives a background on Data Science and Artificial Intelligence, to better understand the current state of the art (SOTA) for Large Language Models (LLMs) and Generative AI, then starts a discussion on the direction things are going in the future.
Designing AI for Humanity at dmi:Design Leadership Conference in Boston - Carol Smith
As design leaders we must enable our teams with the skills and knowledge to take on the new and exciting opportunities that building powerful AI systems brings. Dynamic systems require transparency regarding data provenance, bias, training methods, and more, to gain users' trust. Carol will cover these topics and challenge us as design leaders to represent our fellow humans by provoking conversations about critical ethical and safety needs.
Presented at dmi:Design Leadership Conference in Boston in October 2018.
TXO & Komfo - AI: The good, the bad, and the ugly of AI - Komfo
The good, the bad, and the ugly of AI - How do you meet the now frontier?
AI has already moved into many areas of business, society, and everyday life. But how do you embrace the now frontier in digital transformation and implement it in your organization? Grimur Fjeldsted, an expert in digital transformation and innovation management, is here to tackle the hard questions and give you a concrete view on the current AI landscape by uncovering all the key aspects.
Agenda:
- Stay ahead of the AI curve with insights into the current state, future outlook, and ethics of AI.
- Actionable tips on how to turn an allegedly disruptive technology into a major opportunity for business.
- Insights into big data and social data integration for business.
Google, IBM, Microsoft, Apple, Facebook, Baidu, Foxconn, and others have recently made multi-billion dollar investments in artificial intelligence and robotics. Some of these investments are aimed at increasing productivity and enhancing coordination and cooperation. Others are aimed at creating strategic gains in competitive interactions. This is creating "arms races" in high-frequency trading, cyber warfare, drone warfare, stealth technology, surveillance systems, and missile warfare. Recently, Stephen Hawking, Elon Musk, and others have issued strong cautionary statements about the safety of intelligent technologies. We describe the potentially antisocial "rational drives" of self-preservation, resource acquisition, replication, and self-improvement that uncontrolled autonomous systems naturally exhibit. We describe the "Safe-AI Scaffolding Strategy" for developing these systems with a high confidence of safety based on the insight that even superintelligences are constrained by mathematical proof and cryptographic complexity. It appears that we are at an inflection point in the development of intelligent technologies and that the choices we make today will have a dramatic impact on the future of humanity.
Video of the talk: https://www.parc.com/event/2127/ai-and-robotics-at-an-inflection-point.html
Military Flight Training - Digital Technology Disruption Ahead? - Andy Fawkes
A look at some of the latest digital technology trends and developments that will or may impact military flight training. Presented at the 7th SMi Annual Military Flight Training Conference, London, 10/11 October 2018.
Google 2.0 - More than just a search engine - Kyle Webb
Presentation at WestCAST 2011 by Kyle Webb, Cassie Eskra, and Nicole VanCaeseele.
In education, Google has traditionally been seen as simply a search engine. However, in recent years, Google has been developing their own resources that present many new learning opportunities in the classroom. Google Earth, Google SketchUp, and WolframAlpha will be presented as well as examples of how they can be used. We will focus primarily on their applications to math education, but also explore their potential for cross-curricular use.
In this session we'll dive into the journey that Google chose to take in order to focus on AI: what was the mindset, what were the challenges, and what is the direction for the future.
On March 26, 2015 Steve Omohundro gave a talk in the IBM Research 2015 Distinguished Speaker Series at the Accelerated Discovery Lab, IBM Research, Almaden.
Google, IBM, Microsoft, Apple, Facebook, Baidu, Foxconn, and others have recently made multi-billion dollar investments in artificial intelligence and robotics. Some of these investments are aimed at increasing productivity and enhancing coordination and cooperation. Others are aimed at creating strategic gains in competitive interactions. This is creating “arms races” in high-frequency trading, cyber warfare, drone warfare, stealth technology, surveillance systems, and missile warfare. Recently, Stephen Hawking, Elon Musk, and others have issued strong cautionary statements about the safety of intelligent technologies. We describe the potentially antisocial “rational drives” of self-preservation, resource acquisition, replication, and self-improvement that uncontrolled autonomous systems naturally exhibit. We describe the “Safe-AI Scaffolding Strategy” for developing these systems with a high confidence of safety based on the insight that even superintelligences are constrained by the laws of physics, mathematical proof, and cryptographic complexity. “Smart contracts” are a promising decentralized cryptographic technology used in Ethereum and other second-generation cryptocurrencies. They can express economic, legal, and political rules and will be a key component in governing autonomous technologies. If we are able to meet the challenges, AI and robotics have the potential to dramatically improve every aspect of human life.
Semantics, Deep Learning, and the Transformation of Business - Steve Omohundro
Deep learning is likely to have a big impact on business. McKinsey predicts that AI and robotics will create $50 trillion of value over the next 10 years. Over $1 billion of venture investment has gone to 250 deep learning startups over the past year. Deep learning systems have recently broken records in speech recognition, image recognition, image captioning, translation, drug discovery and other tasks. Why is this happening now and how is it likely to play out? We review the development of AI and the pendulum swings between the "neats" and the "scruffies". We describe traditional approaches to semantics through logics and grammars and the new deep learning vector semantics. We relate it to Roger Shepard's cognitive geometry and the structure of biological networks. We also describe limitations of deep learning for safety and regulation. We show how it fits into the rational agent framework and discuss what the next steps may be.
Dissecting the dangers of deepfakes and their impact on reputation Generative... - CSIRO National AI Centre
At the recent Generative AI Conference, this talk defined deepfakes and the widespread damage misinformation can cause, in order to build awareness of the ethical implications of deepfakes. At the National AI Centre, the Responsible AI Network allows us to put into action a way of using AI that is aligned with Australia's AI ethics principles.
Moving Forward with Digital Disruption: A Right Mindset - Bohyun Kim
A keynote presented at the MentorNJ In-Person Networking Event 2018, organized by LibraryLinkNJ - The New Jersey Library Cooperative, held at Monroe Township, NJ, on October 5, 2018.
http://librarylinknj.org/MentorNJ/programs/networking-event-2018
Artificial Intelligence in testing - A STeP-IN Evening Talk Session Speech by... - Kalilur Rahman
AI is the new ELECTRICITY - said Andrew Ng. There are two sides to the coin, and there are a lot of naysayers for AI. At the end of the day, it will be Augmented Intelligence, Adaptive Intelligence, and Automated Intelligence that propel human intelligence forward - more than anything else. It will be a great time ahead. Whether it turns out to be an "Eye(AI) Wash" as skeptics say, or an "I wish" from them for starting late on the journey, only time will tell. It is a matter of when and how long, not if. #ArtificialIntelligence #IntelligentTesting #QCoE #NextGenTesting #QualityFocusedDelivery #DigitalInnovation #ITIndustry #NewAgeIT #InnovativeTesting #AIFication #Automation #DigitalEconomy #Singularity #Transcendence #Futurism
Trusting machines with robust, unbiased and reproducible AI - Margriet Groenendijk
To trust a decision made by an algorithm, we need to know that it is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. We need to understand the rationale behind the algorithmic assessment, recommendation or outcome, and be able to interact with it, probe it – even ask questions. And we need assurance that the values and norms of our societies are also reflected in those outcomes.
Learn about how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source, to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors who are working to build a digital future that is inclusive and fair. Learn how to achieve AI fairness, robustness, explainability and accountability. You can become part of the solution.
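The talk above points to toolkits that detect and remove bias in machine learning models. As a minimal, self-contained illustration of the idea (not any specific toolkit's API, and using made-up toy data), here is one of the simplest group-fairness checks, the demographic parity difference:

```python
# Demographic parity: a model's rate of positive predictions should be
# similar across groups. A large gap between groups is one simple signal
# of potential bias. Real toolkits implement this and many richer metrics.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Difference in positive-prediction rate between groups "a" and "b".

    y_pred: list of 0/1 model predictions
    groups: list of group labels ("a" or "b"), aligned with y_pred
    """
    rate_a = positive_rate([p for p, g in zip(y_pred, groups) if g == "a"])
    rate_b = positive_rate([p for p, g in zip(y_pred, groups) if g == "b"])
    return rate_a - rate_b

# Toy example: the model approves 3/4 of group "a" but only 1/4 of group "b".
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)
print(gap)  # 0.5 -> a large gap, flagging potential bias
```

A value near zero suggests parity; mitigation tools then adjust the data, the training, or the predictions to shrink the gap.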
APIdays Paris 2018 - Bots on the 'Net: The Good, the Bad, and the Future, Mik... - apidays
Bots on the 'Net: The Good, the Bad, and the Future
Mike Amundsen, Director of API Architecture, API Academy
Apply to be a speaker here - https://apidays.typeform.com/to/J1snsg
[DSC Europe 23] Shahab Anbarjafari - Generative AI: Impact of Responsible AI - DataScienceConferenc1
Today, we embark on a journey into the realm of Generative AI (Gen AI), a force of innovation and possibility. We'll not only unveil the vast opportunities it offers but also confront the ethical challenges it poses. In the spirit of responsible innovation, we'll then dive deep into Responsible AI, illuminating the path to its implementation in this era of Gen AI. Join us for a profound exploration of this technological frontier, where our commitment to responsibility and foresight shapes the future.
How to Prevent & Overcome Digital Extinction - Digital Evolution - Andrea Vascellari
Five practical tips on how to prevent and eventually overcome five of the most frequent causes of digital extinction that brands, organizations, and at times also individuals are facing today.
How to get to Runter End: Generating English placenames with a neural network - Doug Clow
These are slides for a talk at MK Geek Night, Thu 7 March 2019. Doug trained a neural network on the official database of placenames in England, then got it to generate its own suggestions. Some were convincing, some were funny, and some even turned out to be real places. Doug will give a bit of an explanation of how he did it, and show some of the best results.
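Doug's actual approach used a neural network trained on the official placename database. As a much simpler sketch of the same idea (learn character statistics from real names, then sample new ones), here is a character-level Markov chain trained on a tiny made-up sample rather than the real database:

```python
# Character-level Markov chain name generator: for each run of `order`
# characters, record which character follows it in the training names,
# then sample new names by walking those recorded transitions.
import random
from collections import defaultdict

def train(names, order=2):
    """Map each character context to the characters observed after it."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"  # ^ = start, $ = end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Sample a new name character by character until the end marker."""
    context, out = "^" * order, []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out).title()

# Tiny illustrative sample of Milton Keynes-area placenames.
names = ["Milton Keynes", "Newport Pagnell", "Stony Stratford",
         "Wolverton", "Bletchley", "Woburn Sands"]
model = train(names)
random.seed(42)
print(generate(model))
```

A neural network generalises far better than this order-2 chain, but the training/sampling loop is the same shape: learn what tends to follow what, then sample.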
A partial history of Educational Technology at the Open University - Doug Clow
This is a talk given at the OU's Computers and Learning Research Group, on 17 Jan 2019. In it I give a very partial history of educational technology at the Open University, since its founding in 1969 to the present day. It’ll be partial in multiple senses. A full history would take far longer than a single session. If I gave a comprehensively synoptic account, it’d be too broad-brush to be interesting. So I’ll be selecting elements to focus on, and I’ll be unashamedly partial in picking the ones that appeal particularly to me. We’ve always been pioneers in using technology to help our students learn. What that means has changed profoundly in some ways, and is much the same in others. As Santayana said, “Those who cannot remember the past are condemned to repeat it.” Come along to hear the digital equivalent of “I remember when all this was fields”!
Where is the evidence? A call to action for learning analytics - Doug Clow
Keynote presentation at LASI-Rocky Mountains online conference, 12 June 2017, based on a similar talk at LAK17, Learning Analytics and Knowledge Conference 2017, Vancouver. An analysis of the nature of evidence, the state of the evidence in the field of learning analytics, and some suggestions for ways to improve, based on work from the LACE project's Evidence Hub.
Trains and Balloons: An Introduction to Learning Analytics - Doug Clow
Slides for a talk given at the Institute of Physics Higher Education Group meeting on Concept Inventories and Learning Analytics, Tue 4 April 2017, Open University, UK
A Whistlestop Tour of Theories for TEL Research - Doug Clow
Presentation to postgraduate students at the Institute of Educational Technology, The Open University, UK, 28 Feb 2017. A very brief overview of some of the theories that are often referenced in TEL research.
LAEP Visions of the Future of Learning Analytics - Doug Clow
Presentation on the LACE project's Visions of the Future of Learning Analytics work from the LAEP project's expert workshop in Amsterdam, 15-16 March 2016.
How can universities scale up learning analytics beyond small-scale pilots to seriously use data to improve student learning? This interactive workshop was designed to help you think this through for your institution.
Universities are hard to change. Having good data and analytics is a good start, but is only one part of success. This session will provide tools and frameworks to help you analyse what else is needed, building on experiences of successful large-scale learning analytics activity at the Open University and the University of Technology, Sydney, and from the pan-European Learning Analytics Community Exchange project.
Slides for a talk at Bett, London, 20 January 2016.
Visions of the Future of Learning Analytics - Doug Clow
Eight visions of the future of learning analytics, created as a way of exploring possible futures by the LACE (Learning Analytics Community Exchange) Project, and presented at Bett 2016, London, 20 January 2016
Moving through MOOCs: Pedagogy, Learning and Patterns of Engagement.
Presentation at EC-TEL 2015, September, 2015, Toledo, Spain.
[This is the shorter, more visual version. The detailed version is available at http://www.slideshare.net/R3beccaF/moving-through-moocs-pedagogy-learning-and-patterns-of-engagement.]
Massive open online courses (MOOCs) are part of the lifelong learning experience of people worldwide. Many of these learners participate fully. However, the high levels of dropout on most of these courses are a cause for concern. Previous studies have suggested that there are patterns of engagement within MOOCs that vary according to the pedagogy employed. The current paper builds on this work and examines MOOCs from different providers that have been offered on the FutureLearn platform. A cluster analysis of these MOOCs shows that engagement patterns are related to pedagogy and course duration. Learners did not work through a three-week MOOC in the same ways that learners work through the first three weeks of an eight-week MOOC.
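The paper's cluster analysis groups learners by their week-by-week engagement. As a hedged, minimal sketch of that kind of analysis (not the paper's actual code, data, or cluster count), here is a tiny two-cluster k-means run on made-up weekly engagement profiles:

```python
# Minimal 2-means clustering of week-by-week engagement vectors.
# Each tuple is the (made-up) fraction of course steps a learner
# completed in weeks 1-3.

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=10):
    """Two-cluster k-means, seeded from the first and last point
    so the run is deterministic."""
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        # assign each point to its nearest center
        for p in points:
            idx = 0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
            clusters[idx].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# "Completers" stay active all three weeks; "samplers" drop off early.
learners = [(0.9, 0.8, 0.9), (1.0, 0.9, 0.8), (0.8, 0.9, 1.0),
            (0.7, 0.1, 0.0), (0.9, 0.2, 0.1), (0.6, 0.0, 0.0)]
centers, clusters = kmeans2(learners)
print([len(c) for c in clusters])  # [3, 3]: two clear engagement patterns
```

With real MOOC data the vectors are longer (one entry per week) and the number of clusters is chosen empirically, which is how patterns like the duration effect described above emerge.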
Creating an action plan for learning analytics - Doug Clow
Slides for a talk at Bett 2015, London, on Friday 23 January at ExCeL.
Learning analytics has great potential. By using data more effectively, we can understand and improve learning and the learning environment. Trail-blazing projects, exciting demonstrations and earnest strategy papers set out a compelling vision for data in HE.
That vision can sometimes seem far from institutional reality. How can we get some of those benefits for our learners?
This interactive workshop will help participants assess their institution’s current capability for making use of learning analytics, and help them plan for action. The facilitators will draw on a wide range of practical experience, including from the pan-European Learning Analytics Community Exchange project.
Learning Analytics: Making learning better? - Doug Clow
Learning Analytics: Making learning better?
Slides for a talk at Bett 2015, London, Fri 23 January, as part of the LACE project (www.laceproject.eu)
This panel discussion starts with a short introduction to learning analytics and educational data mining, highlighting how European schools are using different types of data to help support, manage and predict learning outcomes. It includes viewpoints from national school networks in the Nordic countries and the Netherlands, a research input from the European Commission-supported LACE project highlighting research on the use of learning analytics, and an expert input on ethical and privacy issues in the application of learning analytics. Participants will be encouraged to share their views and, where interested, to join the growing LACE community.
Learning Analytics Examples from the UK, Australia and North America - Doug Clow
Examples of Learning Analytics from the UK, Australia and North America, aimed at schools level. Slides from a talk at a pre-conference seminar on learning analytics at the EMINENT conference, European Schoolnet, Pädagogishe Hochschule Zürich, 12 November 2014.
What is Learning Analytics? Slides from a talk at a pre-conference seminar on learning analytics at the EMINENT conference, European Schoolnet, Pädagogishe Hochschule Zürich, 12 November 2014.
Learning Analytics: A General Introduction and Perspectives from the UK - Doug Clow
A presentation at a seminar on learning analytics for schools held at Skolverket, the Swedish National Agency for Schools, in Stockholm, Sweden, in collaboration with the Norwegian Centre for ICT in Education, on 9 October 2014. Part of the LACE project #laceproject www.laceproject.eu
http://lanyrd.com/2014/seminar-on-learning-analytics-for-schools-in-sto-2/
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow, manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Organizations today feel more susceptible to external and internal cyber threats because of the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery process. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
2. Who I am
Photo (CC)-BY-SA Bill Bertram https://commons.wikimedia.org/wiki/File:ZXSpectrum48k.jpg
3. Quick quiz:
How much do you
know about AI?
A. Nothing special
B. I’m interested
C. I’ve applied it myself
D. I have strong views
about the use of
gradient descent for
hyperparameter
optimisation in training
deep convolutional
neural networks, or at
least, I fully understood
this sentence.
Photo by Maximalfocus on Unsplash
4. 1. What is AI?
2. How does it work?
3. How does it go wrong?
a) In entertaining ways
b) In serious ways
4. How does it go well?
5. 1. What is AI?
2. How does it work?
3. How does it go wrong?
a) In entertaining ways
b) In serious ways
4. How does it go well?
6. Can machines
think?
• Turing Test:
Can you make a machine
humans can’t tell is a
machine?
• Eliza (algorithm)
• Chatbots (machine learning)
• GPT-3 (neural network)
Photo (CC) BY-SA Antoine Taveneaux https://en.wikipedia.org/wiki/Alan_Turing#/media/File:Turing-statue-Bletchley_14.jpg
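Eliza-style chatbots are pure algorithm, with no learning at all: a list of patterns with canned responses. A minimal sketch in that spirit (these rules are invented for illustration, not Weizenbaum's original script):

```python
import re

# A few illustrative rules in the spirit of ELIZA's pattern/response
# scripts. These patterns are made up, not Weizenbaum's originals.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no rule matches

print(eliza_reply("I am worried about my code"))
# → How long have you been worried about my code?
```

The whole trick is reflecting the user's own words back at them; there is no understanding anywhere, which is part of why it so often fails the Turing Test on closer inspection.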
9. Photo by Timelab Pro on Unsplash
• Games
• Chess, Go, video games
• Optimisation
• Distribution, manufacturing, retail, finance
• Complex control systems
• Robots, drones, self-driving cars
• Recommenders
• Amazon, YouTube, Netflix
• Speech recognition, natural language processing
• Siri, Alexa, Hey Google
10. 1. What is AI?
2. How does it work?
3. How does it go wrong?
a) In entertaining ways
b) In serious ways
4. How does it go well?
12. Photo by Anika Huizinga on Unsplash
More like this
(supervised learning)
What have we here?
(unsupervised learning)
Get a high score
(reinforcement learning)
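Two of these three framings can be shown in miniature. The sketch below uses invented 1-D numbers, with a nearest-centroid classifier standing in for supervised "more like this" and a two-group 1-D k-means standing in for unsupervised "what have we here" (reinforcement learning needs an environment loop, so it is left out):

```python
def nearest_centroid_fit(points, labels):
    """Supervised: learn one centroid per label from labelled examples."""
    centroids = {}
    for lab in set(labels):
        members = [p for p, l in zip(points, labels) if l == lab]
        centroids[lab] = sum(members) / len(members)
    return centroids

def nearest_centroid_predict(centroids, x):
    # Pick the label whose centroid is closest to x
    return min(centroids, key=lambda lab: abs(centroids[lab] - x))

def two_means(points, iters=10):
    """Unsupervised: split unlabelled points into two groups (1-D k-means)."""
    a, b = min(points), max(points)           # initialise at the extremes
    for _ in range(iters):
        ga = [p for p in points if abs(p - a) <= abs(p - b)]
        gb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ga) / len(ga), sum(gb) / len(gb)
    return sorted([a, b])

# "More like this": labelled cat/dog weights (hypothetical numbers)
cents = nearest_centroid_fit([4, 5, 30, 35], ["cat", "cat", "dog", "dog"])
print(nearest_centroid_predict(cents, 6))     # → cat

# "What have we here": the same numbers with no labels at all
print(two_means([4, 5, 30, 35]))              # → [4.5, 32.5]
```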
18. By Petty Officer Photographer Jay Allen - https://www.royalnavy.mod.uk/news-and-latest-activity/news/2021/may/20/210520-carriers-at-sea-and-strike-warrior,
OGL 3, https://commons.wikimedia.org/w/index.php?curid=105562576
Bad optimisation
• Fix bugs in sorting algorithms → delete the list to be sorted (no items = nothing out of order)
• Minimise difference between generated and target output → delete the target output
• Design an optimal lens → it's 20 m thick
• Land aircraft on carrier → so fast the braking cable force overflows to zero
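The sorting example above fits in a few lines of code: if the objective only counts out-of-order pairs, an "optimiser" that deletes the data scores perfectly. (A toy restating of the joke, not any real system.)

```python
def disorder(xs):
    """Objective: number of adjacent out-of-order pairs (0 = 'sorted')."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def gamed_sort(xs):
    # The "fix": return an empty list. With no items, no pair can
    # possibly be out of order, so the objective scores a perfect 0.
    return []

print(disorder([3, 1, 2]), disorder(gamed_sort([3, 1, 2])))  # → 1 0
```

The objective is satisfied exactly as written; it is the gap between what we wrote and what we meant that the optimiser exploits.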
19. 1. What is AI?
2. How does it work?
3. How does it go wrong?
a)In entertaining ways
b)In serious ways
4. How does it go well?
22. BMJ 2020;369:m1328
7 April 2020
• 232 predictors
• Poorly reported
• High risk of bias
• Two worth validating
Nature Machine Intelligence
3, 199–217 (2021)
• 2,212 studies
• 415 after initial screening
• 62 after quality screening
• None of clinical use
Photo by Fusion Medical Animation on Unsplash
They used AI to help with Covid …
… it wasn’t good.
23. MQ-9 Reaper over Afghanistan
By Lt. Col. Leslie Pratt - commons file, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68095681
New capabilities to do bad
Increasing inequality
Unemployment
25. Superintelligence
(CC)-BY William Clifford https://flic.kr/p/37PkGN
The AI does not hate you,
nor does it love you,
but you are made out of atoms
which it can use for something else.
26. 1. What is AI?
2. How does it work?
3. How does it go wrong?
a) In entertaining ways
b) In serious ways
4. How does it go well?
28. GANs
generator vs discriminator
Cats from http://thesecatsdonotexist.com/ https://devopstar.com/2019/02/25/generating-cats-with-stylegan-on-aws-sagemaker
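The generator-vs-discriminator game can be sketched in miniature. In this toy (all numbers and learning rates are invented for illustration, and real samples are just numbers near 5 rather than cat photos), a generator that emits samples near its single parameter g plays against a logistic discriminator, and each nudges its numbers against the other:

```python
import math
import random

random.seed(0)

def sigmoid(a):
    # numerically safe logistic function
    if a >= 0:
        return 1 / (1 + math.exp(-a))
    ea = math.exp(a)
    return ea / (1 + ea)

REAL_MEAN = 5.0   # "real" data: numbers near 5 (an arbitrary target)
g = 0.0           # generator parameter: it emits numbers near g
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05

for _ in range(3000):
    real = REAL_MEAN + random.uniform(-0.5, 0.5)
    fake = g + random.uniform(-0.5, 0.5)
    # Discriminator step: push D(real) towards 1 and D(fake) towards 0
    pr = sigmoid(w * real + b)
    pf = sigmoid(w * fake + b)
    w += lr * ((1 - pr) * real - pf * fake)
    b += lr * ((1 - pr) - pf)
    # Generator step: move g so the discriminator rates fakes as real
    pf = sigmoid(w * fake + b)
    g += lr * (1 - pf) * w

print(round(g, 2))  # g has drifted from 0 towards the real data
```

At equilibrium the fakes are indistinguishable from the real samples and the discriminator is reduced to guessing, which is exactly the dynamic behind those generated cats.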
30. Ethical AI
• Fairness
• Accountability
• Sustainability
• Safety
• Transparency
Leslie, D. (2019). Understanding artificial intelligence
ethics and safety: A guide for the responsible design
and implementation of AI systems in the public sector.
The Alan Turing Institute.
https://doi.org/10.5281/zenodo.3240529
Photo by Hendo Wang on Unsplash
43. How to use a neural network
1. Gather data
2. Train your
network
3. Use it on new data
4. Profit
Photo by Joshua Lanzarini on Unsplash
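The four steps above can be sketched with the smallest possible "network", a single neuron, on invented data (every number here is made up for illustration):

```python
import math

# 1. Gather data: made-up examples, feature x -> label y (0 or 1).
data = [(0.5, 0), (1, 0), (2, 0), (4, 1), (5, 1), (6, 1)]

# 2. Train: one cell (weight w, bias b), fitted by repeatedly
#    nudging the numbers towards the right answers.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # the cell's output
        w += lr * (y - p) * x                 # adjust the numbers a bit
        b += lr * (y - p)

# 3. Use it on new data
def predict(x):
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

print(predict(2.5), predict(4.5))  # → False True
```

Step 4 is left as an exercise. A real network is the same loop with many more cells, more layers, and cleverer ways of deciding which numbers to adjust.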
44. • It isn’t magic
• It depends on the data
• It has got way better
• It can go wrong
• in entertaining ways
• and in really bad ways.
• It will get better
So pleased to be here giving a talk.
Love giving talks, first in nearly 3y
This little guy here was put in by AI
Data scientist, digital transformation leader, researcher, teacher. Interested in human & machine learning for decades.
Turing Test Aka The Imitation Game, starring Benedict Cumberbatch
PowerPoint suggested these icons. Woah.
What does AI do well now?
PowerPoint suggested these icons. Woah.
More like this – known data, cats/dogs: prediction, generation, pattern recognition, anomaly/novelty.
What have we here – unknown data: find patterns, group this data, classify, categorise our customers.
High score – a way of keeping score, explore/exploit: maximise revenue, minimise inventory, improve completion rate.
Cells like in Excel
Numbers in Input are your data, photo pixels, Output, cat or dog
Adjust the numbers until the inputs give you the right output
Many more cells, more layers, arrows
I trained a NN on English placenames
It generated some fun ones
Janelle Shane
Inktober, creative prompt, draw every day for October
NN-generated prompts
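As a much simpler stand-in for the neural network, the same "learn which letters follow which" idea works as a character-level Markov chain (the training list below is a tiny made-up sample, not the real placename corpus):

```python
import random

random.seed(42)

# Tiny invented training list, standing in for a real placename corpus.
NAMES = ["milton", "keynes", "bletchley", "wolverton", "stratford",
         "oxford", "bedford", "buckingham", "newport", "stony"]

# Count which character tends to follow which ("^" = start, "$" = end).
follows = {}
for name in NAMES:
    padded = "^" + name + "$"
    for a, b in zip(padded, padded[1:]):
        follows.setdefault(a, []).append(b)

def generate(max_len=12):
    out, ch = "", "^"
    while len(out) < max_len:
        ch = random.choice(follows[ch])
        if ch == "$":
            break
        out += ch
    return out

print([generate() for _ in range(5)])
```

The outputs are plausible-looking nonsense for the same underlying reason the neural network's are: the model only knows local letter statistics, not what a place is.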
Learn a locomotion strategy, fast. Makes a tall thing that falls over.
Jump as high as you can. No, not rolling over – highest point your lowest bit reaches. It falls over & pole vaults, not jumping.
YouTube, optimising for what sucks in your attention, has lots of vortexes that suck people in, tries them out on you. Ends up with the hard stuff: conspiracy theories, radicalisation, etc.
Facebook, socially meaningful engagement = most controversial posts, from friends & family, stuff you can’t help but engage on, and pull in reinforcements. I left.
Job applications: hard work, biased.
Train a NN on previous applicants and who was appointed.
Racist! Delete ethnicity – but names, career paths, universities still leak it.
AI won't stop racism; the people doing it have to work hard at stopping it.
Sometimes it’s just not very good.
https://www.bmj.com/content/369/bmj.m1328
https://www.nature.com/articles/s42256-021-00307-0
gig economy needs AI
How could really good AI be bad? Artificial general intelligence, not narrow
Self-improvement.
Paperclip maximiser – optimise the manufacturing process, get smarter
Social intelligence, better deals with suppliers, sales
Far fetched, but so are the AIs below
Problem this week? No.
Worth looking at? Yes. And people are.
Chess used to mean intelligent! Deep Blue vs Kasparov, 1990s
AlphaZero knew almost nothing, learned it from the game.
Chess, Shogi, Go
Huge compute effort to train, much less to play.
Generative adversarial network
Images, then cue generator in with prompt images
Deepfakes
Art!
Proteins DO everything, nanobots
Shape is everything
DNA, amino acids, protein
Lots of DNA seqs, fewer shapes
Transformatory
Fairness in data, design, outcome, implementation
Accountability before and after
Sustainability – economic, environmental, social (license to operate)
Safety – accuracy, reliability, security, robustness
Transparency – explainable AI, openness
There is, as the man used to say, one more thing. Example of a GAN applied to the video domain with both a motion prompt and a separate photo prompt
Combine with audio track to produce composite video
There is, as the man used to say, one more thing
If you can play chess, you’re intelligent
Mid-1940s, Turing theorised how computers could play chess. In 1949, Claude Shannon at Bell Labs published a paper with a description of how to do it. In 1950, Turing made an actual algorithm. No suitable machine, so he calculated manually, 30 min per move. The algorithm lost, history was made, paper in 1953.
https://www.pcworld.com/article/2036854/a-brief-history-of-computer-chess.html#slide3
1958: IBM programmer Alex Bernstein playing his chess program at the console of the 704 mainframe. Bernstein told the computer what move to make by flipping the switches on the front panel. The program took about eight minutes to calculate each move.