Tutorial, Learning Analytics Summer Institute, Ann Arbor, June 2017
As algorithms pervade societal life, they are moving from an arcane topic reserved for computer scientists and mathematicians to the object of far wider academic and mainstream media attention (try a web news search on algorithms, and then add ethics). As agencies delegate to machines increasing powers to make judgements about complex human qualities such as ‘employability’, ‘credit worthiness’, or ‘likelihood of committing a crime’, we are confronted by the challenge of “governing algorithms”, lest they turn into Weapons of Math Destruction. But in what senses are they opaque, and to whom? And what is meant by “accountable”?
The education sector is clearly not immune from these questions, and it falls to the Learning Analytics community to convene a vigorous debate, and devise good responses. In this tutorial, I’ll set the scene, and then propose a set of lenses that we can bring to bear on a learning analytics infrastructure, to identify some of the meanings that “accountability” might have. It turns out that algorithmic transparency and accountability may be the wrong focus — or rather, just one piece of the jigsaw. Intriguingly, even if you can look inside the algorithmic ‘black box’, which is imagined to lie in the system’s code, there may be little of use there. I propose that a human-centred informatics approach offers a more holistic framing, where the aggregate quality we are after might be termed Analytic System Integrity. I’ll work through a couple of examples as a form of ‘audit’, to show where one can identify weaknesses and opportunities, and consider the implications for how we conceive and design learning analytics that are responsive to the questions that society will rightly be asking.
Towards Contested Collective Intelligence
Simon Buckingham Shum, Director Connected Intelligence Centre, University of Technology Sydney
This talk is to open up a dialogue with the important work of the SWARM project. I’ll introduce the key ideas that have shaped my work on interactive software tools to make thinking visible, shareable and contestable, some of the design prototypes, and some of the lessons we’ve learnt en route.
Teaching, Assessment and Learning Analytics: Time to Question Assumptions
Simon Buckingham Shum
Presented by the Assessment Research Centre
and the Melbourne Centre for the Study of Higher Education
Teaching, Assessment and Learning Analytics: Time to Question Assumptions
Simon Buckingham Shum
Professor of Learning Informatics, and Director of the Connected Intelligence Centre (CIC)
University of Technology Sydney
When: 11.30 am - 12.30 pm, Wed 13 Sep 2017
Where: Frank Tate Room, Level 9, 100 Leicester St, Carlton
This will be a non-technical talk accessible to a broad range of educational practitioners and researchers, designed to provoke a conversation that provides time to question assumptions. The field of Learning Analytics sits at the convergence of two fields: Learning (including learning technology, educational research and learning/assessment sciences) and Analytics (statistics; visualisation; computer science; data science; AI). Many would add Human-Computer Interaction (e.g. participatory design; user experience; usability evaluation) as a differentiator from related fields such as Educational Data Mining, since the Learning Analytics community attracts many with a concern for the sociotechnical implications of designing and embedding analytics in educational organisations.
Learning Analytics is viewed by many educators with the same suspicion they reserve for AI or “learning management systems”. While in some cases this suspicion is justified, I will question other assumptions using some learning analytics examples that can serve as objects for us to think with. I am curious to know what connections and questions arise when these are shared.
Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, where he was appointed in August 2014 to direct the new Connected Intelligence Centre. Previously he was Professor of Learning Informatics and an Associate Director at The UK Open University’s Knowledge Media Institute. He is active in the field of Learning Analytics as a co-founder and former Vice President of the Society for Learning Analytics Research, and Program Co-Chair of LAK18, the International Learning Analytics and Knowledge Conference. Previously he co-founded the Compendium Institute and Learning Emergence networks. Simon brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI Design Argumentation (PhD, York). He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin, wrote Constructing Knowledge Art (2015). He was recently appointed as a Fellow of The RSA. http://Simon.BuckinghamShum.net
MIT Program on Information Science Talk -- Julia Flanders on Jobs, Roles, Skills, Tools
Micah Altman
Julia Flanders, Director of the Digital Scholarship Group in the Northeastern University Library and Professor of Practice in Northeastern's English Department, gave a talk on Jobs, Roles, Skills, Tools: Working in the Digital Academy as part of the Program on Information Science Brown Bag Series.
In the talk, illustrated by the slides below, Julia discusses the evolving landscape of digital humanities (and digital scholarship more broadly) and considers the relationship between technology, tool development, and professional roles.
For more see: http://informatics.mit.edu/event/brown-bag-jobs-roles-skills-tools-working-digital-academy-julia-flanders
SolarSPELL: The Solar Powered Educational Learning Library - Experiential Lea...
Micah Altman
Access to high-quality, relevant information is absolutely foundational for a quality education. Yet, so many schools across the developing world lack fundamental resources, like textbooks, libraries, electricity and Internet connectivity. The SolarSPELL (Solar Powered Educational Learning Library) is designed specifically to address these infrastructural challenges, by bringing relevant, digital educational content to offline, off-grid locations. SolarSPELL is a portable, ruggedized, solar-powered digital library that broadcasts a webpage with open-access educational content over an offline WiFi hotspot, content that is curated for a particular audience in a specified locality—in this case, for schoolchildren and teachers in remote locations. It is a hands-on, iteratively developed project that has involved undergraduate students in all facets and at every stage of development. This talk will examine the design, development, and deployment of a for-the-field technology that looks simple but has a quite complex background.
Laura Hosman is Assistant Professor at Arizona State University, holding a joint appointment in the School for the Future of Innovation in Society and in The Polytechnic School. Her work is action-oriented and focuses on the role for information and communications technology (ICT) in developing countries. Presently, she focuses on ICT-in-education projects, and brings her passion for experiential learning to the classroom by leading real-world-focused, project-based courses that have seen student-built technology deployed in schools in Haiti, Vanuatu, Micronesia, Samoa, and Tonga.
Information Science Brown Bag talks, hosted by the Program on Information Science, consist of regular discussions and brainstorming sessions on all aspects of information science and on uses of information science and technology to assess and solve institutional, social and research problems. These are informal talks, and discussions are often inspired by real-world problems being faced by the lead discussant.
Reputation Management for Early Career Researchers
Micah Altman
In the rapidly changing world of research and scholarly communications, researchers are faced with a fast growing range of options to publicly disseminate, review, and discuss research—options which will affect their long-term reputation. Early career scholars must be especially thoughtful in choosing how much effort to invest in dissemination and communication, and what strategies to use.
Dr. Micah Altman briefly reviews a number of bibliometric and scientometric studies of quantitative research impact, a sampling of influential qualitative writings advising this area, and an environmental scan of emerging researcher profile systems. Based on this review, and on professional experience on dozens of review panels, Dr. Altman suggests some steps early career researchers may consider when disseminating their research and participating in public reviews and discussion.
Artificial Intelligence (AI) in Libraries: Training for Innovation Webinar
Said Ali Said
Objectives
The objectives of the webinar are to:
• introduce AI in libraries
• describe the IDEA Institute on AI and its contribution to providing professional, innovative training in AI to library and other information professionals
• understand challenges and opportunities in implementing AI in libraries based on real-world experiences of the first cohort of Institute Fellows
• consider equity, diversity, inclusion and accessibility issues, and ethical questions, in AI implementation.
Speakers
Prof. Dr. Dania Bilal
Professor, School of Information Sciences at the University of Tennessee in Knoxville, TN.
Researcher, scholar and educator in Human Information Behavior, Human–Computer Interaction (HCI), User Experience and Design (UXD), Human–AI Interaction, and Information Science Theory.
Research focus is on user information interaction and behavior (children, teenagers and adults) with information systems, products and interfaces; and on user-centered design for better user engagement and experiences.
Principal Investigator and co-developer, IDEA Institute on Artificial Intelligence.
Clara M. Chu
Director and Mortenson Distinguished Professor, Mortenson Center for International Library Programs, University of Illinois at Urbana-Champaign, IL.
• Expert in developing appropriate and strategic solutions to deliver equitable and relevant library services in culturally diverse and dynamic libraries.
• Studies the information needs of culturally diverse communities in a globalized and technological society.
• Co-developer, IDEA Institute on Artificial Intelligence.
Target Audience
• Staff in any type of library and information center or information environment.
• Library and information science students, educators and researchers.
Writing Analytics for Epistemic Features of Student Writing #icls2016 talk
Simon Knight
Talk presented at #ICLS2016 in Singapore. I discuss levels of description as sites of epistemic cognition, focusing on writing and the use of textual features to associate rubric scores with epistemic cognition.
My thanks to my collaborators (listed on the paper) particularly Laura Allen, who also generously let me adapt the later slides on NLP studies of writing.
Abstract: Literacy, encompassing the ability to produce written outputs from the reading of multiple sources, is a key learning goal. Selecting information, and evaluating and integrating claims from potentially competing documents, is a complex literacy task. Prior research exploring differing behaviours and their association with constructs such as epistemic cognition has used ‘multiple document processing’ (MDP) tasks. Using this model, 270 paired participants wrote a review of a document. Reports were assessed using a rubric associated with features of complex literacy behaviours. This paper focuses on the conceptual and empirical associations between those rubric marks and textual features of the reports, measured on a set of natural language processing (NLP) indicators. Findings indicate the potential of NLP indicators for providing feedback on the writing of such outputs, demonstrating clear relationships both across rubric facets and between rubric facets and specific NLP indicators.
ETHICS IN E-LEARNING
Assist.Prof.Dr. Elif TOPRAK – Anadolu University
etoprak1@anadolu.edu.tr
Assist.Prof.Dr. Berrin ÖZKANAL – Anadolu University
Res. Assist.Dr. Sinan AYDIN – Anadolu University
Instructor Seçil KAYA – Anadolu University
TOJET: The Turkish Online Journal of Educational Technology – April 2010, Volume 9, Issue 2
This is a brief review of current multi-disciplinary and collaborative projects at Kno.e.sis led by Prof. Amit Sheth. They cover research in big social data, IoT, the semantic web, the semantic sensor web, health informatics, personalized digital health, social data for social good, smart cities, crisis informatics, digital data for the materials genome initiative, and more. Dec 2015 edition.
Making Decisions in a World Awash in Data: We’re going to need a different bo...
Micah Altman
In his abstract, Scriffignano summarizes as follows:
I explore some of the ways in which the massive availability of data is changing and the types of questions we must ask in the context of making business decisions. Truth be told, nearly all organizations struggle to make sense out of the mounting data already within the enterprise. At the same time, businesses, individuals, and governments continue to try to outpace one another, often in ways that are informed by newly-available data and technology, but just as often using that data and technology in alarmingly inappropriate or incomplete ways. Multiple “solutions” exist to take data that is poorly understood, promising to derive meaning that is often transient at best. A tremendous amount of “dark” innovation continues in the space of fraud and other bad behavior (e.g. cyber crime, cyber terrorism), highlighting that there are very real risks to taking a fast-follower strategy in making sense out of the ever-increasing amount of data available. Tools and technologies can be very helpful or, as Scriffignano puts it, “they can accelerate the speed with which we hit the wall.” Drawing on unstructured, highly dynamic sources of data, fascinating inference can be derived if we ask the right questions (and maybe use a bit of different math!). This session will cover three main themes: the new normal (how the data around us continues to change), how we are reacting (bringing data science into the room), and the path ahead (creating a mindset in the organization that evolves). Ultimately, what we learn is governed as much by the data available as by the questions we ask. This talk, both relevant and occasionally irreverent, will explore some of the new ways data is being used to expose risk and opportunity and the skills we need to take advantage of a world awash in data.
Presentation to the second LIS DREaM workshop held at the British Library on Monday 30th January 2012.
More information available at: http://lisresearch.org/dream-project/dream-event-3-workshop-monday-30-january-2012/
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
Analysing User Knowledge, Competence and Learning during Online Activities
Stefan Dietze
Research talk given at Italian National Research Council (CNR), Institute for Educational Technologies (ITD) on learning analytics in everyday online activities.
Fairness-aware learning: From single models to sequential ensemble learning a...
Eirini Ntoutsi
An overview of fairness-aware learning: from batch-learning with single models to batch-learning with sequential ensembles and to fairness-aware learning over non-stationary data streams
Mining and Understanding Activities and Resources on the Web
Stefan Dietze
Research seminar at KMRC Tübingen, Germany, on mining and understanding Web activities and resources through knowledge discovery and machine learning approaches.
Krishnaprasad Thirunarayan, Trust Management: Multimodal Data Perspective. Invited Tutorial, The 2015 International Conference on Collaboration Technologies and Systems (CTS 2015), June 2015.
What role do “power learners” play in online learning communities?
@cristobalcobo
This study focusses on the role of highly active participants in online learning communities on Facebook. These people, often known as “power users” in the literature on social computing, are a common feature of a wide range of online learning groups, and are responsible not only for creating most of the content but also for getting discussion going and providing a basis for others’ participation. We test whether similar dynamics hold true in the context of online learning.
Based on a transactional dataset of almost 10,000 interactions with an online community of 32 postgraduate students who were following the same online course, we find evidence that power users also exist in the context of online learning. However, whilst they do create a lot of content, we find that they are not fundamental to keeping the group together, and in fact are less adept at creating content which generates responses than other “normal” users. This suggests that online learning communities may have different dynamics to other types of electronic community: it also suggests that design efforts should not be focused solely on attracting a small core of “power learners”. Rather, diverse types of users are needed for online learning communities to survive and prosper.
Authors:
Cristóbal Cobo, Center for Research - Ceibal Foundation, Uruguay
Monica Bulger, Data & Society Research Institute, United States
Jonathan Bright, Oxford Internet Institute, United Kingdom
Ryan den Rooijen, Oxford Internet Institute, United Kingdom
Presented at the LINC Conference (MIT, 2016) Digital Inclusion: Transforming Education through Technology.
UCL joint Institute of Education (London Knowledge Lab) & UCL Interaction Centre seminar, 20th April 2016. Replay: https://youtu.be/0t0IWvcO-Uo
Algorithmic Accountability & Learning Analytics
Simon Buckingham Shum
Connected Intelligence Centre, University of Technology Sydney
ABSTRACT. As algorithms pervade societal life, they are moving from the preserve of computer science to becoming the object of far wider academic and media attention. Many are now asking how the behaviour of algorithms can be made “accountable”. But why are they “opaque” and to whom? As this vital discussion unfolds in relation to Big Data in general, the Learning Analytics community must articulate what would count as meaningful questions and satisfactory answers in educational contexts. In this talk, I propose different lenses that we can bring to bear on a given learning analytics tool, to ask what it would mean for it to be accountable, and to whom. From a Human-Centred Informatics perspective, it turns out that algorithmic accountability may be the wrong focus.
BIO. Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, which he joined in August 2014 to direct the new Connected Intelligence Centre. Prior to that he was at The Open University’s Knowledge Media Institute 1995-2014. He brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI (PhD, York) where he worked with Rank Xerox Cambridge EuroPARC on Design Rationale. He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin wrote Constructing Knowledge Art (2015). He is active in the emerging field of Learning Analytics and is a co-founder of the Society for Learning Analytics Research, Compendium Institute and Learning Emergence network.
NCompass Live - March 13, 2019
http://nlc.nebraska.gov/NCompassLive/
In the presentation on February 13, I covered what emerging technology is and how it relates to libraries. Now it’s time to dive into what that means on a larger scale. What makes technology good or bad? Who is really qualified to make that determination? Anyone who tracks emerging technology will start to think about how this technology will affect the future of our communities and the world. What will make a piece of technology influential and powerful in the world? Tune in if you want to learn more about the following topics:
•Which ethics matter most in technology?
•What makes technology good or bad?
•What potential dangers should we be aware of with new technology?
•What factors might affect society in the long-run?
There are no easy answers to any of these questions. The concept of ‘ethics’ tends to be a grey area for many, and understandably so. This presentation won’t give you a definitive right or wrong stance on all technology, but it will provide you with the tools to make that decision for yourself.
Presenter: Amanda Sweet, Technology Innovation Librarian, Nebraska Library Commission.
Artificial Intelligence AI in Libraries Training for Innovation WebinarSaid Ali Said
Objectives The objectives of the webinar are to:
• introduce AI in libraries
• describe the IDEA Institute on AI and its contribution to providing professional, innovative training in AI to library and other information professionals
• understand challenges and opportunities in implementing AI in libraries based on real-world experiences of the first cohort of Institute Fellows
• consider equity, diversity, inclusion and accessibility issues, and ethical questions, in AI implementation.
Speakers
Prof. Dr. Dania Bilal
Professor, School of Information Sciences at the University of Tennessee in Knoxville, TN.
Researcher, scholar and educator in Human Information Behavior, Human–Computer Interaction (HCI), User Experience and Design (UXD), Human–AI Interaction, and Information Science Theory.
Research focus is on user information interaction and behavior (children, teenagers and adults) with information systems, products and interfaces; and on user-centered design for better user engagement and experiences.
Principal Investigator and co-developer, IDEA Institute on Artificial Intelligence.
Clara M. Chu
Director and Mortenson Distinguished Professor, Mortenson Center for International Library Programs, University of Illinois at Urbana-Champaign, IL.
• Expert in developing appropriate and strategic solutions to deliver equitable and relevant library services in culturally diverse and dynamic libraries.
• Studies the information needs of culturally diverse communities in a globalized and technological society.
• Co-developer, IDEA Institute on Artificial Intelligence.
Target Audience
• Staff in any type of library and information center or information environment.
• Library and information science students, educators and researchers.
Writing Analytics for Epistemic Features of Student Writing #icls2016 talkSimon Knight
Talk presented at #ICLS2016 presented in Singapore. I discuss levels of description as sites of epistemic cognition focusing on writing and use of textual features to associate rubric scores with epistemic cognition.
My thanks to my collaborators (listed on the paper) particularly Laura Allen, who also generously let me adapt the later slides on NLP studies of writing.
Abstract: Literacy, encompassing the ability to produce written outputs from the reading of multiple sources, is a key learning goal. Selecting information, and evaluating and integrating claims from potentially competing documents is a complex literacy task. Prior research exploring differing behaviours and their association to constructs such as epistemic cognition has used ‘multiple document processing’ (MDP) tasks. Using this model, 270 paired participants, wrote a review of a document. Reports were assessed using a rubric associated with features of complex literacy behaviours. This paper focuses on the conceptual and empirical associations between those rubric-marks and textual features of the reports on a set of natural language processing (NLP) indicators. Findings indicate the potential of NLP indicators for providing feedback regarding the writing of such outputs, demonstrating clear relationships both across rubric facets and between rubric facets and specific NLP indicators.
ETHICS IN E-LEARNING
Assist.Prof.Dr. Elif TOPRAK – Anadolu University
etoprak1@anadolu.edu.tr
Assist.Prof.Dr. Berrin ÖZKANAL – Anadolu University
Res. Assist.Dr. Sinan AYDIN – Anadolu University
Instructor Seçil KAYA – Anadolu University
TOJET: The Turkish Online Journal of Educational Technology – April 2010, volume 9 Issue 2
This is a brief a brief review of current multi-disciplinary and collaborative projects at Kno.e.sis led by Prof. Amit Sheth. They cover research in big social data, IoT, semantic web, semantic sensor web, health informatics, personalized digital health, social data for social good, smart city, crisis informatics, digital data for material genome initiative, etc. Dec 2015 edition.
Making Decisions in a World Awash in Data: We’re going to need a different bo...Micah Altman
In his abstract, Scriffignano summarizes as follows:
I’ll explore some of the ways in which the massive availability of data is changing, and the types of questions we must ask, in the context of making business decisions. Truth be told, nearly all organizations struggle to make sense out of the mounting data already within the enterprise. At the same time, businesses, individuals, and governments continue to try to outpace one another, often in ways that are informed by newly-available data and technology, but just as often using that data and technology in alarmingly inappropriate or incomplete ways. Multiple “solutions” exist to take data that is poorly understood, promising to derive meaning that is often transient at best. A tremendous amount of “dark” innovation continues in the space of fraud and other bad behavior (e.g. cyber crime, cyber terrorism), highlighting that there are very real risks to taking a fast-follower strategy in making sense out of the ever-increasing amount of data available. Tools and technologies can be very helpful or, as Scriffignano puts it, “they can accelerate the speed with which we hit the wall.” Drawing on unstructured, highly dynamic sources of data, fascinating inference can be derived if we ask the right questions (and maybe use a bit of different math!). This session will cover three main themes: the new normal (how the data around us continues to change), how we are reacting (bringing data science into the room), and the path ahead (creating a mindset in the organization that evolves). Ultimately, what we learn is governed as much by the data available as by the questions we ask. This talk, both relevant and occasionally irreverent, will explore some of the new ways data is being used to expose risk and opportunity, and the skills we need to take advantage of a world awash in data.
Presentation to the second LIS DREaM workshop held at the British Library on Monday 30th January 2012.
More information available at: http://lisresearch.org/dream-project/dream-event-3-workshop-monday-30-january-2012/
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
Analysing User Knowledge, Competence and Learning during Online Activities – Stefan Dietze
Research talk given at Italian National Research Council (CNR), Institute for Educational Technologies (ITD) on learning analytics in everyday online activities.
Fairness-aware learning: From single models to sequential ensemble learning a... – Eirini Ntoutsi
An overview of fairness-aware learning: from batch-learning with single models to batch-learning with sequential ensembles and to fairness-aware learning over non-stationary data streams
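Whatever the learning setting (single batch models, sequential ensembles, or streams), fairness-aware learners are typically evaluated against group fairness metrics. A minimal sketch of one such metric, demographic (statistical) parity difference, is below; the data and function are illustrative, not Ntoutsi's actual method.

```python
# Illustrative sketch: demographic parity difference, a basic group
# fairness metric that fairness-aware learners try to keep close to 0.
# Predictions and group labels below are toy data.

def demographic_parity_diff(y_pred, groups):
    """P(y_hat=1 | group=a) - P(y_hat=1 | group=b) for two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    a, b = sorted(rates)  # deterministic group order
    return rates[a] - rates[b]

# Toy classifier output: group "a" receives the positive outcome 3/4
# of the time, group "b" only 1/4 of the time.
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(y_pred, groups))  # 0.5
```

In the streaming setting the same quantity would be tracked over a sliding window, so drift in the data can be detected as drift in the metric.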
Mining and Understanding Activities and Resources on the Web – Stefan Dietze
Research Seminar at KMRC Tübingen, Germany, on mining and understanding of Web activities and resources through knowledge discovery and machine learning approaches.
Krishnaprasad Thirunarayan, Trust Management: Multimodal Data Perspective. Invited Tutorial, The 2015 International Conference on Collaboration Technologies and Systems (CTS 2015), June 2015.
What role do “power learners” play in online learning communities? – @cristobalcobo
This study focusses on the role of highly active participants in online learning communities on Facebook. These people, often known as “power users” in the literature on social computing, are a common feature of a wide range of online learning groups, and are responsible not only for creating most of the content but also for getting discussion going and providing a basis for others’ participation. We test whether similar dynamics hold true in the context of online learning.
Based on a transactional dataset of almost 10,000 interactions with an online community of 32 postgraduate students who were following the same online course, we find evidence that power users also exist in the context of online learning. However, whilst they do create a lot of content, we find that they are not fundamental to keeping the group together, and in fact are less adept at creating content which generates responses than other “normal” users. This suggests that online learning communities may have different dynamics to other types of electronic community: it also suggests that design efforts should not be focused solely on attracting a small core of “power learners”. Rather, diverse types of users are needed for online learning communities to survive and prosper.
Authors:
Cristóbal Cobo, Center for Research - Ceibal Foundation, Uruguay
Monica Bulger, Data & Society Research Institute, United States
Jonathan Bright, Oxford Internet Institute, United Kingdom
Ryan den Rooijen, Oxford Internet Institute, United Kingdom
Presented at the LINC Conference (MIT, 2016) Digital Inclusion: Transforming Education through Technology.
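The "power user" finding above rests on measuring how concentrated participation is in the interaction log. A hedged sketch of one such measure, the share of all interactions produced by the most active fraction of users, is below; the log data and the 20% cut-off are invented for illustration and are not the study's actual analysis.

```python
# Hypothetical sketch: how much of a community's activity comes from
# its most active members? Toy data; not the study's real dataset.
from collections import Counter

def top_share(interactions, top_frac=0.2):
    """Fraction of all interactions produced by the top `top_frac` of users."""
    counts = sorted(Counter(interactions).values(), reverse=True)
    k = max(1, round(len(counts) * top_frac))
    return sum(counts[:k]) / sum(counts)

# Toy log: one user ID per interaction (post, comment, reaction, ...).
log = ["u1"] * 50 + ["u2"] * 30 + ["u3"] * 10 + ["u4"] * 5 + ["u5"] * 5
print(top_share(log))  # 0.5: the single most active user produced half the activity
```

A value near `top_frac` itself would indicate even participation; values well above it, as here, signal a power-user dynamic.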
UCL joint Institute of Education (London Knowledge Lab) & UCL Interaction Centre seminar, 20th April 2016. Replay: https://youtu.be/0t0IWvcO-Uo
Algorithmic Accountability & Learning Analytics
Simon Buckingham Shum
Connected Intelligence Centre, University of Technology Sydney
ABSTRACT. As algorithms pervade societal life, they are moving from the preserve of computer science to becoming the object of far wider academic and media attention. Many are now asking how the behaviour of algorithms can be made “accountable”. But why are they “opaque” and to whom? As this vital discussion unfolds in relation to Big Data in general, the Learning Analytics community must articulate what would count as meaningful questions and satisfactory answers in educational contexts. In this talk, I propose different lenses that we can bring to bear on a given learning analytics tool, to ask what it would mean for it to be accountable, and to whom. From a Human-Centred Informatics perspective, it turns out that algorithmic accountability may be the wrong focus.
BIO. Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, which he joined in August 2014 to direct the new Connected Intelligence Centre. Prior to that he was at The Open University’s Knowledge Media Institute 1995-2014. He brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI (PhD, York) where he worked with Rank Xerox Cambridge EuroPARC on Design Rationale. He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin wrote Constructing Knowledge Art (2015). He is active in the emerging field of Learning Analytics and is a co-founder of the Society for Learning Analytics Research, Compendium Institute and Learning Emergence network.
NCompass Live - March 13, 2019
http://nlc.nebraska.gov/NCompassLive/
In the presentation on February 13, I covered what emerging technology is and how it relates to libraries. Now it’s time to dive into what that means on a larger scale. What makes technology good or bad? Who is really qualified to make that determination? Anyone who tracks emerging technology will start to think about how this technology will affect the future of our communities and the world. What will make a piece of technology influential and powerful in the world? Tune in if you want to learn more about the following topics:
•Which ethics matter most in technology?
•What makes technology good or bad?
•What potential dangers should we be aware of with new technology?
•What factors might affect society in the long-run?
There are no easy answers to any of these questions. The concept of ‘ethics’ tends to be a gray area for many, and understandably so. This presentation won’t give you a definitive right or wrong stance on all technology. But it will provide you with the tools to make the decision for yourself.
Presenter: Amanda Sweet, Technology Innovation Librarian, Nebraska Library Commission.
Big Data Analytics: Understanding for Research Activity – Andry Alamsyah
Big Data Analytics Presentation at International Workshop Colloquium Exploring Research Opportunity. School of Business and Management (SBM) - ITB. Bandung, 8 August 2019.
The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology – sakshibhattco
The rapid advancement of data science and artificial intelligence (AI) in recent years has revolutionized a wide range of sectors, including healthcare and finance.
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains. The potential wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they’re benefiting, and how they’re structuring our social, economic, and interpersonal lives.
This presentation goes over Data Mining the City, a course taught at Columbia University GSAPP. This lecture also covers complexity, cybernetics and agent-based modeling.
e-SIDES workshop at BDV Meet-Up, Sofia 14/05/2018 – e-SIDES.eu
The following presentation was given at the workshop "Technology solutions for privacy issues: what is the best way forward?" organized by e-SIDES at the BDVe Meet-up in Sofia on May 14, 2018. The workshop, chaired by Gabriella Cattaneo from IDC, involved stakeholders from ICT-18 projects.
An expanding and expansive view of computing research – NAVER Engineering
My recent service for five years as the Assistant Director of the US National Science Foundation leading the Directorate of Computer and Information Science and Engineering has afforded me a broad view of computing research and education. The field of computing is in the midst of another “golden age” and is also at another nexus point – a point of change – where future research directions, and new ways in which research will be done, are coming into focus.
In this talk we will discuss these current and future CS research topics and trends, placing them in the context of the longer-term evolution of our field. We will also discuss computer science education (at several levels), as well as the forces that promise to disrupt not just computer science education, but higher education more broadly.
The Ethics of Artificial Intelligence in Digital Ecosystems – washikmaryam
The ethics of AI go beyond just the technology itself. When we consider AI within the complex web of digital platforms and services (the digital ecosystem), new ethical concerns arise.
A big focus is on how AI decisions can be biased, reflecting the data it's trained on and potentially leading to discrimination. We also need to be mindful of privacy issues and how AI might be used to manipulate users.
To ensure ethical AI in digital ecosystems, we need to consider these potential pitfalls during development and use frameworks to make responsible choices. This includes reflecting on the decision-making process and how AI can be used for good.
Trusted, Transparent and Fair AI using Open Source – Animesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
Cross-discipline collaboration benefits from group thinking: a consolidation of soft systems methodology and user-focused design that starts with design thinking, which sees clients, designers, developers and information architects working together to address user problems and needs. As with any great adventure, design thinking starts with exploration and discovery. This presentation examines the high-level tenets of systems thinking, expands the scope of user thinking to include the tools and devices that users employ, and delves into the specifics of design thinking, its methods and outcomes.
Artificial Intelligence in Cyber Security Research Paper Writing – kellysmith617941
Cyber security risk is one of the reasons why advances in artificial intelligence are so important. If you understand the core of the subject and are thinking of a topic for artificial intelligence in cyber security research paper writing, our specialized team can help you.
The Generative AI System Shock, and some thoughts on Collective Intelligence ... – Simon Buckingham Shum
Keynote Address: Team-based Learning Collaborative Asia Pacific Community (TBLC-APC) Symposium (“Impact of emerging technologies on learning strategies”) 8-9 February 2024, Sydney https://tbl.sydney.edu.au
Slides from my contribution to the panel convened by Jeremy Roschelle at the International Society for the Learning Sciences: Engaging Learning Scientists in Policy Challenges: AI and the Future of Learning
Deliberative Democracy as a strategy for co-designing university ethics aro... – Simon Buckingham Shum
Buckingham Shum, S. (2021). Deliberative Democracy as a strategy for co-designing university ethics around analytics and AI in education. AARE2021: Australian Association for Research in Education, 28 Nov. – 2 Dec. 2021
Deliberative Democracy as a Strategy for Co-designing University Ethics Around Analytics and AI in Education
Simon Buckingham Shum
Connected Intelligence Centre, University of Technology Sydney
Universities can see an increasing range of student and staff activity as it becomes digitally visible in their platform ecosystems. The fields of Learning Analytics and AI in Education have demonstrated the significant benefits that ethically responsible, pedagogically informed analysis of student activity data can bring, but such services are only possible because they are undeniably a form of “surveillance”, raising legitimate questions about how the use of such tools should be governed.
Our prior work has drawn on the rich concepts and methods developed in human-centred system design, and participatory/co-design, to design, deploy and validate practical tools that give a voice to non-technical stakeholders (e.g. educators; students) in shaping such systems. We are now expanding the depth and breadth of engagement that we seek, looking to the Deliberative Democracy movement for inspiration. This is a response to the crisis in confidence in how typical democratic systems engage citizens in decision making. A hallmark is the convening of a Deliberative Mini-Public (DMP) which may work at different scales (organisation; community; region; nation) and can take diverse forms (e.g. Citizens’ Juries; Citizens’ Assemblies; Consensus Conferences; Planning Cells; Deliberative Polls). DMP’s combination of stratified random sampling to ensure authentic representation, neutrally facilitated workshops, balanced expert briefings, and real support from organisational leaders, has been shown to cultivate high quality dialogue in sometimes highly conflicted settings, leading to a strong sense of ownership of the DMP's final outputs (e.g. policy recommendations).
This symposium contribution will describe how the DMP model is informing university-wide consultation on the ethical principles that should govern the use of analytics and AI around teaching and learning data.
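The stratified random sampling at the heart of a DMP can be sketched simply: draw panellists so that each stratum is represented in proportion to its share of the population. The strata, population sizes and panel size below are invented for illustration; real DMP recruitment also balances further demographics and uses civic lotteries.

```python
# Illustrative sketch of proportional stratified random sampling for a
# Deliberative Mini-Public. Strata and sizes here are hypothetical.
import random

def stratified_sample(population, strata, panel_size, seed=42):
    """population: list of person IDs; strata: parallel list of stratum labels."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    by_stratum = {}
    for person, s in zip(population, strata):
        by_stratum.setdefault(s, []).append(person)
    panel = []
    for s, members in by_stratum.items():
        # Each stratum's quota is proportional to its share of the population.
        quota = round(panel_size * len(members) / len(population))
        panel.extend(rng.sample(members, min(quota, len(members))))
    return panel

# Toy university population: 20 staff, 60 undergraduates, 20 postgraduates.
people = [f"p{i}" for i in range(100)]
labels = ["staff"] * 20 + ["undergrad"] * 60 + ["postgrad"] * 20
panel = stratified_sample(people, labels, panel_size=10)
print(len(panel))  # 10: 2 staff, 6 undergraduates, 2 postgraduates
```

Rounding can make quotas sum to slightly more or less than the panel size with awkward stratum proportions; production sampling schemes handle that remainder explicitly.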
March 2021 • 24/7 Instant Feedback on Writing: Integrating AcaWriter into yo... – Simon Buckingham Shum
Slides accompanying the monthly UTS educator briefing https://cic.uts.edu.au/events/24-7-instant-feedback-on-writing-integrating-acawriter-into-your-teaching-18-march/
What difference could instant feedback on draft writing make to your students? Over the last 5 years the Connected Intelligence Centre has been developing and piloting an automated feedback tool for academic writing (AcaWriter), working closely with academics across several faculties. The research portal documents how educators and students engage with this kind of AI, and what we’ve learnt about integrating it into teaching and assessment.
In May, AcaWriter was launched to all students along with an information portal. Now we want to start upskilling academics, tutors and learning technologists, in a monthly session to give you the chance to learn about AcaWriter, and specifically, good practices for integrating it into your subject. CIC can support you, and we hope you may be interested in co-designing publishable research.
AcaWriter handles several different ‘genres’ of writing, including reflective writing (e.g. a Reflective Essay; Reflective Blogs/Journals on internships/work-placements) and analytical writing (e.g. Argumentative Essays; Research Abstracts & Introductions). This briefing will demo AcaWriter, and show it can be embedded in student activities. We hope this sparks ideas for your own teaching, which we can discuss in more detail.
ICQE20: Quantitative Ethnography Visualizations as Tools for Thinking – Simon Buckingham Shum
Slides for this keynote talk to the 2nd International Conference on Quantitative Ethnography
http://simon.buckinghamshum.net/2021/02/icqe2020-keynote-qe-viz-as-tools-for-thinking/
24/7 Instant Feedback on Writing: Integrating AcaWriter into your Teaching – Simon Buckingham Shum
https://cic.uts.edu.au/events/24-7-instant-feedback-on-writing-integrating-acawriter-into-your-teaching-2-dec/
An introduction to argumentation for UTS:CIC PhD students (with some Learning Analytics examples, but potentially of wider interest to students/researchers)
Webinar: Learning Informatics Lab, University of Minnesota
Replay the talk: https://youtu.be/dcJZeDIMr2I
Learning Informatics
AI • Analytics • Accountability • Agency
Simon Buckingham Shum
Professor of Learning Informatics
Director, Connected Intelligence Centre
University of Technology Sydney
Abstract:
“Health Informatics”. “Urban Informatics”. “Social Informatics”. Informatics offers systemic ways of analyzing and designing the interaction of natural and artificial information processing systems. In the context of education, I will describe some Learning Informatics lenses and practices which we have developed for co-designing analytics and AI with educators and students. We have a particular focus on closing the feedback loop to equip learners with competencies to navigate a complex, uncertain future, such as critical thinking, professional reflection and teamwork. En route, we will touch on how we build educators’ trust in novel tools, our design philosophy of “embracing imperfection” in machine intelligence, and the ways that these infrastructures embody values. Speaking from the perspective of leading an institutional innovation centre in learning analytics, I hope that our experiences spark productive reflection as the UMN Learning Informatics Lab builds its program.
Biography:
Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, where he serves as inaugural director of the Connected Intelligence Centre. CIC is a transdisciplinary innovation centre, using analytics to provide new insights for university teams, with particular expertise in educational data science. Simon’s career-long fascination with software’s ability to make thinking visible has seen him active in communities including Computer-Supported Cooperative Work, Hypertext, Design Rationale, Scholarly Publishing, Semantic Web, Computational Argumentation, Educational Technology and Learning Analytics. The challenge of visualizing contested knowledge has produced several books: Visualizing Argumentation, Knowledge Cartography, and Constructing Knowledge Art. He has been active over the last decade in shaping the field of Learning Analytics, co-founding the Society for Learning Analytics Research, and catalyzing several strands: Social Learning Analytics, Discourse Analytics, Dispositional Analytics and Writing Analytics. http://Simon.BuckinghamShum.net
Despite AI’s potential for beneficial use, it creates important risks for Australians. AI, big data, and AI-informed decision making can cause exclusion, discrimination, skill loss, and economic impact; and can affect privacy, security of critical infrastructure and social well-being. What types of technology raise particular human rights concerns? Which human rights are particularly implicated?
Abstract: The emerging configuration of educational institutions, technologies, scientific practices, ethics policies and companies can be usefully framed as the emergence of a new “knowledge infrastructure” (Paul Edwards). The idea that we may be transitioning into significantly new ways of knowing – about learning and learners, teaching and teachers – is both exciting and daunting, because new knowledge infrastructures redefine roles and redistribute power, raising many important questions. What should we see when we open the black box powering analytics? How do we empower all stakeholders to engage in the design process? Since digital infrastructure fades quickly into the background, how can researchers, educators and learners engage with it mindfully? This isn’t just interesting to ponder academically: your school or university will be buying products that are being designed now. Or perhaps educational institutions should take control, building and sharing their own open source tools? How are universities accelerating the transition from analytics innovation to infrastructure? Speaking from the perspective of leading an institutional innovation centre in learning analytics, I hope that our experiences designing code, competencies and culture for learning analytics shed helpful light on these questions.
Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data – Simon Buckingham Shum
Vanessa Echeverria, Roberto Martinez-Maldonado, and Simon Buckingham Shum. 2019. Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data. In Proceedings of ACM CHI Conference (CHI’19). ACM, New York, NY, USA, Paper 39, 16 pages. https://doi.org/10.1145/3290605.3300269
Collocated, face-to-face teamwork remains a pervasive mode of working, which is hard to replicate online. Team members’ embodied, multimodal interaction with each other and artefacts has been studied by researchers, but due to its complexity, has remained opaque to automated analysis. However, the ready availability of sensors makes it increasingly affordable to instrument work spaces to study teamwork and groupwork. The possibility of visualising key aspects of a collaboration has huge potential for both academic and professional learning, but a frontline challenge is the enrichment of quantitative data streams with the qualitative insights needed to make sense of them. In response, we introduce the concept of collaboration translucence, an approach to make visible selected features of group activity. This is grounded both theoretically (in the physical, epistemic, social and affective dimensions of group activity), and contextually (using domain-specific concepts). We illustrate the approach from the automated analysis of healthcare simulations to train nurses, generating four visual proxies that fuse multimodal data into higher order patterns.
Panel held at LAK13: 3rd International Conference on Learning Analytics & Knowledge
http://simon.buckinghamshum.net/2013/03/lak13-edu-data-scientists-scarce-breed
Educational Data Scientists: A Scarce Breed
The Educational Data Scientist is currently a poorly understood, rarely sighted breed. Reports vary: some are known to be largely nocturnal, solitary creatures, while others have been reported to display highly social behaviour in broad daylight. What are their primary habits? How do they see the world? What ecological niches do they occupy now, and will predicted seismic shifts transform the landscape in their favour? What survival skills do they need when running into other breeds? Will their numbers grow, and how might they evolve? In this panel, the conference will not only hear and debate broad perspectives on the terrain, but will also encounter some real-life specimens and catch glimpses of the future ecosystem.
Keynote Address, International Conference of the Learning Sciences, London Festival of Learning
Transitioning Education’s Knowledge Infrastructure:
Shaping Design or Shouting from the Touchline?
Abstract: Bit by bit, a data-intensive substrate for education is being designed, plumbed in and switched on, powered by digital data from an expanding sensor array, data science and artificial intelligence. The configurations of educational institutions, technologies, scientific practices, ethics policies and companies can be usefully framed as the emergence of a new “knowledge infrastructure” (Paul Edwards).
The idea that we may be transitioning into significantly new ways of knowing – about learning and learners – is both exciting and daunting, because new knowledge infrastructures redefine roles and redistribute power, raising many important questions. For instance, assuming that we want to shape this infrastructure, how do we engage with the teams designing the platforms our schools and universities may be using next year? Who owns the data and algorithms, and in what senses can an analytics/AI-powered learning system be ‘accountable’? How do we empower all stakeholders to engage in the design process? Since digital infrastructure fades quickly into the background, how can researchers, educators and learners engage with it mindfully? If we want to work in “Pasteur’s Quadrant” (Donald Stokes), we must go beyond learning analytics that answer research questions, to deliver valued services to frontline educational users: but how are universities accelerating the analytics innovation to infrastructure transition?
Wrestling with these questions, the learning analytics community has evolved since its first international conference in 2011, at the intersection of learning and data science, and an explicit concern with those human factors, at many scales, that make or break the design and adoption of new educational tools. We are forging open source platforms, links with commercial providers, and collaborations with the diverse disciplines that feed into educational data science. In the context of ICLS, our dialogue with the learning sciences must continue to deepen to ensure that together we influence this knowledge infrastructure to advance the interests of all stakeholders, including learners, educators, researchers and leaders.
Speaking from the perspective of leading an institutional analytics innovation centre, I hope that our experiences designing code, competencies and culture for learning analytics shed helpful light on these questions.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Honest Reviews of Tim Han LMA Course Program.pptxtimhan337
Personal development courses are widely available today, with each one promising life-changing outcomes. Tim Han’s Life Mastery Achievers (LMA) Course has drawn a lot of interest. In addition to offering my frank assessment of Success Insider’s LMA Course, this piece examines the course’s effects via a variety of Tim Han LMA course reviews and Success Insider comments.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
The French Revolution Class 9 Study Material pdf free download
Black Box Learning Analytics? Beyond Algorithmic Transparency
1. Black Box Learning Analytics? Beyond Algorithmic Transparency
Simon Buckingham Shum
Professor of Learning Informatics
Director, UTS Connected Intelligence Centre
@sbuckshum • Simon.BuckinghamShum.net
utscic.edu.au
Learning Analytics Summer Institute, Ann Arbor, June 2017
7. Societal impact of data science is now reaching mass audiences
https://weaponsofmathdestructionbook.com • http://datasociety.net • http://artificialinteligencenow.com • https://obamawhitehouse.archives.gov/blog/2016/05/04/big-risks-big-opportunities-intersection-big-data-and-civil-rights
9. A student warning system, somewhere near you…
Student: “Being told by the LMS every time I login that I’m at grave risk of failing is stressful. I already informed Student Services about my disability and recent bereavement, and I’m working with my tutor to catch up…”
10. A student warning system, somewhere near you…
Student: “Being told by the LMS every time I login that I’m at grave risk of failing is stressful. I already informed Student Services about my disability and recent bereavement, and I’m working with my tutor to catch up…”
University: “Don’t worry, it’s nothing personal. It’s just the algorithm.”
11. A social network visualisation, somewhere near you…
Student: “Being picked out like this as some sort of loner makes me feel uncomfortable… I chat with peers all the time in the cafes.”
12. A social network visualisation, somewhere near you…
Student: “Being picked out like this as some sort of loner makes me feel uncomfortable… I chat with peers all the time in the cafes.”
University: “Don’t worry, it’s nothing personal. It’s just the algorithm.”
13. What would it mean for learning analytics to be accountable?
…there’s intense societal interest right now in algorithms
14. Algorithms are generating huge interest in the media, policy, social justice, and academia
“In an increasingly algorithmic world […] What, then, do we talk about when we talk about ‘governing algorithms’?”
http://governingalgorithms.org
15. Be afraid: unaccountable algorithms are counting our money – and our other assets
http://www.lse.ac.uk/newsAndMedia/videoAndAudio/channels/publicLecturesAndEvents/player.aspx?id=3350
19. We must be able to open the black box (Frank Pasquale)
20. A topic now engaging diverse expertise…
Statisticians / Data Miners
Programmers
Computer Scientists
Social Scientists
Designers
Historians
Lawyers
Policy Makers
Business Entrepreneurs
22. What are Algorithms?
Abstract rules for transforming data, which, to exert influence, require programming as executable code, operating on data structures, running on a platform, in an increasingly distributed architecture.
Paul Dourish: The Social Lives of Algorithms. Lecture, 23 Feb. 2016, University of Melbourne. http://www.eng.unimelb.edu.au/engage/events/lectures/dourish-2016
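Dourish's distinction between the abstract rule and its executable realisation can be made concrete with a toy sketch (all names and data here are hypothetical): the same one-line rule acquires extra, unstated behaviour the moment it is coded against a real data structure.

```python
# The abstract rule (the 'algorithm'): flag a student as at risk
# if they logged in fewer than 3 times this week.

def flag_at_risk(records, threshold=3):
    """Executable realisation: it operates on a concrete data structure
    and quietly decides that *missing* login data counts as zero --
    a choice the abstract rule never made."""
    flagged = []
    for r in records:
        logins = r.get("weekly_logins", 0)
        if logins < threshold:
            flagged.append(r["student_id"])
    return flagged

students = [
    {"student_id": "s1", "weekly_logins": 5},
    {"student_id": "s2", "weekly_logins": 1},
    {"student_id": "s3"},  # no login data recorded at all
]
print(flag_at_risk(students))  # → ['s2', 's3']
```

Student s3 is flagged not because of anything they did, but because of how the data structure handles absence: exactly the kind of gap between rule and running code the slide is pointing at.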
23. What makes algorithms opaque?
intentional secrecy
technical illiteracy
complexity of infrastructure
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3(1). http://doi.org/10.1177/2053951715622512
24. Algorithms are not to be found ‘in’ Pasquale’s black box
‘Algorithms’ are abstractions invented by computer scientists, mathematicians, etc. They may be impossible to reverse engineer from a live, material, distributed system. To understand them as phenomena, and perhaps to call them to account, we need to ask how they were conceived, with what intent, who sanctioned their implementation, etc… (Paul Dourish)
Paul Dourish: The Social Lives of Algorithms. Lecture, 23 Feb. 2016, University of Melbourne. http://www.eng.unimelb.edu.au/engage/events/lectures/dourish-2016
28. Algorithms don’t act, it’s the whole system that counts
“Algorithmic assemblages” (algorithmic code, work practices, cultural norms) are more useful units of analysis:
Aggregation: inferring associations
Classification: inferring boundaries
Temporality: inferring when to act
Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology & Human Values, 41(1), pp. 93–117.
29. Stakeholders in a Learning Analytics system
[Diagram connecting the system’s stakeholders and artefacts: Educational/Learning Sciences researcher, Learning Analytics researcher, Programmer, Educator, Learner; Ethical Principles, Learning Theory, Algorithm, Software, Hardware, User Interface, Data, Educational Insights, Learning Outcomes.]
33. Accountability in terms of: Ethics of Technology
Deontological critique: do the analytics produce results that satisfy current duties/rules/policies/principles, and/or correspond with how we already classify the world?
Teleological critique: do the analytics lead to consequences that we value?
Virtues critique: what are the values implicit/explicit in the design (at every level), and do stakeholders perceive and use the system as intended? (cf. HCI Claims Analysis)
…but for an emerging field, esp. with personalisation, deontological and teleological critiques can be challenging to apply.
Ananny, M. (2016). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology & Human Values, 41(1), pp. 93–117.
35. Accountability in terms of: Computer Science
Source code availability: to what extent can developers inspect and test the source code?
Source code maintainability: how easily can a developer modify the source code?
Algorithmic Integrity: does the running code (code + data + platform) operationalise the algorithm with integrity? Is it possible to reverse engineer the algorithm from the system?
System output intelligibility: can we understand why the implemented system generates its outputs under different conditions? (a particular issue with machine learning)
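One way to probe the Algorithmic Integrity question is differential testing: run the deployed implementation against an independent reference specification of the algorithm it claims to embody. A minimal hypothetical sketch:

```python
def spec_mean(xs):
    """Independent reference specification: the arithmetic mean."""
    return sum(xs) / len(xs)

def deployed_engagement_score(xs):
    """The 'deployed' code: it silently drops zero values, a divergence
    a read-through of either function alone might not surface."""
    nonzero = [x for x in xs if x != 0]
    return sum(nonzero) / len(nonzero)

def integrity_check(impl, spec, cases):
    """Return every test input on which implementation and spec diverge."""
    return [case for case in cases if abs(impl(case) - spec(case)) > 1e-9]

cases = [[1, 2, 3], [0, 0, 4], [5]]
print(integrity_check(deployed_engagement_score, spec_mean, cases))  # → [[0, 0, 4]]
```

The divergence only appears on inputs containing zeros, which is why testing against a spec across varied conditions, not just reading the source, is part of the accountability picture.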
37. Accountability in terms of: Data Science
Model Integrity: for machine learning, do the target variables and class labels embody discriminatory bias?
Training Data Integrity: for machine learning, does the training set embody discriminatory bias?
Feature Selection Integrity: does the discriminatory power of features do justice to the complexity of the phenomenon?
Proxy Integrity: do apparent proxy features for a target quality embody discriminatory bias?
Barocas, S. and Selbst, A. (2016). Big Data’s Disparate Impact. California Law Review 104. Preprint: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899
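These integrity questions can be partially operationalised. The sketch below (hypothetical data and function names) computes the selection-rate ratio between groups, the quantity behind the "four-fifths rule" used as a conventional warning threshold in US employment-discrimination analysis of the kind Barocas and Selbst discuss.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs."""
    totals, chosen = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + (1 if selected else 0)
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below ~0.8 are a conventional warning sign."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Toy shortlisting outcomes: group A selected 8/10, group B only 4/10
data = ([("A", True)] * 8 + [("A", False)] * 2 +
        [("B", True)] * 4 + [("B", False)] * 6)
print(round(disparate_impact_ratio(data, protected="B", reference="A"), 2))  # → 0.5
```

A passing ratio does not prove fairness, and a failing one does not locate the cause (model, training data, features, or proxies); it is one audit probe among the four listed above.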
38. Algorithmic shortlisting for interview
http://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human.html?_r=0
39. (fictional scenario) Obviously no university would vet student applications by algorithm…
Ascilite 2011 Keynote, Slide 77: http://people.kmi.open.ac.uk/sbs/2011/12/learning-analytics-ascilite2011-keynote
40. Obviously no university would vet student applications by algorithm…
Stella Lowry & Gordon Macpherson, A Blot on the Profession, 296 BRITISH MEDICAL J. 657 (1988).
http://www.theguardian.com/news/datablog/2013/aug/14/problem-with-algorithms-magnifying-misbehaviour
41. Returning to algorithmic staff recruitment…
“It is dangerously naïve, however, to believe that big data will automatically eliminate human bias from the decision-making process. …inherit the prejudices of prior decision-makers …reflect the widespread biases that persist in society …discover …regularities that are really just preexisting patterns of exclusion and inequality.”
Written Testimony of Solon Barocas: U.S. Equal Employment Opportunity Commission, Meeting of July 1, 2015 - EEOC at 50: Progress and Continuing Challenges in Eradicating Employment Discrimination. http://www1.eeoc.gov//eeoc/meetings/7-1-15/barocas.cfm?renderforprint=1
42. Returning to algorithmic staff recruitment…
“…Adopted or applied without care, data mining can deny historically disadvantaged and vulnerable groups full participation in society.”
“…it can be unusually hard to identify the source of the problem, to explain it to a court, or to remedy it technically or through legal action.”
Written Testimony of Solon Barocas: U.S. Equal Employment Opportunity Commission, Meeting of July 1, 2015 - EEOC at 50: Progress and Continuing Challenges in Eradicating Employment Discrimination. http://www1.eeoc.gov//eeoc/meetings/7-1-15/barocas.cfm?renderforprint=1
44. Accountability in terms of: Human-Data Interaction
“Accountable Data Transactions”: does the data infrastructure have clear protocols for personal data requests, permission, and audit?
Social Data Infrastructure: as interactions around data are formalised and automated, can this infrastructure be held to account?
Collective Data Ownership: is collectively owned data handled in ways that preserve individual rights?
User Agency: to what degree do citizens have control over who accesses their data, and can they comprehend what analysis will be done, and by whom?
Crabtree, A. and Mortier, R. (2015). Human Data Interaction: Historical Lessons from Social Studies and CSCW. Proc. European Conference on Computer Supported Cooperative Work, Oslo (19-23 Sept 2015), Springer: London. Preprint: http://mor1.github.io/publications/pdf/ecscw15-hdi.pdf
46. Accountability in terms of: User-Centred Design
Intelligibility: what, if any, explanation can a user ask the analytic system to provide about its behaviour, can they understand the answers, and can they give feedback?
Participatory Design: to what extent are stakeholders (e.g. academics; instructors; students) involved in the system design process, and are there interfaces for them to modify the deployed system’s behaviour?
User Sensemaking: who are the target end-users, do they understand the analytic output, and can they take appropriate action based on it?
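The Intelligibility question invites a concrete reading: for simple models, the system can answer "why this score?" directly. A hypothetical sketch for a linear risk model, reporting each feature's contribution in a form a student or educator could inspect and query:

```python
def explain_linear(weights, features, feature_names):
    """Report a linear model's score and each feature's contribution,
    largest-magnitude first, as a plain-text explanation."""
    contributions = sorted(
        ((name, w * x) for name, w, x in zip(feature_names, weights, features)),
        key=lambda item: -abs(item[1]),
    )
    score = sum(c for _, c in contributions)
    lines = [f"risk score = {score:.2f}"]
    lines += [f"  {name}: {c:+.2f}" for name, c in contributions]
    return "\n".join(lines)

print(explain_linear(
    weights=[-0.5, 0.8, 0.3],  # hypothetical model
    features=[2, 1, 4],        # one student's data
    feature_names=["forum_posts", "missed_deadlines", "days_inactive"],
))
# risk score = 1.00
#   days_inactive: +1.20
#   forum_posts: -1.00
#   missed_deadlines: +0.80
```

For opaque model families (deep nets, large ensembles) no such exact decomposition exists, which is precisely why intelligibility is a design requirement rather than a free by-product.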
48. Accountability in terms of: Learning Sciences & Educational Technology
Conceptual Integrity: does the algorithm implement the intended constructs (theory/model/rubric…) with integrity?
Conceptual Integrity: is the data congruent with other sources of educational evidence?
Improved Learning: in what ways do learners benefit from using the system?
Improved Teaching: in what ways do educators benefit from using the system?
49. Worked example 1: analytics for professional, academic, reflective writing
Buckingham Shum, S., Á. Sándor, R. Goldsmith, X. Wang, R. Bass and M. McWilliams (2016). Reflecting on Reflective Writing Analytics: Assessment Challenges and Iterative Evaluation of a Prototype Tool. 6th International Learning Analytics & Knowledge Conference (LAK16), Edinburgh, UK, April 25-29 2016, ACM, New York, NY. http://dx.doi.org/10.1145/2883851.2883955
52. COMPARISON OF HUMAN VS. MACHINE
[Figure: the same essay shown side by side, human highlighting (left) vs automated highlighting (right)]
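One way such a human-vs-machine comparison can be quantified (an illustration with toy data, not the measure reported in the LAK16 paper) is sentence-level agreement, e.g. Cohen's kappa over which sentences each coder highlighted:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary label sequences: observed agreement
    corrected for the agreement expected by chance."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# 1 = the coder highlighted this sentence as reflective (toy data)
human   = [1, 1, 0, 0, 1, 0, 0, 1]
machine = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(human, machine), 2))  # → 0.5
```

Raw percent agreement here is 75%, but kappa of 0.5 shows how much of that is chance: a reminder that the choice of agreement statistic is itself an accountability decision.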
53. Example accountability analysis: writing feedback
Ethics: Does the system output results that match human analysts? Does instant feedback lead to novel benefits, or is it in fact damaging to students? What is the motivation behind the focus on reflection (e.g. rather than factual accuracy)?
Computer Science: Does the NLP platform implement the linguistic features with integrity?
Data Science: Do the linguistic features have sufficient discriminatory power? Do they ignore important qualities of value? Do they bias against certain kinds of student?
Human-Data Interaction: Do students consent to their writing being analysed?
User-Centred Design: Were students and educators involved in the design of the system? Can they make sense of the user interface and act on output?
Learning Technology: What is the educational basis for the parser rules? Does the algorithm implement the theory/rubric with integrity? How do educators and students react?
Legal: Could a student sue the university for parser errors, or discrimination?
54. Worked example 2: building and visualizing a student social network
Kitto, K. et al. (2016). The Connected Learning Analytics Toolkit. 6th International Learning Analytics & Knowledge Conference (LAK16), Edinburgh, UK, April 25-29 2016, ACM, New York, NY. http://beyondlms.org
56. Example accountability analysis: SNA visualisation
Ethics: Do users validate the social networks? Does the system lead to novel benefits? What is the motivation behind the design of the system?
Computer Science: Does the social network tool implement SNA metrics correctly?
Data Science: Is the dataset biased or incomplete in important respects? Does the social network visualization bias against certain kinds of student?
Human-Data Interaction: Should students be permitted to control their degree of visibility on different platforms? Does disclosing peer data to a student violate peers’ rights?
User-Centred Design: Were students and educators involved in the design of the system? Can they make sense of the user interface and act on output?
Learning Technology: What is the educational rationale for making social ties visible? Do the selected user actions implement this with integrity? Do students find it helpful?
Legal: Could a student sue for discrimination due to the network map?
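The Computer Science and Data Science questions above can be illustrated together in a hypothetical sketch: degree centrality is easy to implement correctly, yet a student whose interaction happens off-platform (the "cafes" complaint earlier in the deck) still scores zero.

```python
from collections import defaultdict

def degree_centrality(edges, nodes):
    """Normalised degree centrality over an undirected edge list."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {v: degree[v] / (n - 1) for v in nodes}

nodes = ["ana", "ben", "cho", "dee"]
edges = [("ana", "ben"), ("ana", "cho"), ("ben", "cho")]  # forum replies only

print(degree_centrality(edges, nodes))
# dee scores 0.0: the platform never saw her face-to-face collaboration
```

The metric is implemented with integrity, so the Computer Science lens passes; it is the Data Science lens (dataset completeness) that catches the problem of labelling "dee" a loner.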
58. Participatory design methods that build trust
Carlos G. Prieto-Alvarez, Roberto Martinez-Maldonado, Theresa Anderson (In Press). Co-designing in Learning Analytics: Tools and Techniques. In: Jason Lodge, Jared Cooney Horvath & Linda Corrin (Eds.), From Data and Analytics to the Classroom: Translating Learning Analytics for Teachers.
59. Communicating the algorithm to educators and students?
Milligan, S. and Oliveira, E. (2017). Design features that make assessment analytics trustworthy: a case study. DesignLAK17 Workshop, LAK17, Vancouver, March 2017. https://sites.google.com/site/designlak17
60. “Capability to learn in scaled environments”: make the mapping from clicks to constructs explicit
Milligan, S. and Griffin, P. (2016). Understanding learning and learning design in MOOCs: A measurement-based interpretation. Journal of Learning Analytics, 3(2), 88–115. http://dx.doi.org/10.18608/jla.2016.32.5
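Milligan and Griffin's advice to make the clicks-to-constructs mapping explicit might be realised, in one hypothetical design, as a declared, inspectable table rather than logic buried in scoring code:

```python
# The mapping is data, not code: educators and auditors can read,
# question, and amend it without touching the scoring logic.
# All event names, constructs, and weights below are invented.
CLICK_TO_CONSTRUCT = {
    "forum_reply":   ("social_learning", 2.0),
    "video_rewatch": ("self_regulation", 1.0),
    "quiz_retry":    ("persistence",     1.5),
}

def construct_scores(events):
    scores = {}
    for event in events:
        if event in CLICK_TO_CONSTRUCT:  # unmapped clicks are visibly ignored
            construct, weight = CLICK_TO_CONSTRUCT[event]
            scores[construct] = scores.get(construct, 0.0) + weight
    return scores

log = ["forum_reply", "quiz_retry", "quiz_retry", "page_view"]
print(construct_scores(log))  # → {'social_learning': 2.0, 'persistence': 3.0}
```

Because the mapping sits in one declared structure, the auditing question "which clicks count as evidence of which construct, and with what weight?" has a direct, contestable answer.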
61. ETHICAL DESIGN CRITIQUE (EDC) PROCESS
Prototype Analytics Tool received -> EDC Review Document prepared -> EDC Workshop records issues -> Core Team reviews & responds -> EDC Workshop participants approve -> CIO, DVC(ES) & (CS) Signoff -> Analytics Tool deployed
Participants: CIC team; Pro-VC Education; Equity & Diversity Unit; Planning & Quality Unit; IT Division; Director, Risk; Director, Teaching Innovation; Data Privacy Officer; Indigenous Students Centre
63. EDC: ETHICAL DESIGN CRITIQUE
The EDC Template: <insert screenshot of dashboard element>
EDC team is invited to comment on:
Strengths, e.g.
• Data is at an appropriate level that it does not disclose inappropriate information about a cohort or individuals
64. EDC: ETHICAL DESIGN CRITIQUE
Indicate concerns, e.g.
• Data is displayed from incomplete sources, or has been filtered in ways that users might not expect (Specify why this violates expectations)
• Data is shown for which there is no apparent reasonable use (Recommend removal)
• There is scope for misinterpretation due to poor design (Suggest improvement)
• This data is useful but could be inappropriately used (Caution and specify examples of inappropriate usage)
65. Conclusion: a new career in the Learning Analytics profession?
“Certified LASI Engineer”: trained to audit Learning Analytics System Integrity at multiple levels, via multiple lenses
-> New principles to guide design, vendor selection, academic peer review (and maybe legal deliberation)