Recommending courses to students on online platforms has been studied widely. Almost all studies target closed platforms that belong to a university or another educational provider, which makes the resulting course recommenders situation-specific. In recent years, demand has grown for recommender systems that suit open online platforms. Such platforms share common characteristics, such as the lack of rich user profiles with content metadata; instead, they log user interactions within the platform, which can be used for analysis and personalization. In this paper, we investigate how user interactions and activities tracked within open online learning platforms can be used to provide recommendations. We present a study in which we investigate the application of several state-of-the-art recommender algorithms, including a graph-based recommender approach. We use data from the OpenU open online learning platform that is in use by the Open University of the Netherlands. The results show that user-based and memory-based methods perform better than model-based and factorization methods. In particular, the graph-based recommender system outperforms the classical approaches on the prediction accuracy of recommendations in terms of recall. We conclude that, if the algorithms are chosen wisely, recommenders can contribute to a better experience for learners in open online courses.
Soude Fazeli, Enayat Rajabi, Leonardo Lezcano, Hendrik Drachsler, Peter Sloep
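As a rough illustration of the kind of comparison the abstract describes, a minimal user-based nearest-neighbour recommender evaluated with leave-one-out recall@k could look like the sketch below. This is a hypothetical example, not the paper's actual setup: the similarity measure (cosine), neighbourhood size, and all data are invented for illustration.

```python
import numpy as np

def recall_at_k(interactions, k=3, n_neighbors=2):
    """Leave-one-out recall@k for a user-based kNN recommender.

    interactions: binary user-item matrix (implicit feedback).
    For each user, one interacted item is held out; we count a hit
    if it appears in the top-k recommendations."""
    n_users, _ = interactions.shape
    hits, evaluated = 0, 0
    for u in range(n_users):
        seen = np.flatnonzero(interactions[u])
        if len(seen) < 2:
            continue                                 # nothing to hold out
        held_out = seen[-1]
        train = interactions.astype(float).copy()
        train[u, held_out] = 0.0
        # cosine similarity between user u and all users
        denom = np.linalg.norm(train, axis=1) * np.linalg.norm(train[u]) + 1e-9
        sims = (train @ train[u]) / denom
        sims[u] = -1.0                               # exclude the user itself
        neighbors = np.argsort(sims)[-n_neighbors:]
        scores = sims[neighbors] @ train[neighbors]  # weighted neighbour votes
        scores[np.flatnonzero(train[u])] = -np.inf   # don't re-recommend seen items
        top_k = np.argsort(scores)[-k:]
        hits += int(held_out in top_k)
        evaluated += 1
    return hits / max(evaluated, 1)

# Toy interaction matrix: 4 users x 4 items
M = np.array([[1, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 0]])
score = recall_at_k(M, k=2)
```

A graph-based approach, as the abstract notes, replaces the direct user-user similarity with paths through a user-item graph, but the evaluation protocol stays the same.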
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent - Hendrik Drachsler
This paper presents the experiences of several Dutch projects in applying the xAPI standard and different design patterns, including the deployment of Learning Record Stores. We share insights and argue for the formation of an international Special Interest Group on interoperability issues to contribute to the Open Analytics Framework as envisioned by SoLAR and enacted by the Apereo Learning Analytics Initiative. To that end, we provide an overview of the advantages and disadvantages of implementing the current xAPI standard by presenting projects that applied xAPI in very different ways, followed by the lessons learned.
Presentation given at the PELARS Policy event, Brussels, 09.11.2016; a follow-up to the first LACE Policy event in April 2015. Special focus is on the exploitation and sustainability activities for LACE in the SIG LACE SoLAR.
What do analytics on learning analytics tell us? How can we make sense of this emerging field’s historical roots, current state, and future trends, based on how its members report and debate their research?
Challenge submissions should exploit the LAK Dataset for a meaningful purpose. This may include submissions which cover one or more of the following, non-exclusive list of topics:
Analysis & assessment of the emerging LAK community in terms of topics, people, citations or connections with other fields
Innovative applications to explore, navigate and visualise the dataset (and/or its correlation with other datasets)
Usage of the dataset as part of recommender systems
Analysis of the evolution of LAK discipline
Improvement or enrichment of the LAK Dataset
Updated Flyer of the LACE project with latest tangible outcomes and collaboration possibilities.
LACE connects players in the fields of Learning Analytics (LA) and Educational Data Mining (EDM) in order to support the development of a European community and share emerging best practices.
Objectives
-------------
• Promote knowledge creation and exchange
• Increase the evidence base about Learning Analytics
• Contribute to the definition of future directions
• Build consensus on pressing topics like data interoperability, data sharing, ethics and privacy, and Learning Analytics supported instructional design
Activities
• Organise events to connect organisations that are conducting LA/EDM research
• Create and curate a knowledge base to capture evidence for the effectiveness of Learning Analytics
• Produce reviews to inform the LACE community about latest developments in the field
DELICATE checklist - to establish trusted Learning Analytics - Hendrik Drachsler
The DELICATE checklist contains eight action points that should be considered by managers and decision makers planning the implementation of Learning Analytics / Educational Data Mining solutions either for their own institution or with an external provider.
The eight points are:
1. Determination: Decide on the purpose of learning analytics for your institution. What aspects of learning or learner services are you trying to improve?
2. Explain: Define the scope of data collection and usage. Who has a need to have access to the data or the results? Who manages the datasets? On what criteria?
3. Legitimate: Explain how you operate within the legal frameworks, refer to the essential legislation. Is the data collection excessive, random, or fit for purpose?
4. Involve: Talk to stakeholders and give assurances about the data distribution and use. Give as much control as possible to data subjects (permission architecture), and provide access to their data for the individuals.
5. Consent: Seek consent through clear consent questions. Provide an opt-out option.
6. Anonymise: De-identify individuals as much as possible, aggregate data into meta-models.
7. Technical aspects: Monitor who has access to data, especially in areas with high staff turn-over. Establish data storage to high security standards.
8. External partners: Make sure externals provide highest data security standards. Ensure data is only used for intended purposes and not passed on to third parties.
We hope that the DELICATE checklist will be a helpful instrument for any educational institution to demystify the ethics and privacy discussions around Learning Analytics. As we have tried to show in this article, there are ways to design and provide privacy-conforming Learning Analytics that can benefit all stakeholders, keeping control with the users themselves and within the established trusted relationship between them and the institution.
The presentation provides an overview of the R&D activities of the Learning Analytics topic at the Open Universiteit in October 2013.
http://portal.ou.nl/documents/363049/789b3323-d55c-4e3e-93ba-a716ade14463
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., Specht, M. (2013).
Open Education Challenge 2014: exploiting Linked Data in Educational Applicat... - Stefan Dietze
Presentation from mentoring event of Open Education Europa Challenge (http://www.openeducationchallenge.eu/) about using Linked Data in educational applications.
B2: Open Up: Open Data in the Public Sector - Marieke Guy
Parallel session [B2: Open Up: Open Data in the Public Sector] run at the Institutional Web Management Workshop 2013 (IWMW 2013) event, University of Bath on 26 - 28th June 2013.
Keynote talk to LEARN (LERU/H2020 project) on research data management. Emphasizes that the problems are cultural, not technical. Promotes modern approaches such as Git and continuous integration, announces DAT, asserts that the Right to Read is the Right to Mine, and calls for widespread development of content mining (TDM).
Meeting the Research Data Management Challenge - Rachel Bruce, Kevin Ashley, ... - Jisc
Universities and researchers need to be able to manage research data effectively to fulfil research funders' requirements and ultimately to contribute to research excellence. UK universities are comparatively well advanced in what is a global challenge, but nonetheless further advances are needed in university policy and in technical and support services. This session will share best practice in research data management and information about key tools that can help universities develop solutions; it will also inform participants about the latest Jisc initiatives to help build university research data services and shared services.
"Open Science, Open Data" training for participants of Software Writing Skills for Your Research - Workshop for Proficient, Helmholtz Centre Potsdam - GFZ German Research Centre for Geosciences, Telegrafenberg, December 16, 2015
The Needs of stakeholders in the RDM process - the role of LEARN - LEARN Project
Presentation at 3rd LEARN workshop on Research Data Management, “Make research data management policies work”
Helsinki, 28 June 2016, by Martin Moyle/Paul Ayris, UCL Library Services
Open Data in a Big Data World: easy to say, but hard to do? - LEARN Project
Presentation at 3rd LEARN workshop on Research Data Management, “Make research data management policies work”
Helsinki, 28 June 2016, by Sarah Callaghan, STFC Rutherford Appleton Laboratory
Liberating facts from the scientific literature - Jisc Digifest 2016 - Jisc
Text and data mining (TDM) techniques can be applied to a wide range of materials, from published research papers, books and theses, to cultural heritage materials, digitised collections, administrative and management reports and documentation, etc. Use cases include academic research, resource discovery and business intelligence.
This workshop will show the value and benefits of TDM techniques and demonstrate how ContentMine aims to liberate 100,000,000 facts from the scientific literature; ContentMine will provide a hands-on demo on a topical and accessible scientific/medical subject.
How Technology is Changing the Future of Learning - David Kelly
These slides were used in support of a keynote I delivered at the 2015 eACH Conference.
If you're interested in bringing this talk/workshop into your event or organization, please contact me at LnDDave@gmail.com.
An introductory course on the Scrum framework, analysing in a simple and direct way its use as a source of improvement in project delivery.
Note that these slides only serve to illustrate the course and therefore should not be treated as the sole source of knowledge.
A short presentation for the GBG Aracaju (Google Business Group Aracaju) event at the Computing Week of the Universidade Federal de Sergipe (UFS).
It shows the tool's potential for companies and its strong integration with other Google products, helping businesses achieve high productivity.
This was a presentation delivered at the 10th Northumbria Conference in York in July 2013. It provides a background, introduction and overview to the Library Analytics and Metrics Project (LAMP), on which Jisc, Mimas (University of Manchester) and the University of Huddersfield are collaborating.
The project will develop a prototype shared library analytics service for UK universities and colleges.
[DSC Europe 22] Machine learning algorithms as tools for student success pred... - DataScienceConferenc1
The goal of higher education institutions is to provide quality education to students. Predicting academic success and intervening early to help at-risk students is an important task for this purpose. This talk explores the possibilities of applying machine learning to develop predictive models of academic performance. What factors lead to success at university? Are there differences between students of different generations? Answers are given by applying machine learning algorithms to a data set of 400 students from three generations of IT studies. The results show differences between students with regard to student responsibility and regularity of class attendance, and a great potential for applying machine learning in developing predictive models.
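To make the kind of predictive modelling described in the abstract concrete, here is a deliberately simplified sketch: a logistic-regression classifier predicting pass/fail from two features. The features (attendance rate, assignment score), the toy data, and the training setup are all invented for illustration and are not the talk's actual model or data set.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit weights w (with bias) so that sigmoid(X @ w) approximates P(pass)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))     # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient step on log-loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-(Xb @ w))) >= 0.5).astype(int)

# Toy data: [attendance rate, normalized assignment score] -> passed?
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.3, 0.2],
              [0.2, 0.4], [0.7, 0.6], [0.1, 0.3]])
y = np.array([1, 1, 0, 0, 1, 0])
w = train_logreg(X, y)
preds = predict(w, X)
```

In practice, a study like the one described would use far richer features (demographics, prior grades, interaction logs), a held-out test set, and likely more expressive models; this sketch only shows the basic shape of the task.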
Overview of C-SAP open educational resources project - CSAPOER
This presentation showcases, discusses and reflects upon the work of the C-SAP "Open Educational Resources" project. Our project, "Evaluating the Practice of Opening up Resources for Learning and Teaching in the Social Sciences", was part of a pilot programme (funded by the HEA and JISC), which sought to explore issues around the sharing of educational material from a disciplinary perspective. Whilst exploring, with our academic project partners, the principles and issues around releasing educational material (institutional, contractual, administrative), we have also sought to develop some insights into the processes of sharing practice, and look forward to discussing the findings in this forum.
Open Science and Ethics studies in SLE research - davinia.hl
Beardsley, M., Santos, P., Hernández-Leo, D., Michos, K. (2019). Ethics in educational technology research: informing participants in data sharing risks. British Journal of Educational Technology, 50(3), 1019-1034, https://doi.org/10.1111/bjet.12781
Beardsley, M., Hernández-Leo, D., Ramirez, R. (2018). Seeking reproducibility: Assessing a multimodal study of the testing effect. Journal of Computer Assisted Learning, 34(4), 378-386.
Scholarly social media applications platforms for knowledge sharing and net... - tullemich
This short presentation deals with some current publishing workflows to platforms for scholarly knowledge sharing and SoMe networking. It touches upon the implications of operating in these open and networked virtual research environments (VREs), e.g. publishing open access.
Linking Heterogeneous Scholarly Data Sources in an Interoperable Setting: the... - Platforma Otwartej Nauki
“Open Research Data: Implications for Science and Society”, Warsaw, Poland, May 28–29, 2015, conference organized by the Open Science Platform — an initiative of the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw. pon.edu.pl @OpenSciPlatform #ORD2015
"The Influence of Online Studies and Information using Learning Analytics" - Fahmi Ahmed
This research will help people with inadequate knowledge to gain a better understanding of online study or e-learning. Through this study, the social impact on online users or learners can be increased, and users can form a clear idea of online study. In this research, graphs are presented according to country, gender, age, online resources, etc., showing the impact of online study and information on online users. Learners will gain an understanding of the types of sources, their purpose, and the resources people can use in online study. From this, learners will get a guide or path showing how they can learn online easily and in a more flexible way. The outcomes are visualized using the R language and Tableau with pre-processed data.
Similar to Recommendations for Open Online Education: An Algorithmic Study
In this webinar, Prof Hendrik Drachsler will reflect on the process of applying learning analytics solutions within higher education settings, its implications, and the critical lessons learned in the Trusted Learning Analytics Research Program. The talk will focus on the experience of the edutec.science research collective, consisting of researchers from the Netherlands and Germany who contribute to the Trusted Learning Analytics (TLA) research program. The TLA program aims to provide actionable and supportive feedback to students and stands in the tradition of human-centered learning analytics concepts, seeking to unfold the full potential of each learner. It therefore applies sensor technology to support psychomotor skills, as well as web technology to support meta-cognitive and collaborative learning skills, with high-informative feedback methods. Prof. Drachsler applies validated measurement instruments from the field of psychometrics and investigates to what extent Learning Analytics interventions can reproduce the findings of these instruments. During this webinar, Prof Drachsler will discuss the lessons learned from implementing TLA systems. He will touch on TLA prerequisites such as ethics, privacy, and data protection, as well as high-informative feedback for psychomotor, collaborative, and meta-cognitive competencies, and the ongoing research towards a repository, methods, tools and skills that facilitate the uptake of TLA in Germany and the Netherlands.
Smart Speaker as Studying Assistant by Joao Pargana - Hendrik Drachsler
The thesis by Joao Pargana followed two main goals: first, a smart speaker application was created to support learners in informal learning processes through a question/answer application; second, the impact of the application was tested amongst various users by analyzing how adoption of, and transition to, newer learning procedures can occur.
This draft code of conduct is aimed at higher education institutions that want to improve the quality of learning and teaching by means of Learning Analytics. The code can serve as a template for creating organization-specific codes of conduct. At institutions planning to introduce Learning Analytics, it should be reviewed through consultations with all stakeholder groups and adapted to the goals and existing practice within the respective institution. The code was developed, on the basis of an analysis of existing European codes and the legal framework applicable in Germany, by the Innovationsforum Trusted Learning Analytics of the Hesse-wide project "Digital gestütztes Lehren und Lernen in Hessen".
Abstract (English):
This code of conduct can be used as a template for creating organization-specific codes of conduct in Germany. The Code was developed on the basis of an analysis of existing European codes of conduct and the legal basis for the usage of data in higher education in Germany.
Rödling, S. (2019). Entwicklung einer Applikation zum assoziativen Medien Ler... - Hendrik Drachsler
Ziel der vorliegenden Bachelorarbeit ist es, den Einfluss von zusätzlicher am Handgelenk wahrgenommener Vibration in Verbindung mit der visuellen Darstellung eines Lerninhaltes auf den Lernerfolg zu messen. Der Lernerfolg wird hierbei durch die Lerngeschwindigkeit sowie den Umfang der Wissenskonsolidierung über die Testreihe definiert. Zu diesem Zweck wurde eine Experimentalstudie zum Assoziativen Lernen durchgeführt. Für die Studie verwendeten 33 Probanden eine App, die für die vorliegende Arbeit entwickelt wurde. Im Mittel aller Studienergebnisse wurden sowohl für die Lerngeschwindigkeit als auch für die Wissenskonsolidierung bessere Werte erzielt, wenn die Probanden die Möglichkeit hatten, den Lerninhalt sowohl visuell als auch haptisch zu erfahren. Die festgestellten Unterschiede des Lernerfolges erreichten jedoch keine statistische Signifikanz. Die Abweichungen der Ergebnisse nach der Umsetzung der vorgeschlagenen Änderungen am Studiendesign sind abzuwarten. Die Bachelorarbeit ist vor allem für den Bildungsbereich interessant.
The present bachelor thesis aims to measure the influence of vibration perceived at the wrist, in combination with the visual representation of learning content, on learning success. Learning success is defined by the learning speed and the extent of knowledge consolidation over the test series. For this purpose, an experimental study on Associative Learning was conducted, in which 33 test persons used an app developed for the present work. On average across all study results, better values were achieved for both learning speed and knowledge consolidation when the test persons could experience the learning content both visually and haptically. However, the differences in learning outcomes did not reach statistical significance. Whether the results change after implementing the proposed revisions to the study design remains to be seen. The Bachelor's thesis is particularly relevant for the education sector.
E. Leute: Learning the impact of Learning Analytics with an authentic dataset - Hendrik Drachsler
Nowadays, data sets of the interactions of users and their corresponding demographic data are becoming more and more valuable for companies and academic institutions such as universities when optimizing their key performance indicators. Whether it is to develop a model to predict the optimal learning path for a student or to sell customers additional products, data sets to train these models are in high demand. Despite the importance of and need for big data sets, it has still not become apparent to every decision-maker how crucial such data sets are for the future success of their operations.
The objective of this thesis is to demonstrate the use of a data set, gathered from the virtual learning environment of a distance learning university, by answering a selection of questions in Learning Analytics. To this end, a real-world data set was analyzed and the selected questions were answered using state-of-the-art machine learning algorithms.
Romano, G. (2019) Dancing Trainer: A System For Humans To Learn Dancing Using... - Hendrik Drachsler
Masters thesis by Romano, G. (2019). Dancing is the ability to feel the music and express it in rhythmic movements with the body. But learning how to dance can be challenging, because it requires proper coordination and an understanding of rhythm and beat. Dancing courses, online courses, or learning with free content are ways to learn dancing; however, solutions involving human-computer interaction are rare or missing. The Dancing Trainer (DT) is proposed as a generic solution to fill this gap. For the beginning, only Salsa is implemented, but more dancing styles can be added. The DT uses the Kinect to interact multimodally with the user. Moreover, this work shows that dancing steps can be defined as gestures with the Kinect v2 to build a dancing corpus. An experiment with 25 participants was conducted to determine the user experience, strengths and weaknesses of the DT. The outcome shows that the users liked the system and that basic dancing steps were learned.
In May 2018, the new General Data Protection Regulation (GDPR) will enter into force in the European Union. This new regulation is considered the most modern data protection law for the Big Data societies of tomorrow. The GDPR will bring major changes to data ownership and the way data can be accessed, processed, stored, and analysed in the European Union. From May 2018 onwards, data subjects gain fundamental rights such as 'the right to access data' or 'the right to be forgotten'. This will force Big Data system designers to follow a privacy-by-design approach for their infrastructures and fundamentally change the way data can be treated in the European Union.
The presentation provides an overview of the Trusted Learning Analytics Programme as it has recently been initiated at the University of Frankfurt and the DIPF research institute in Germany. Educational data is a special focus of the GDPR, as it is considered highly sensitive, comparable to data from a nuclear plant. The presentation shows opportunities and challenges for using educational data for learning analytics purposes in the light of the GDPR 2018.
Fighting level 3: From the LA framework to LA practice on the micro-level - Hendrik Drachsler
This presentation explores shortcomings of learning analytics that hinder wide adoption in educational organisations. It is NOT about ethics and privacy; rather, it focuses on shortcomings of learning analytics for teachers and students in the classroom (micro-level). We investigated if and to what extent learning analytics dashboards address educational concepts, map opportunities and challenges for the use of Learning Analytics dashboards in the design of courses, and present an evaluation instrument for the effects of Learning Analytics called EFLA. EFLA can be used to measure the effects of LA tools on the teacher and student side. It is a robust but light (8 items) instrument to quickly investigate the level of adoption of learning analytics in a course (micro-level). The presentation concludes that Learning Analytics is still too much a computer science discipline that does not fulfil the often claimed position of the middle space between educational and computer science research.
Privacy and Analytics – it’s a DELICATE Issue. A Checklist for Trusted Learning Analytics
Hendrik Drachsler
The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases has even been reversed, following concerns by governments, stakeholders and civil rights groups about privacy and ethics in the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution’s learning support by implementing data and analytics with a view to improving student success. In this presentation, we try to get to the heart of the matter by analysing the most common views and the propositions made by the LA community to address them. We conclude with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics.
Presentation given at Serious Request 2015, #SR15, Heerlen.
Within the Open University, we started a 12-hour lecture marathon to collect money for the charity campaign of radio station 3FM. The money collected goes to the Red Cross to support young people in conflict areas.
The Impact of Learning Analytics on the Dutch Education System
Hendrik Drachsler
The article reports the findings of a Group Concept Mapping study that was conducted within the framework of the Learning Analytics Summer Institute (LASI) in the Netherlands. Learning Analytics are expected to be beneficial for student and teacher empowerment, personalization, research on learning design, and feedback on performance. The study depicted some management and economics issues and identified some possible threats. No differences were found between novices and experts on how important and feasible the changes in education triggered by Learning Analytics are.
Paper available at: http://dl.acm.org/citation.cfm?id=2567617
Standardized Medical Handovers – How to Learn, Teach and Implement?
Hendrik Drachsler
Presentation given at a workshop at the 22nd Annual Meeting of the Gesellschaft für Medizinische Ausbildung, 27.09.2013, GMA2013, Graz, Austria.
http://portal.ou.nl/documents/363049/fd32b9eb-df7b-4b18-bf5a-d9560425625e
http://creativecommons.org/licenses/by-nc-sa/3.0/
Sopka, S., Druener, S., Stieger, L., Hynes, H., Stoyanov, S., Orrego, C., Secanell, M., Maher, B., Henn, P., Drachsler, H. (2013). Standardized Medical handovers – How to Learn, teach and implement? Workshop at Jahrestagung der Gesellschaft für Medizinische Ausbildung (Annual Meeting of the Society for Medical Education), Graz, Austria.
Hoe ziet de toekomst van Learning Analytics er uit? (What does the future of Learning Analytics look like?)
Hendrik Drachsler
Presentation given in the Dutch Masterclass: 'Hoe ziet de toekomst van Learning Analytics er uit?'
http://portal.ou.nl/documents/363049/1adc41e5-52f5-4b08-8b98-bf19b635931a
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., (September, 2013). Hoe ziet de toekomst van Learning Analytics er uit? Open Universiteit, CELSTEC, Heerlen, The Netherlands.
Presentation given at the Learning Analytics Summer Institute (LASI) to kick off the national GCM study on LA, Amsterdam, The Netherlands.
http://portal.ou.nl/documents/363049/3430aeb1-2450-4587-8f26-e56efd7b80c4
http://creativecommons.org/licenses/by-nc-sa/3.0/
Stoyanov, S., & Drachsler, H. (2013). Group Concept Mapping on Learning Analytics. Presentation given at the Learning Analytics Summer Institute (LASI) to kick off the national GCM study on LA, Amsterdam, The Netherlands.
TEL4Health research at University College Cork (UCC)
Hendrik Drachsler
Invited talk given at the Application of Science to Simulation, Education and Research on Training for Health Professionals Centre (ASSERT for Health Care).
http://portal.ou.nl/documents/363049/e42710d3-255b-46df-bcba-169f7a5e0341
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., (May, 2013). TEL4Health research at University College Cork (UCC). Invited talk given at Application of Science to Simulation, Education and Research on Training for Health Professionals Centre (ASSERT for Health Care). Cork, Ireland.
Recommendations for Open Online Education: An Algorithmic Study
1. Recommendations for Open Online Education: An Algorithmic Study
Soude Fazeli (1), Enayat Rajabi (2), Leonardo Lezcano (3), Hendrik Drachsler (1), Peter Sloep (1)
(1) Open University Netherlands, (2) Dalhousie University, (3) eBay Inc.
27.07.2016, ICALT 2016, Austin, Texas, USA
2. WhoAmI
• Hendrik Drachsler, Associate Professor Learning Technologies, @HDrachsler
• Research topics: Personalization, Recommender Systems, Learning Analytics, Mobile devices
• Application domains: Schools, HEI, Medical education
4. Context of the study
• Goal: Personalization of learning (based on prior knowledge)
• Problem: Selection from a huge variety of possibilities (information overload)
• Solution: Recommender systems that point a target user to content of interest based on her user profile
5. Problem definition
Institutional Course RecSys vs. Open Education RecSys:
• Institutional Course RecSys: rich learner and course metadata
• Open Education RecSys: sparse learner and course metadata
6. Research Question
RQ: How to recommend courses to learners in open education platforms?
7. Recommender system algorithms
1. Content-based   2. Collaborative filtering ✓
Our input data consist mainly of indirect (implicit) user ratings, so collaborative filtering is more relevant for us.
8. Recommender system algorithms
Drachsler, H., Verbert, K., Santos, O., & Manouselis, N. (2015). Recommender Systems for Learning. In 2nd Handbook on Recommender Systems. Berlin: Springer.
9. Collaborative Filtering (CF) algorithms
• Memory-based
– Use statistical approaches to infer similarity between users based on the users’ data stored in memory
– k-Nearest Neighbour method (kNN, with neighbourhood size k)
– Similarity metrics: Pearson correlation, Cosine similarity, and the Jaccard coefficient
• Model-based
– Use probabilistic approaches to create a model of users’ feedback
– Matrix factorization and Bayesian networks
– Faster than memory-based algorithms, but more costly in required resources and maintenance
In this study, we use both memory-based (user-based and item-based) and model-based algorithms to test which one performs best on the OpenU platform.
10. Hypotheses
H1: Item-based outperforms user-based approaches
H2: Model-based outperforms memory-based approaches
11. Experiment – 1. Dataset
• From the open education platform OpenU, a broad national online learning platform for lifelong learning
• Data collected from March 2009 until September 2013
• Users: OpenU users are professionals from various domains

Dataset | Users | Learning objects | Transactions | Sparsity (%)
OpenU | 3462 | 105 | 92,689 | 98.14
12. Experiment – 1. Dataset
• Figure 1: Course completion in relation to the students’ activity
• Each blue X: the Percentage of Online Interactions (POI) for a given student and a given course, relative to the highest online interactions of a student in that course
• Online interactions = a student’s contributions to chat sessions and forum messages
The course completion rate for OpenU students goes up dramatically with increases in students’ interactions (with course-mates and the academic staff).
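The POI measure described above can be sketched as follows. This is a minimal illustration of the stated definition; the function and variable names are hypothetical and not taken from the paper's code.

```python
# Percentage of Online Interactions (POI): a student's interaction count in a
# course, relative to the highest interaction count of any student in that
# course. Interaction counts = chat + forum contributions per (student, course).

def poi(interactions, student, course):
    """interactions: {(student_id, course_id): contribution count}."""
    counts = {s: n for (s, c), n in interactions.items() if c == course}
    peak = max(counts.values(), default=0)
    return counts.get(student, 0) / peak if peak else 0.0

interactions = {("s1", "c1"): 40, ("s2", "c1"): 10, ("s3", "c1"): 0}
print(poi(interactions, "s2", "c1"))  # -> 0.25 (10 out of a maximum of 40)
```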
13. Experiment – 2. Algorithms: 2.1 Memory-based
• Most CF algorithms are based on kNN methods: find like-minded users and introduce them as the target user’s nearest neighbours
• The appropriate similarity measure depends on whether the input data consists of explicit ratings (e.g. 5-star ratings) or implicit user feedback (e.g. views, downloads, clicks)
• OpenU = implicit user feedback (activities) → the Jaccard coefficient and Cosine similarity are appropriate
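The two similarity measures singled out above can be sketched for unary implicit feedback, where each user is reduced to the set of courses they interacted with. This is a simplified illustration, not the paper's implementation.

```python
# Jaccard and cosine similarity between two users' item sets (unary data).
import math

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two users' item sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def cosine(a, b):
    """For unary (0/1) data, cosine reduces to |A ∩ B| / sqrt(|A| * |B|)."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

u = {"c1", "c2", "c3"}
v = {"c2", "c3", "c4", "c5"}
print(jaccard(u, v))  # 2 / 5 = 0.4
print(cosine(u, v))   # 2 / sqrt(3 * 4) ≈ 0.577
```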
14. Experiment – 2. Algorithms: 2.2 Model-based
• Bayesian Personalized Ranking (BPR), proposed by Rendle et al.: they applied BPR to state-of-the-art matrix factorization models to improve the learning process in the Bayesian model used (BPRMF)
• MostPopular approach: makes recommendations based on the general popularity of items; items are weighted by how often they have been seen in the past
S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian Personalized Ranking from Implicit Feedback,” in UAI ’09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009, pp. 452–461.
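The MostPopular baseline described above can be sketched as follows; the names and exact tie-breaking behaviour are illustrative, not taken from the paper's implementation.

```python
# MostPopular baseline: rank items by how often they were seen in the past,
# then recommend the top-N items the target user has not interacted with yet.
from collections import Counter

def most_popular(events, seen_by_user, n=5):
    """events: iterable of (user, item) implicit-feedback tuples."""
    popularity = Counter(item for _, item in events)
    ranked = [item for item, _ in popularity.most_common()]
    return [item for item in ranked if item not in seen_by_user][:n]

events = [("u1", "c1"), ("u2", "c1"), ("u3", "c1"),
          ("u1", "c2"), ("u2", "c2"), ("u3", "c3")]
print(most_popular(events, seen_by_user={"c1"}, n=2))  # -> ['c2', 'c3']
```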
15. Experiment – 2. Algorithms: 2.3 Graph-based
• Implicit networks form a graph: nodes are users, edges are similarity relationships, edge weights are similarity values
• Improves the process of finding nearest neighbours by invoking graph search algorithms
• Memory-based and user-based
• For more information, see our EC-TEL 2014 paper:
S. Fazeli, B. Loni, H. Drachsler, and P. Sloep, “Which Recommender System Can Best Fit Social Learning Platforms?,” in 9th European Conference on Technology Enhanced Learning, EC-TEL 2014, 2014, pp. 84–97.
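A much simplified sketch of such an implicit user graph follows. The paper's actual approach uses a T-index-based neighbour selection; here plain Jaccard similarity and a fixed top-k edge limit stand in purely for illustration.

```python
# Build a user graph: nodes are users, weighted edges hold pairwise similarity.
# Nearest neighbours are then read off the strongest edges instead of scanning
# every user at recommendation time.

def build_graph(profiles, sim, top_k=2):
    """profiles: {user: set of items}. Keep only each user's top_k edges."""
    graph = {}
    for u in profiles:
        scores = [(v, sim(profiles[u], profiles[v]))
                  for v in profiles if v != u]
        scores.sort(key=lambda e: e[1], reverse=True)
        graph[u] = [e for e in scores[:top_k] if e[1] > 0]  # drop zero edges
    return graph

jaccard = lambda a, b: len(a & b) / len(a | b) if a | b else 0.0
profiles = {"u1": {"c1", "c2"}, "u2": {"c1", "c2", "c3"}, "u3": {"c4"}}
graph = build_graph(profiles, jaccard)
print(graph["u1"])  # u1's only edge is to u2; u3 shares no items with u1
```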
16. Experiment – 3. Settings
• Metrics
– Precision: the ratio of the number of relevant items recommended to the total number of recommended items
– Recall: the probability that a relevant item is recommended
– Both precision and recall range from 0 to 1
• The number of courses in this experiment is 105; thus the number of top-N items to be recommended is 5 (approx. 5% of the courses) and 10 (approx. 10% of the courses)
• For each memory-based CF algorithm, we evaluated six neighbourhood sizes (k = {5, 10, 20, 30, 50, 100})
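The two metrics defined above can be sketched per user; precision@N and recall@N would then be averaged over all test users. The names are illustrative, not the paper's evaluation code.

```python
# Precision@N and recall@N for one user: "relevant" is the user's held-out
# test items, "recommended" is the ranked recommendation list.

def precision_recall_at_n(recommended, relevant, n):
    top_n = recommended[:n]
    hits = len(set(top_n) & set(relevant))
    precision = hits / n
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["c7", "c2", "c9", "c4", "c1"]
relevant = ["c2", "c4", "c8"]
print(precision_recall_at_n(recommended, relevant, n=5))  # precision 2/5, recall 2/3
```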
17. Experiment
Which algorithm and parameters are best suited for the users of the OpenU learning platform?
1. Memory-based
• User-based with Jaccard (UB1)
• User-based with Cosine (UB2)
• Item-based with Jaccard (IB1)
• Item-based with Cosine (IB2)
2. Model-based
• MostPopular (MB1)
• Bayesian Personalized Ranking with Matrix Factorization (MB2)
3. Graph-based
• User-based with T-index (UB3)
18. Experimental study – 3. Results
(Results table: values for the highest-scoring neighbourhood size are in bold; the highest values among all are underlined.)
19. Discussion
H1: Item-based outperforms user-based methods.
• User-based CFs exceeded all expectations, contrary to what the recommender systems literature suggests.
• Item-based results were expected to trump the user-based ones, since the number of items (courses) is much smaller than the number of users in our dataset.
• Instead, user-based algorithms performed better on the OpenU data than those that make use of similarities between items (courses).
• Therefore: we reject H1.
20. Discussion
H2: Matrix factorization methods outperform memory-based methods.
• The user-based recommenders (UB1, UB2, UB3), which are memory-based, widely outperform the model-based ones (MB1, MB2).
• We expected matrix factorization (model-based CF) to perform better, since it often proves to improve the prediction accuracy of recommendations, particularly when explicit user feedback is available (e.g. 5-star ratings).
• So we also reject H2.
21. Conclusion
• This study sought to find out how best to generate personalized recommendations from user activities within an open online course platform.
• The results show that user-based and memory-based methods perform better than item-based and model-based factorization methods.
• The UB1 algorithm seems to be most suited to provide accurate recommendations to the users of our OpenU platform.
22. Ongoing and further work
1. Integrating the selected recommender algorithms in the OpenU platform to provide online recommendations
2. Studying how the graph-based approach can help to improve the process of finding like-minded neighbours in terms of social network analysis (SNA)
3. A user study to measure the novelty and serendipity of the recommendations made for OpenU users
Almost all studies on recommending courses have been on “closed online course platforms” that typically belong to a specific university; examples are CourseAgent of the University of Pittsburgh and CourseRank of Stanford University. But in open platforms such as Coursera, edX, Udacity, MiriadaX, FutureLearn or OpenU, the need arises for recommenders that are free from curricular concerns but address the individual information needs of the learners. Moreover, most existing course recommenders depend on learner or course content metadata, while in many open online course platforms no comprehensive metadata about the learners are available: typically, learners provide as little data as they can, merely to get through the sign-up procedure. The availability of rich content metadata for open online courses is often an issue as well. Meanwhile, data regarding user interactions and activities become richer and more extensive as learners spend more time online and more and more learners take advantage of open online offerings.
Which recommender algorithm performs best for learners in open education platforms?
To develop a recommender system, one first needs to find out what input data is available. The user activities in the OpenU dataset are mainly implicit, coming from tracking data such as signing up for a course or contributing to a forum. Therefore, Collaborative Filtering (CF) recommenders can be applied. Such methods make recommendations for a target user based on other users’ opinions and interests. Content-based methods should be used when, first, rich content data is available (not the case in this study) and, second, user rating information (5-star, binary, unary) is not available. However, “even if very few ratings are available, simple rating-based predictors outperform purely metadata-based ones” [9]. This is due to the difference between the item descriptions and the items themselves: users often rate how much they like an item, not its description.
Upon signing up, users receive a student ID and can register for free in any OpenU course they are interested in.
Figure 1 shows how course completion is related to the students’ activities and interactions. Each blue X marks the Percentage of Online Interactions (POI) for a given student and a given course, relative to the highest online interactions of a student in that course. Online interactions are calculated based on a student’s contributions to chat sessions and forum messages.
Most CF algorithms are based on kNN methods: they find like-minded users and introduce them as the target user’s nearest neighbours. The appropriate similarity measure depends on whether the input data consists of explicit ratings (e.g. 5-star ratings) or implicit user feedback (e.g. views, downloads, clicks). The OpenU data is implicit user feedback: (userID, itemID) tuples, where an item refers to a course in the current study; this is also known as positive-feedback-only data. Thus the Jaccard coefficient and Cosine similarity are appropriate, since they work with implicit user feedback.
Metrics: Precision is defined as the ratio of the number of relevant items recommended to the total number of recommended items. Recall shows the probability that a relevant item is recommended: the number of relevant items recommended divided by the total number of relevant items in the entire test set. Both precision and recall range from 0 to 1. The number of courses in this experiment is 105; thus the number of top-N items to be recommended is 5 (approx. 5% of the courses) and 10 (approx. 10% of the courses).
Data split: a random 80% of the data was assigned to a training set; the rest constituted the test set. For each memory-based CF algorithm, we evaluated six neighbourhood sizes (k = {5, 10, 20, 30, 50, 100}). The model-based algorithms use latent factors (f = {3, 5, 10}).
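The evaluation setup in the notes above (random 80/20 split, plus the parameter grids per algorithm family) could be sketched as follows; the seed and helper names are assumptions, since the paper does not publish its evaluation scripts.

```python
# Random 80/20 train/test split over implicit (user, item) tuples, plus the
# parameter grids evaluated per algorithm family.
import random

def split_80_20(events, seed=42):
    events = list(events)
    random.Random(seed).shuffle(events)  # seeded, reproducible shuffle
    cut = int(0.8 * len(events))
    return events[:cut], events[cut:]

events = [(f"u{u}", f"c{c}") for u in range(10) for c in range(10)]
train_set, test_set = split_80_20(events)
print(len(train_set), len(test_set))  # -> 80 20

NEIGHBOURHOOD_SIZES = [5, 10, 20, 30, 50, 100]  # memory-based kNN (k)
LATENT_FACTORS = [3, 5, 10]                     # model-based, e.g. BPRMF (f)
```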