Updated version of the RecSys TEL lecture I already gave as an invited talk in the UK, NL, and DE. The conclusion part is entirely new and aligned with the forthcoming Springer book on Recommender Systems for Learning, due to appear in 2012.
Types of recommender systems in information retrieval. Collaborative filtering is a very widely used method in recommender systems; content-based filtering and collaborative filtering are the two major approaches. Hybrid systems are now being employed to get better recommendations. One such method is content-boosted collaborative filtering.
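A minimal sketch of the content-boosted idea, assuming a ratings matrix with zeros for missing entries and simple item feature vectors (all names here are illustrative, not from any cited system): first densify the matrix with content-based estimates, then run user-based collaborative filtering on top.

```python
import numpy as np

def content_boosted_predict(ratings, item_features, target_user, target_item):
    """Content-boosted CF sketch: densify the sparse rating matrix with
    content-based estimates, then apply user-based CF on top."""
    n_users, _ = ratings.shape
    # Item-item similarity from content features (e.g. genre vectors).
    sim = item_features @ item_features.T
    dense = ratings.astype(float).copy()
    for u in range(n_users):
        rated = ratings[u] > 0
        if not rated.any():
            continue
        for i in np.where(~rated)[0]:
            w = sim[i, rated]
            if w.sum() > 0:
                # Content-based estimate for an unrated item.
                dense[u, i] = w @ ratings[u, rated] / w.sum()
    # User-based CF on the densified matrix: cosine-weighted average.
    v = dense[target_user]
    user_sim = dense @ v / (np.linalg.norm(dense, axis=1) * np.linalg.norm(v) + 1e-9)
    user_sim[target_user] = 0.0  # exclude the target user themselves
    col = dense[:, target_item]
    mask = col > 0
    if user_sim[mask].sum() == 0:
        return float(v[target_item])
    return float(user_sim[mask] @ col[mask] / user_sim[mask].sum())
```

The densification step is what makes the approach "content-boosted": pure CF would ignore items a neighbour never rated, while here the content model fills those gaps first.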
Past, present, and future of Recommender Systems: an industry perspective (Xavier Amatriain)
Keynote for the ACM Intelligent User Interfaces conference in 2016 in Sonoma, CA. I start with the Past by talking about the Recommender Problem and the Netflix Prize. Then I go into the Present and the Future by talking about approaches that go beyond rating prediction and ranking, finishing with some of the most important lessons learned over the years. Throughout my talk I put special emphasis on the relation between algorithms and the User Interface.
In this lecture, I will first cover recent advances in neural recommender systems, such as autoencoder-based and MLP-based recommender systems. Then, I will introduce recent achievements in automatic playlist continuation for music recommendation.
An introduction to recommendation engines and how these systems work. Both content-based and collaborative filtering models are introduced, and a hotel recommendation system is presented as a case study.
Tutorial on People Recommendations in Social Networks - ACM RecSys 2013, Hong... (Anmol Bhasin)
Tutorials at ACM RecSys 2013
Social Networks
Learning to Rank
Beyond Friendship
Pref. Handling
Beyond Friendship: The Art, Science and Applications of Recommending People to People in Social Networks
by Luiz Augusto Pizzato (University of Sydney, Australia)
& Anmol Bhasin (LinkedIn, USA)
While Recommender Systems are powerful drivers of engagement and transactional utility in social networks, people recommenders are a fairly involved and diverse subdomain. Movies are recommended to be watched and news to be read; people, however, are recommended for a plethora of reasons: to befriend or follow, to partner with, as targets for an advertisement or service, for recruiting, for romantic matches, and to join thematic interest groups.
This tutorial aims first to describe the problem domain, touch upon classical approaches like link analysis and collaborative filtering, and then take a rapid deep dive into the unique aspects of this problem space, like Reciprocity, Intent understanding of the recommender and the recommendee, Contextual people recommendations in communication flows, and Social Referrals, a paradigm for delivering recommendations using the Social Graph. These aspects will be discussed in the context of published original work developed by the authors and their collaborators, and in many cases deployed in massive-scale real-world applications on professional networks such as LinkedIn.
Introduction
The basics of Social Recommenders
People recommender systems
Special Topics in People Recommenders
Why reciprocal (people) recommenders are different to traditional (product) recommendations
Multi-Objective Optimization
Intent Understanding
Feature Engineering
Social Referral
Pathfinding
Concluding remarks
The prerequisite for this tutorial is some familiarity with the foundational Recommender Systems, Data Mining, Machine Learning and Social Network Analysis literature.
Date
Oct 13, 2013 (08:30 – 10:15)
What really are recommendation engines nowadays?
This presentation introduces the foundations of recommendation algorithms and covers common approaches as well as some of the most advanced techniques. Although more focused on efficiency than on theoretical properties, it uses basic matrix algebra and optimization-based machine learning throughout.
Table of Contents:
1. Collaborative Filtering
1.1 User-User
1.2 Item-Item
1.3 User-Item
* Matrix Factorization
* Stochastic Gradient Descent (SGD)
* Truncated Singular Value Decomposition (SVD)
* Alternating Least Squares (ALS)
* Deep Learning
2. Content Extraction
* Item-Item Similarities
* Deep Content Extraction: NLP, CNN, LSTM
3. Hybrid Models
4. In Production
4.1 Challenges
4.2 Solutions
4.3 Tools
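The pipeline in section 1.3 (matrix factorization trained with stochastic gradient descent) can be sketched in a few lines. This is a minimal illustration, not code from the presentation; the function name, hyperparameters, and the zeros-mean-missing convention are assumptions.

```python
import numpy as np

def mf_sgd(ratings, k=2, lr=0.02, reg=0.02, epochs=500, seed=0):
    """Factorize R ~ P @ Q.T with SGD; zeros in `ratings` are missing."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    observed = np.argwhere(ratings > 0)          # only fit observed cells
    for _ in range(epochs):
        for idx in rng.permutation(len(observed)):
            u, i = observed[idx]
            pu = P[u].copy()
            err = ratings[u, i] - pu @ Q[i]       # prediction error
            P[u] += lr * (err * Q[i] - reg * pu)  # gradient steps with
            Q[i] += lr * (err * pu - reg * Q[i])  # L2 regularization
    return P, Q
```

Truncated SVD and ALS from the same list optimize essentially the same factorization objective with different solvers; deep learning variants replace the dot product with a learned interaction function.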
Overview of recommender systems (also called recommendation systems): RFM concepts in brief; item-based and user-based collaborative filtering; content-based recommendation; product-association recommender systems; stereotype recommendation, with its advantages and limitations; customer lifetime; and the recommender-system analysis and solving cycle.
Recommender Systems represent one of the most widespread and impactful applications of predictive machine learning models.
Amazon, YouTube, Netflix, Facebook and many other companies generate a significant fraction of their revenues thanks to their ability to model and accurately predict users' ratings and preferences.
In this presentation we cover the following points:
→ introduction to recommender systems
→ working with explicit vs implicit feedback
→ content-based vs collaborative filtering approaches
→ user-based and item-item methods
→ machine learning and deep learning models
→ pros & cons of the methods: scalability, accuracy, explainability
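To make the user-based vs item-item distinction above concrete, here is a minimal item-item sketch on implicit feedback (a binary interaction matrix); all names are illustrative, not taken from the slides.

```python
import numpy as np

def item_item_recommend(interactions, user, top_n=2):
    """Item-item method on implicit feedback: score unseen items by the
    cosine similarity of their interaction columns to the user's items."""
    X = interactions.astype(float)
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0
    Xn = X / norms                      # column-normalized interactions
    sim = Xn.T @ Xn                     # item-item cosine similarity
    np.fill_diagonal(sim, 0.0)          # an item shouldn't vote for itself
    scores = sim @ X[user]              # aggregate over the user's items
    scores[X[user] > 0] = -np.inf       # don't re-recommend seen items
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]
```

Item-item methods tend to scale well because item catalogs are usually smaller and more stable than user bases, which relates to the scalability trade-offs listed above.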
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
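The "simple programming model" mentioned here is MapReduce: a map function emits key-value pairs, the framework shuffles and sorts them by key, and a reduce function aggregates each group. A word-count sketch in the style of Hadoop Streaming, with a tiny local driver standing in for the cluster (an illustration of the model, not the Hadoop API itself):

```python
import itertools

def mapper(line):
    # Emit (word, 1) for every word, as a Streaming mapper would print.
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    # Sum all counts for one key, as the reducer for that key would.
    return word, sum(counts)

def run_local(lines):
    """Local stand-in for the cluster: sort pairs by key (the 'shuffle'
    phase), then run the reducer once per distinct key."""
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return dict(reducer(k, (c for _, c in group))
                for k, group in itertools.groupby(pairs, key=lambda kv: kv[0]))
```

On a real cluster the same mapper and reducer run in parallel across machines, with Hadoop handling the shuffle, retries, and data locality.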
This is a second version of the slides to support my presentation at Forth Valley College, incorporating Margaret McKay's slides on accessibility and inclusion.
Presentation held at Web Analytics Wednesday in Stockholm on the 15th of February 2009 by Jesper Åström.
For commentary, please see jesperastrom.com or e-mail jesper.joakim.astrom@gmail.com.
Twitter 101 for Small Business presented by Ilona Olayan from Social Strategy1 and Hakan Degirmenci from Twitter and hosted by OfficeArrow - small business community. This session is Twitter 101 and we’ll be covering the basics of Twitter including what the term hashtag means, but if you do happen to already have a Twitter account and are somewhat familiar with hashtags, the hashtag we are using for today’s webinar is #OASocial.
The presentation includes Twitter basics and how-tos, the big picture of the new Twitter, including Twitter statistics, and what to use Twitter for as a business.
Immersive Recommendation Workshop, NYC Media Lab'17 (Longqi Yang)
The rapid evolution of deep learning technologies and the explosion of diverse user interaction traces have brought significant challenges and opportunities to recommendation and personalized systems. In this workshop, we discussed recent trends and techniques in user modeling and presented our work on immersive recommendation systems. These systems learn users’ preferences from diverse digital trace modalities (text, image and unstructured data streams) in a wide range of recommendation domains (creative art, food, news, and events). The workshop included a light tutorial on OpenRec, an open source framework that enables quick prototyping of complex recommender systems via modularization.
This workshop is based on research and development done at Cornell Tech as part of the Connected Experiences Lab, supported by Oath and NSF.
Mendeley: Recommendation Systems for Academic Literature (Kris Jack)
I gave this talk to an MSc class about Semantic Technologies at the Technical University of Graz (TUG) on 2012/01/12.
It presents what recommendation systems are and how they are often used before delving into how they are used at Mendeley. Real-world results from Mendeley’s article recommendation system are also presented.
The work presented here has been partially funded by the European Commission as part of the TEAM IAPP project (grant no. 251514) within the FP7 People Programme (Marie Curie).
Mendeley: crowdsourcing and recommending research on a large scale (Kris Jack)
I was invited to be the keynote speaker at a special track on Recommendation; Data Sharing and Research Practices in Science 2.0 at the I-KNOW 2011 conference (http://i-know.tugraz.at/) on 2011/09/07.
It presents the challenges involved in crowdsourcing the world's largest research catalogue and then building a recommendation service on top of it that scales to serve millions of users.
Cross-discipline collaboration benefits from group think, a consolidation of soft systems methodology and user-focused design that all starts with design thinking, which sees clients, designers, developers and information architects working together to address user problems and needs. As with any great adventure, design thinking starts with exploration and discovery. This presentation examines the high-level tenets of systems thinking, expands the scope of user thinking to include the tools and devices that users employ to discover designs, and delves into the specifics of design thinking, its methods and outcomes.
In this webinar, Prof. Hendrik Drachsler will reflect on the process of applying learning analytics solutions within higher education settings, its implications, and the critical lessons learned in the Trusted Learning Analytics research program. The talk will focus on the experience of the edutec.science research collective, consisting of researchers from the Netherlands and Germany who contribute to the Trusted Learning Analytics (TLA) research program. The TLA program aims to provide actionable and supportive feedback to students and stands in the tradition of human-centered learning analytics concepts. It thus aims to contribute to unfolding the full potential of each learner, and therefore applies sensor technology to support psychomotor skills, as well as web technology to support meta-cognitive and collaborative learning skills, with high-informative feedback methods. Prof. Drachsler applies validated measurement instruments from the field of psychometrics and investigates to what extent learning analytics interventions can reproduce the findings of these instruments. During this webinar, Prof. Drachsler will discuss the lessons learned from implementing TLA systems. He will touch on TLA prerequisites like ethics, privacy, and data protection, as well as high-informative feedback for psychomotor, collaborative, and meta-cognitive competencies, and the ongoing research towards a repository, methods, tools and skills that facilitate the uptake of TLA in Germany and the Netherlands.
Smart Speaker as Studying Assistant by Joao Pargana (Hendrik Drachsler)
The thesis by Joao Pargana followed two main goals: first, a smart speaker application was created to support learners in informal learning processes through a question-and-answer application; second, the impact of the application was tested among various users by analyzing how adoption of, and transition to, newer learning procedures can occur.
This draft code of conduct is aimed at higher education institutions that want to use Learning Analytics to improve the quality of learning and teaching. The code can serve as a template for creating organization-specific codes of conduct. At institutions planning to introduce Learning Analytics, it should be reviewed in consultation with all stakeholder groups and adapted to the goals and existing practices of the respective institution. The code was developed on the basis of an analysis of existing European codes of conduct and the legal basis applicable in Germany by the Innovationsforum Trusted Learning Analytics of the Hesse-wide project "Digital gestütztes Lehren und Lernen in Hessen" (digitally supported teaching and learning in Hesse).
Rödling, S. (2019). Entwicklung einer Applikation zum assoziativen Medien Ler... (Hendrik Drachsler)
Ziel der vorliegenden Bachelorarbeit ist es, den Einfluss von zusätzlicher am Handgelenk wahrgenommener Vibration in Verbindung mit der visuellen Darstellung eines Lerninhaltes auf den Lernerfolg zu messen. Der Lernerfolg wird hierbei durch die Lerngeschwindigkeit sowie den Umfang der Wissenskonsolidierung über die Testreihe definiert. Zu diesem Zweck wurde eine Experimentalstudie zum Assoziativen Lernen durchgeführt. Für die Studie verwendeten 33 Probanden eine App, die für die vorliegende Arbeit entwickelt wurde. Im Mittel aller Studienergebnisse wurden sowohl für die Lerngeschwindigkeit als auch für die Wissenskonsolidierung bessere Werte erzielt, wenn die Probanden die Möglichkeit hatten, den Lerninhalt sowohl visuell als auch haptisch zu erfahren. Die festgestellten Unterschiede des Lernerfolges erreichten jedoch keine statistische Signifikanz. Die Abweichungen der Ergebnisse nach der Umsetzung der vorgeschlagenen Änderungen am Studiendesign sind abzuwarten. Die Bachelorarbeit ist vor allem für den Bildungsbereich interessant.
The present bachelor thesis aims to measure the influence of vibration perceived at the wrist, in connection with the visual representation of learning content, on learning success. Learning success is defined by the learning speed and the extent of knowledge consolidation over the test series. For this purpose, an experimental study on Associative Learning was conducted, in which 33 test persons used an app that was developed for the present work. Averaged over all study results, better values were achieved for both learning speed and knowledge consolidation when the test persons could experience the learning content both visually and haptically. However, the differences in learning outcomes did not reach statistical significance. Whether the results change after the proposed revisions to the study design are implemented remains to be seen. The bachelor thesis is particularly interesting for the education sector.
E.Leute: Learning the impact of Learning Analytics with an authentic dataset (Hendrik Drachsler)
Nowadays, data sets of the interactions of users and their corresponding demographic data are becoming more and more valuable for companies and for academic institutions such as universities when optimizing their key performance indicators. Whether it is to develop a model that predicts the optimal learning path for a student or to sell customers additional products, data sets to train these models are in high demand. Despite the importance of and need for big data sets, it has still not become apparent to every decision-maker how crucial such data sets are for the future success of their operations.
The objective of this thesis is to demonstrate the use of a data set gathered from the virtual learning environment of a distance-learning university by answering a selection of questions in Learning Analytics. To this end, a real-world data set was analyzed and the selected questions were answered using state-of-the-art machine learning algorithms.
Romano, G. (2019) Dancing Trainer: A System For Humans To Learn Dancing Using... (Hendrik Drachsler)
Master's thesis by Romano, G. (2019). Dancing is the ability to feel the music and express it in rhythmic movements with the body. But learning how to dance can be challenging, because it requires proper coordination and an understanding of rhythm and beat. Dancing courses, online courses, and learning from free content are ways to learn dancing; however, solutions based on human-computer interaction are rare or missing. The Dancing Trainer (DT) is proposed as a generic solution to fill this gap. To begin with, only Salsa is implemented, but more dancing styles can be added. The DT uses the Kinect to interact multimodally with the user. Moreover, this work shows that dancing steps can be defined as gestures with the Kinect v2 to build a dancing corpus. An experiment with 25 participants was conducted to determine the user experience and the strengths and weaknesses of the DT. The outcome shows that the users liked the system and that basic dancing steps were learned.
In May 2018, the new General Data Protection Regulation (GDPR) will enter into force in the European Union. This new regulation is considered as the most modern data protection law for Big Data societies of tomorrow. The GDPR will bring major changes to data ownership and the way data can be accessed, processed, stored, and analysed in the European Union. From May 2018 onwards, data subjects gain fundamental rights such as ‘the right to access data’ or ‘the right to be forgotten’. This will force Big Data system designers to follow a privacy-by-design approach for their infrastructures and fundamentally change the way data can be treated in the European Union.
The presentation provides an overview of the Trusted Learning Analytics Programme as it has recently been initiated at the University of Frankfurt and the DIPF research institute in Germany. Educational data is a special focus of the GDPR, as it is considered highly sensitive, comparable to data from a nuclear plant. The presentation shows opportunities and challenges for using educational data for learning analytics purposes in light of the GDPR 2018.
Fighting level 3: From the LA framework to LA practice on the micro-levelHendrik Drachsler
This presentation explores shortcomings of learning analytics that hinder wide adoption in educational organisations. It is NOT about ethics and privacy; rather, it focuses on shortcomings of learning analytics for teachers and students in the classroom (micro-level). We investigated whether and to what extent learning analytics dashboards address educational concepts, map opportunities and challenges for the use of learning analytics dashboards in course design, and present an evaluation instrument for the effects of learning analytics called EFLA. EFLA can be used to measure the effects of LA tools on the teacher and student side; it is a robust but light (8-item) instrument to quickly investigate the level of adoption of learning analytics in a course (micro-level). The presentation concludes that Learning Analytics is still too much a computer science discipline and does not yet fulfill its often-claimed position in the middle space between educational and computer science research.
Presentation given at the PELARS Policy event, Brussels, 09.11.2016. A follow-up to the first LACE Policy event in April 2015. Special focus is on the exploitation and sustainability activities for LACE in the SIG LACE SoLAR.
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent (Hendrik Drachsler)
This paper presents the experiences of several Dutch projects in their application of the xAPI standard and different design patterns including the deployment of Learning Record Stores. In this paper we share insights and argue for the formation of an international Special Interest Group on interoperability issues to contribute to the Open Analytics Framework as envisioned by SoLAR and enacted by the Apereo Learning Analytics Initiative. Therefore, we provide an overview of the advantages and disadvantages of implementing the current xAPI standard by presenting projects that applied xAPI in very different ways followed by the lessons learned.
Recommendations for Open Online Education: An Algorithmic Study (Hendrik Drachsler)
Recommending courses to students in online platforms has been studied widely. Almost all studies target closed platforms that belong to a university or some other educational provider, which makes the course recommenders situation-specific. Over the last years, a demand has developed for recommender systems that suit open online platforms. Those platforms share some common characteristics, such as the lack of rich user profiles with content metadata; instead, they log user interactions within the platform, which can be used for analysis and personalization. In this paper, we investigate how user interactions and activities tracked within open online learning platforms can be used to provide recommendations. We present a study in which we investigate the application of several state-of-the-art recommender algorithms, including a graph-based recommender approach. We use data from the OpenU open online learning platform that is in use by the Open University of the Netherlands. The results show that user-based and memory-based methods perform better than model-based and factorization methods. In particular, the graph-based recommender system proves to outperform the classical approaches on prediction accuracy of recommendations in terms of recall. We conclude that, if the algorithms are chosen wisely, recommenders can contribute to a better experience for learners in open online courses.
Soude Fazeli, Enayat Rajabi, Leonardo Lezcano, Hendrik Drachsler, Peter Sloep
Privacy and Analytics – it’s a DELICATE Issue. A Checklist for Trusted Learni... (Hendrik Drachsler)
The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases has even been reversed, following concerns by governments, stakeholders and civil rights groups about privacy and ethics applied to the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution's learning support by implementing data and analytics with a view to improving student success. In this presentation, we try to get to the heart of the matter by analysing the most common views and the propositions made by the LA community to address them. We conclude with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics.
DELICATE checklist - to establish trusted Learning Analytics (Hendrik Drachsler)
The DELICATE checklist contains eight action points that should be considered by managers and decision makers planning the implementation of Learning Analytics / Educational Data Mining solutions either for their own institution or with an external provider.
The eight points are:
1. Determination: Decide on the purpose of learning analytics for your institution. What aspects of learning or learner services are you trying to improve?
2. Explain: Define the scope of data collection and usage. Who has a need to have access to the data or the results? Who manages the datasets? On what criteria?
3. Legitimate: Explain how you operate within the legal frameworks, refer to the essential legislation. Is the data collection excessive, random, or fit for purpose?
4. Involve: Talk to stakeholders and give assurances about the data distribution and use. Give as much control as possible to data subjects (permission architecture), and provide access to their data for the individuals.
5. Consent: Seek consent through clear consent questions. Provide an opt-out option.
6. Anonymise: De-identify individuals as much as possible, aggregate data into meta-models.
7. Technical aspects: Monitor who has access to data, especially in areas with high staff turnover. Establish data storage to high security standards.
8. External partners: Make sure externals provide highest data security standards. Ensure data is only used for intended purposes and not passed on to third parties.
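As a hypothetical illustration of points 6 and 7 (not part of the checklist itself), identifiers can be replaced with salted one-way hashes and results released only as sufficiently large aggregates; the function names, the salt handling, and the group-size threshold below are all assumptions.

```python
import hashlib
import statistics

SALT = "replace-with-a-secret-salt"  # assumption: kept secret and rotated per policy

def pseudonymize(user_id, salt=SALT):
    """Point 6 (Anonymise): replace a direct identifier with a salted
    one-way hash, so records can be linked without exposing identity."""
    return hashlib.sha256((salt + str(user_id)).encode()).hexdigest()[:12]

def aggregate_scores(scores, min_group_size=5):
    """Release aggregates only for groups large enough to resist
    re-identification; smaller groups are suppressed entirely."""
    if len(scores) < min_group_size:
        return None
    return {"n": len(scores), "mean": statistics.mean(scores)}
```

Pseudonymization of this kind is weaker than true anonymization (the salt holder can re-link records), which is why the checklist also demands access monitoring and secure storage.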
We hope that the DELICATE checklist will be a helpful instrument for any educational institution to demystify the ethics and privacy discussions around Learning Analytics. As we have tried to show in this article, there are ways to design and provide privacy conform Learning Analytics that can benefit all stakeholders and keep control with the users themselves and within the established trusted relationship between them and the institution.
Updated Flyer of the LACE project with latest tangible outcomes and collaboration possibilities.
LACE connects players in the fields of Learning Analytics (LA) and Educational Data Mining (EDM) in order to support the development of a European community and share emerging best practices.
Objectives
-------------
• Promote knowledge creation and exchange
• Increase the evidence base about Learning Analytics
• Contribute to the definition of future directions
• Build consensus on pressing topics like data interoperability, data sharing, ethics and privacy, and Learning Analytics supported instructional design
Activities
• Organise events to connect organisations that are conducting LA/EDM research
• Create and curate a knowledge base to capture evidence for the effectiveness of Learning Analytics
• Produce reviews to inform the LACE community about latest developments in the field
Presentation given at Serious Request 2015, #SR15, Heerlen.
Within the Open University we started a 12-hour lecture marathon to collect money for the charity campaign of radio station 3FM. The money collected will go to the Red Cross and support young people in conflict areas.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
RecSysTEL lecture at advanced SIKS course, NL
1. Recommender Systems for Learning
12.04.2012, Advanced SIKS course on Technology-Enhanced Learning
Landgoed Huize Bergen, Vught, the Netherlands
Hendrik Drachsler
Centre for Learning Sciences and Technology (CELSTEC)
Open University of the Netherlands
2. Goals of the lecture
1. Crash course Recommender Systems (RecSys)
2. Overview of RecSys in TEL
3. Conclusions and open research issues for RecSys in TEL
4. Introduction::Application areas
Application areas
• E-commerce websites (Amazon)
• Video and music websites (Netflix, Last.fm)
• Content websites (CNN, Google News)
• Other information systems (Zite app)
Major claims
• Highly application-oriented research area; every domain and task needs a specific RecSys
• Always built around content or products; they never exist on their own
5. Introduction::Definition
Using the opinions of a community of users to help individuals in that community to identify more effectively content of interest from a potentially overwhelming set of choices.
Resnick & Varian (1997). Recommender Systems. Communications of the ACM, 40(3).

6. Introduction::Definition
Any system that produces personalized recommendations as output or has the effect of guiding the user in a personalized way to interesting or useful objects in a large space of possible options.
Burke, R. (2002). Hybrid Recommender Systems: Survey and Experiments. User Modeling and User-Adapted Interaction, 12, pp. 331-370.
15. Introduction::Example
What did we learn from the small exercise?
• There are different kinds of recommendations
  a. People who bought X also bought Y
  b. There are options to receive even more personalized recommendations
• When registering, we have to tell the RecSys what we like (and what not). Thus, it requires information to offer suitable recommendations, and it learns our preferences.
17. Introduction::The Long Tail
“We are leaving the age of information and entering the age of recommendation.”
Anderson, C. (2004). The Long Tail. Wired Magazine.
25. Introduction::Age of RecSys?
... another 10 minutes, research on RecSys is becoming very popular.
Some examples:
– ACM RecSys conference
– ICWSM: Weblog and Social Media
– WebKDD: Web Knowledge Discovery and Data Mining
– WWW: The original WWW conference
– SIGIR: Information Retrieval
– ACM KDD: Knowledge Discovery and Data Mining
– LAK: Learning Analytics and Knowledge
– Educational Data Mining conference
– ICML: Machine Learning
– ...
... and various workshops, books, and journals.
26. Objectives of RecSys
27. Objectives::RecSys Aims
• Converting browsers into buyers
• Increasing cross-sales
• Building loyalty
Photo by markhillary
Schafer, Konstan & Riedl (1999). Recommender Systems in e-commerce. Proc. of the 1st ACM Conference on Electronic Commerce, Denver, Colorado, pp. 158-169.
28. Objectives::RecSys Tasks
Find good items: presenting a ranked list of recommendations.
Find all good items: the user wants to identify all items that might be interesting, e.g. medical or legal cases.

29. Objectives::RecSys Tasks
Receive sequence of items: a sequence of related items is recommended to the user, e.g. a music recommender.
Annotation in context: the predicted usefulness of an item that the user is currently viewing, e.g. links within a website.

30. Objectives::RecSys Tasks
There are more tasks available...

Herlocker, Konstan, Borchers, & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53.
31. RecSys Technologies
1. Predict how much a user may like a certain product
2. Create a list of Top-N best items
3. Adjust its prediction based on feedback of the target user and like-minded users
Hanani et al. (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11.
32. RecSys Technologies
Just some examples; there are more technologies available.
33. Technologies::Collaborative filtering
User-based filtering (GroupLens, 1994)
Take about 20-50 people who share similar taste with you; afterwards, predict how much you might like an item depending on how much the others liked it.
You may like it because your “friends” liked it.
34. Technologies::Collaborative filtering
User-based filtering (GroupLens, 1994)
Take about 20-50 people who share similar taste with you; afterwards, predict how much you might like an item depending on how much the others liked it. You may like it because your “friends” liked it.
Item-based filtering (Amazon, 2001)
Pick from your previous list 20-50 items that share similar people with “the target item”; how much you will like the target item depends on how much the others liked those earlier items. You tend to like that item because you have liked those items.
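Both neighbourhood schemes can be sketched in a few lines of Python. The sketch below implements the user-based variant; the ratings matrix is an invented toy example (real deployments use the 20-50 neighbours mentioned on the slide and far larger data):

```python
import math

# Toy ratings matrix (user -> {item: rating}); purely hypothetical data.
ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item3": 5, "item4": 4},
    "carol": {"item1": 1, "item2": 5, "item4": 2},
}

def cosine_sim(u, v):
    """Cosine similarity of two users over the items both have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (math.sqrt(sum(ratings[u][i] ** 2 for i in common))
           * math.sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item):
    """User-based CF: similarity-weighted average of neighbours' ratings."""
    neighbours = [(cosine_sim(user, v), v) for v in ratings
                  if v != user and item in ratings[v]]
    num = sum(sim * ratings[v][item] for sim, v in neighbours)
    den = sum(abs(sim) for sim, _ in neighbours)
    return num / den if den else 0.0

print(predict("alice", "item4"))  # alice has not rated item4 yet
```

The item-based variant transposes the same idea: it computes similarities between item columns instead of user rows.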
35. Technologies::Content-based filtering
Information needs of users and characteristics of items are represented as keywords, attributes, or tags that describe past selections, e.g. via TF-IDF.
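As a rough illustration of the TF-IDF idea named on the slide, assuming invented course descriptions and a keyword profile built from a user's past selections (every profile word is assumed to occur somewhere in the corpus):

```python
import math
from collections import Counter

# Hypothetical item descriptions and a user keyword profile.
docs = {
    "course_a": "python programming data analysis",
    "course_b": "statistics data analysis research",
    "course_c": "history art renaissance painting",
}
profile = "data analysis python"

def tfidf(text, corpus):
    """TF-IDF weights for the words of `text` against the item corpus."""
    tf = Counter(text.split())
    n = len(corpus)
    return {w: tf[w] * math.log(n / sum(w in d.split() for d in corpus))
            for w in tf}

def cosine(a, b):
    num = sum(a[w] * b.get(w, 0.0) for w in a)
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0

corpus = list(docs.values())
pvec = tfidf(profile, corpus)
# Rank items by similarity between their keyword vector and the profile.
ranked = sorted(docs, key=lambda d: cosine(pvec, tfidf(docs[d], corpus)),
                reverse=True)
print(ranked)  # course_a shares the most keywords with the profile
```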
36. Technologies::Hybrid RecSys
Combination of techniques to overcome the disadvantages, and exploit the advantages, of single techniques.
Advantages
• No content analysis
• Quality improves
• No cold-start problem
• No new user / item problem
Disadvantages
• Cold-start problem
• Over-fitting
• New user / item problem
• Sparsity

37. Technologies::Hybrid RecSys
Just some examples; there are more (dis)advantages available.
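One common hybrid scheme is a weighted blend that falls back to the content-based score for cold-start items. The sketch below is an illustrative assumption, not the method of any system in the lecture; the blending rule, the constant `k`, and the numbers are invented:

```python
def hybrid_score(cf_score, content_score, n_ratings, k=10):
    """Weighted hybrid: trust the CF score more as rating evidence grows;
    fall back to the content-based score for cold-start items.
    The blending rule and k are illustrative assumptions."""
    w = n_ratings / (n_ratings + k)  # 0.0 when the item has no ratings yet
    return w * cf_score + (1 - w) * content_score

print(hybrid_score(4.5, 3.0, n_ratings=0))             # cold start: 3.0
print(round(hybrid_score(4.5, 3.0, n_ratings=90), 2))  # mostly CF: 4.35
```

This is why hybrids avoid the cold-start problem listed on the slide: an unrated item still receives a content-based score.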
38. Technologies::Overview
Hanani et al. (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11.
39. Evaluation of RecSys
40. Evaluation::General idea
Most of the time based on performance measures (“How good are your recommendations?”).
For example:
• What rating will a user give an item?
• Will the user select an item?
• What is the order of usefulness of items to a user?
Herlocker, Konstan, & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), 5-53.
42. Evaluation::Approaches
Approaches: 1. Offline study, 2. User study
Measures:
• User preference
• Prediction accuracy
• Coverage
• Confidence
• Trust
• Novelty
• Serendipity
• Diversity
• Utility
• Risk
• Robustness
• Privacy
• Adaptivity
• Scalability
43. Evaluation::Metrics
Precision – the portion of recommendations that were successful (selected by the algorithm and by the user).
Recall – the portion of relevant items selected by the algorithm compared to the total number of relevant items available.
F1 – a measure that balances Precision and Recall into a single measurement.
Gunawardana, A., & Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks. Journal of Machine Learning Research, 10(Dec), 2935-2962.

46. Evaluation::Metrics
Just some examples; there are more metrics available, like MAE and RMSE.
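Precision, recall, and F1 can be computed directly from the sets of recommended and relevant items; the sets below are hypothetical toy data:

```python
# Hypothetical outcome of one evaluation run.
recommended = {"a", "b", "c", "d"}   # items selected by the algorithm
relevant = {"b", "c", "e"}           # items the user actually found useful

hits = len(recommended & relevant)   # successful recommendations: 2
precision = hits / len(recommended)  # 2 / 4 = 0.5
recall = hits / len(relevant)        # 2 / 3
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(precision, recall, f1)
```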
47. Evaluation::Metrics
(Chart: RMSE of Pearson vs. Cosine on the Netflix and BookCrossing datasets)
Conclusion: Pearson is better than Cosine, because of fewer errors in predicting the Top-N items.

48. Evaluation::Metrics
(Chart: precision vs. recall of Pearson and Cosine on news story clicks)
Conclusion: Cosine is better than Pearson, because of higher precision and recall values on the Top-N items.

Gunawardana, A., & Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks. Journal of Machine Learning Research, 10(Dec), 2935-2962.
49. RecSys::TimeToThink
What do you expect that a RecSys for Learning should do with respect to ...
• Objectives
• Tasks
• Technology
• Evaluation
Photo: Blackmore’s custom-built LSD Drive, http://www.flickr.com/photos/rootoftwo/
50. Goals of the lecture
1. Crash course Recommender Systems (RecSys)
2. Overview of RecSys in TEL
3. Conclusions and open research issues for RecSys in TEL
52. TEL RecSys::Definition
Using the experiences of a community of learners to help individual learners in that community to identify more effectively learning content or peer students from a potentially overwhelming set of choices.
Extended from the definition of Resnick & Varian (1997). Recommender Systems. Communications of the ACM, 40(3).
58. TEL RecSys::Technologies
Drachsler, H., Pecceu, D., Arts, T., Hutten, E., Rutledge, L., Van Rosmalen, P., Hummel, H. G. K., & Koper, R. (2009). ReMashed - Recommendations for Mash-Up Personal Learning Environments. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines. Proceedings of EC-TEL 2009 (pp. 788-793). September 29 - October 2, 2009, Nice, France. Springer LNCS Vol. 5794.
60. TEL RecSys::Technologies
RecSys task: Find good items
Hybrid RecSys:
• Content-based on interests
• Collaborative filtering
Drachsler, H., Pecceu, D., Arts, T., Hutten, E., Rutledge, L., Van Rosmalen, P., Hummel, H. G. K., & Koper, R. (2009). ReMashed - Recommendations for Mash-Up Personal Learning Environments. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines. Proceedings of EC-TEL 2009 (pp. 788-793). September 29 - October 2, 2009, Nice, France. Springer LNCS Vol. 5794.
61. TEL RecSys::Tasks
Find good items, e.g. relevant items for a learning task or a learning goal.

62. TEL RecSys::Tasks
Receive sequence of items, e.g. recommend a learning path to achieve a certain competence.

63. TEL RecSys::Tasks
Annotation in context, e.g. take into account location, time, noise level, prior knowledge, and peers around.

Drachsler, H., Hummel, H., & Koper, R. (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information, 10(2).
64. Evaluation of TEL RecSys
66. TEL RecSys::Review study
Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415). Berlin: Springer.

68. TEL RecSys::Review study
Conclusions: Half of the systems (11/20) were still at the design or prototyping stage; only 9 systems were evaluated through trials with human users.
70. The TEL recommender research is a bit like this...
We need to design for each domain an appropriate recommender system that fits the goals, tasks, and particular constraints.
71. But...
“The performance results of different research efforts in recommender systems are hardly comparable.” (Manouselis et al., 2010)
Photo: Kaptain Kobold, http://www.flickr.com/photos/kaptainkobold/3203311346/
72. But...
TEL recommender experiments lack transparency and standardization. They need to be repeatable to test:
• Validity
• Verification
• Compare results
73. Data-driven Research and Learning Analytics
EATEL
Hendrik Drachsler (a), Katrien Verbert (b)
(a) CELSTEC, Open University of the Netherlands
(b) Dept. Computer Science, K.U. Leuven, Belgium
76. TEL RecSys::Evaluation/datasets
Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G., Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning. Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning (RecSysTEL), in conjunction with the 5th European Conference on Technology Enhanced Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice. September 28, 2010, Barcelona, Spain.
78. Evaluation::Metrics
MAE – Mean Absolute Error: the deviation of recommendations from the user-specified ratings. The lower the MAE, the more accurately the RecSys predicts user ratings.

79. Evaluation::Metrics
Outcomes:
• Tanimoto similarity + item-based CF was the most accurate.

80. Evaluation::Metrics
Outcomes:
• A user-based CF algorithm that predicts the top 10 most relevant items for a user has an F1 score of almost 30%.
• Implicit ratings like download rates and bookmarks can successfully be used in TEL.

Verbert, K., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., Beham, G., & Duval, E. (2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning Analytics & Knowledge, February 27 - March 1, 2011, Banff, Alberta, Canada.
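MAE, and the related RMSE used earlier in the lecture, can be sketched on invented predictions:

```python
import math

# Hypothetical user-specified ratings and RecSys predictions.
actual = [4, 3, 5, 2]
predicted = [3.5, 3, 4, 3]

errors = [p - a for p, a in zip(predicted, actual)]
mae = sum(abs(e) for e in errors) / len(errors)             # (0.5+0+1+1)/4 = 0.625
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # sqrt(2.25/4) = 0.75
print(mae, rmse)
```

RMSE penalizes large errors more strongly than MAE, which is why the two can rank algorithms differently.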
81. Goals of the lecture
1. Crash course Recommender Systems (RecSys)
2. Overview of RecSys in TEL
3. Conclusions and open research issues for RecSys in TEL
82. 10 years of TEL RecSys research in one book
Chapter 1: Background
Chapter 2: TEL context
Chapter 3: Extended survey of 42 RecSys
Chapter 4: Challenges and Outlook
Manouselis, N., Drachsler, H., Verbert, K., & Duval, E. (2012). Recommender Systems for Learning. Berlin: Springer.
91. TEL RecSys::Ideal research design
1. A selection of datasets for your RecSys task
2. An offline study of different algorithms on the datasets
3. A comprehensive controlled user study to test psychological, pedagogical and technical aspects
4. Rollout of the RecSys in real-life scenarios
92. Thank you for attending this lecture!
This slide is available at: http://www.slideshare.com/Drachsler
Email: hendrik.drachsler@ou.nl
Skype: celstec-hendrik.drachsler
Blogging at: http://www.drachsler.de
Twittering at: http://twitter.com/HDrachsler
93. TEL RecSys::TimeToThink
• Consider the Recommender System framework and imagine some great TEL RecSys that could support you in your stakeholder role
alternatively
• Name a learning task for which a TEL RecSys would be useful.