Tutorial given at the LAK13 conference, Leuven, April 9th, 2013. The presentation is informed by WP2 of the LinkedUp project (linkedup-project.eu), which develops an Evaluation Framework for Open Web Data (Linked Data) applications for educational purposes.
A Scalable Approach for Efficiently Generating Structured Dataset Topic Profiles (Besnik Fetahu)
The increasing adoption of Linked Data principles has led to an abundance of datasets on the Web. However, take-up and reuse is hindered by the lack of descriptive information about the nature of the data, such as their topic coverage, dynamics or evolution. To address this issue, we propose an approach for creating linked dataset profiles. A profile consists of structured dataset metadata describing topics and their relevance. Profiles are generated through the configuration of techniques for resource sampling from datasets, topic extraction from reference datasets and their ranking based on graphical models. To enable a good trade-off between scalability and accuracy of generated profiles, appropriate parameters are determined experimentally. Our evaluation considers topic profiles for all accessible datasets from the Linked Open Data cloud. The results show that our approach generates accurate profiles even with comparably small sample sizes (10%) and outperforms established topic modelling approaches.
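To make the sample-extract-rank idea above concrete, here is a minimal Python sketch. It is not Fetahu et al.'s actual pipeline (which ranks topics with graphical models over a reference dataset such as DBpedia); the data, names and frequency-based ranking are invented for illustration only.

```python
import random
from collections import Counter

def profile_dataset(resource_topics, sample_fraction=0.1, seed=42):
    """Toy sketch of profile generation: draw a resource sample, collect
    topic annotations, and rank topics by their share of the sample."""
    random.seed(seed)
    resources = list(resource_topics)
    sample = random.sample(resources, max(1, int(len(resources) * sample_fraction)))
    counts = Counter(t for r in sample for t in resource_topics[r])
    total = sum(counts.values())
    # Relevance here is plain frequency; the paper ranks with graphical models.
    return [(topic, n / total) for topic, n in counts.most_common()]

# Hypothetical input: resource URI -> topics extracted from a reference dataset.
toy = {
    "ex:r1": ["dbc:Education", "dbc:Linked_data"],
    "ex:r2": ["dbc:Education"],
    "ex:r3": ["dbc:Statistics", "dbc:Linked_data"],
}
print(profile_dataset(toy, sample_fraction=1.0))
```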
Abstract: Traditional approaches to document classification require labelled data to construct reliable and accurate classifiers. Unfortunately, labelled data are rarely available, and often too costly to obtain. For a given learning task for which training data are unavailable, abundant labelled data may exist for a different but related domain. One would like to use such related labelled data as auxiliary information to accomplish the classification task in the target domain. Recently, the paradigm of transfer learning has been introduced to enable effective learning strategies when auxiliary data obey a different probability distribution. A co-clustering based classification algorithm has been previously proposed to tackle cross-domain text classification. In this work, we extend the idea underlying this approach by making the latent semantic relationship between the two domains explicit. This goal is achieved with the use of Wikipedia. As a result, the pathway that allows propagating labels between the two domains captures not only common words, but also semantic concepts based on the content of documents. We empirically demonstrate the efficacy of our semantic-based approach to cross-domain classification using a variety of real data. Keywords: Classification, Clustering, Cross-domain Text Classification, Co-clustering, Labelled Data, Traditional Approaches.
Title: Co-Clustering For Cross-Domain Text Classification
Author: Rayala Venkat, Mahanthi Kasaragadda
ISSN 2350-1022
International Journal of Recent Research in Mathematics, Computer Science and Information Technology
Paper Publications
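To illustrate the co-clustering idea in the abstract above: co-clustering groups documents and words simultaneously, so word clusters can act as a bridge between domains. The sketch below uses scikit-learn's generic SpectralCoclustering on a toy document-term matrix; it is not the paper's algorithm (which adds Wikipedia concepts and label propagation), just the basic mechanism.

```python
from sklearn.cluster import SpectralCoclustering
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus: two themes whose vocabularies barely overlap.
docs = [
    "graphics rendering gpu shader",
    "gpu shader texture rendering pipeline",
    "league goal match referee",
    "match referee penalty goal season",
]
X = CountVectorizer().fit_transform(docs)

# Cluster rows (documents) and columns (words) at the same time.
model = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
print("document clusters:", model.row_labels_)
print("word clusters:", model.column_labels_)
```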
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent... (Parang Saraf)
Abstract: Topic modeling techniques have been widely used to uncover dominant themes hidden inside an unstructured document collection. Though these techniques first originated in the probabilistic analysis of word distributions, many deep learning approaches have been adopted recently. In this paper, we propose a novel neural network based architecture that produces distributed representation of topics to capture topical themes in a dataset. Unlike many state-of-the-art techniques for generating distributed representation of words and documents that directly use neighboring words for training, we leverage the outcome of a sophisticated deep neural network to estimate the topic labels of each document. The networks, for topic modeling and generation of distributed representations, are trained concurrently in a cascaded style with better runtime without sacrificing the quality of the topics. Empirical studies reported in the paper show that the distributed representations of topics represent intuitive themes using smaller dimensions than conventional topic modeling approaches.
For more information, please visit: http://people.cs.vt.edu/parang/ or contact parang at firstname at cs vt edu
XPLODIV: An Exploitation-Exploration Aware Diversification Approach for Recom... (Andrea Barraza-Urbina)
Recommender Systems (RS) have emerged to guide users in the task of efficiently browsing/exploring a large product space, helping users to quickly identify interesting products. However, suggestions generated with traditional RS usually do not produce diverse results, though it has been argued that diversity is a desirable feature. The study of diversity-aware RS has become an important research challenge in recent years, drawing inspiration from diversification solutions for Information Retrieval (IR). However, we argue it is not enough to adapt IR techniques to RS, as they do not place the necessary importance on factors such as serendipity, novelty and discovery, which are imperative to RS. In this work, we propose a diversification technique for RS that generates a diversified list of results which not only balances the trade-off between quality (in terms of accuracy) and diversity, but also considers the trade-off between exploitation of the user profile and exploration of novel products. Our experimental evaluation shows that the proposed approach has comparable results to state-of-the-art approaches. Moreover, through control parameters, our approach can be tuned towards more explorative or exploitative recommendations.
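The quality/diversity/exploration trade-off described above can be sketched as a greedy re-ranking with control parameters. This is not the XPLODIV algorithm itself; the scoring function, weights and toy data below are illustrative assumptions.

```python
def diversify(candidates, relevance, sim, seen, k=5, lam=0.7, explore=0.2):
    """Greedy re-ranking: trade relevance against dissimilarity to items
    already picked, plus a small bonus for items the user has never seen."""
    selected, pool = [], set(candidates)
    while pool and len(selected) < k:
        def score(i):
            div = 1.0 if not selected else 1 - max(sim(i, j) for j in selected)
            bonus = explore if i not in seen else 0.0
            return lam * relevance[i] + (1 - lam) * div + bonus
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Hypothetical relevance scores and pairwise similarities.
rel = {"a": 0.9, "b": 0.85, "c": 0.8, "d": 0.3}
sim_table = {frozenset(p): s for p, s in
             [(("a", "b"), 0.95), (("a", "c"), 0.2), (("a", "d"), 0.1),
              (("b", "c"), 0.25), (("b", "d"), 0.1), (("c", "d"), 0.15)]}
print(diversify(["a", "b", "c", "d"], rel,
                lambda i, j: sim_table[frozenset((i, j))], seen={"a"}, k=3))
```

Raising `lam` makes the list more exploitative (accuracy-driven); raising `explore` pushes it towards novel, unseen items.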
Confidence in Learning Analytics aka. The Pulse of Learning Analytics (Hendrik Drachsler)
Presentation of the paper by Drachsler & Greller on Confidence in Learning Analytics, given at the LAK12 conference, April 30th 2012, Vancouver, Canada.
Data and survey available at:
http://bit.ly/la_survey
Semantic annotation is performed by first representing words and documents in the vector space model using Word2Vec and Doc2Vec implementations. The resulting vectors are taken as features for a classifier, which is trained into a model that can classify a document with ACM classification tree categories, with the help of a Wikipedia corpus.
Project Presentation: https://youtu.be/706HJteh1xc
Project Webpage: http://rohitsakala.github.io/semanticAnnotationAcmCategories/
Source Code: https://github.com/rohitsakala/semanticAnnotationAcmCategories
References:
Quoc V. Le and Tomas Mikolov, "Distributed Representations of Sentences and Documents", ICML, 2014.
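A minimal sketch of the Doc2Vec-features-into-a-classifier step described above, using gensim and scikit-learn. The training pairs, ACM category labels and hyperparameters are invented placeholders, not the project's actual data.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: (tokenised document, ACM category).
train = [
    ("neural networks learn representations".split(), "I.2"),
    ("database query optimisation index".split(), "H.2"),
    ("supervised learning classification model".split(), "I.2"),
    ("relational storage transaction schema".split(), "H.2"),
]
corpus = [TaggedDocument(words, [i]) for i, (words, _) in enumerate(train)]
d2v = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=40, seed=1)

# Document vectors become features for an ordinary classifier.
X = [d2v.infer_vector(words) for words, _ in train]
y = [label for _, label in train]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([d2v.infer_vector("deep learning image classifier".split())]))
```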
Machine Learning based Text Classification introduction (Treparel)
Introduction to Classification and Clustering for modelling Text Analytics applications. Includes: Who is Treparel / 3 types of text classification / Why perform automated text classification / Appendix: The Genius Section. Support Vector Machines (SVM).
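For reference, the SVM-based text classification mentioned above boils down to a short pipeline in scikit-learn; the example corpus and labels here are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy labelled corpus.
docs = ["cheap pills buy now", "meeting agenda attached",
        "win a free prize today", "quarterly report draft"]
labels = ["spam", "ham", "spam", "ham"]

# TF-IDF features fed into a linear Support Vector Machine.
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(docs, labels)
print(clf.predict(["free prize meeting"]))
```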
This slide introduces the concept of active learning in the field of machine learning. It explains the effectiveness of active learning and focuses on the potential of multiple oracles in active learning.
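A minimal sketch of the two ideas mentioned above: uncertainty sampling (query the point the model is least sure about) combined with multiple noisy oracles resolved by majority vote. The synthetic data, noise rates and loop budget are assumptions for illustration.

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))
true = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

def oracle(i, noise):  # a noisy annotator that sometimes flips the label
    return int(true[i]) ^ (rng.random() < noise)

# Seed with one example of each class so the classifier can be fitted.
labeled = [int(np.argmax(true == 0)), int(np.argmax(true == 1))]
labels = [0, 1]
clf = LogisticRegression()
for _ in range(20):
    clf.fit(X_pool[labeled], labels)
    proba = clf.predict_proba(X_pool)[:, 1]
    unc = np.abs(proba - 0.5)
    unc[labeled] = np.inf                          # skip labelled points
    q = int(np.argmin(unc))                        # most uncertain sample
    votes = [oracle(q, n) for n in (0.1, 0.2, 0.3)]  # ask three oracles
    labeled.append(q)
    labels.append(Counter(votes).most_common(1)[0][0])  # majority vote
print("accuracy:", clf.score(X_pool, true))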
Semantic Similarity and Selection of Resources Published According to Linked ... (Riccardo Albertoni)
The position paper discusses the potential of exploiting linked data best practices to provide metadata documenting domain-specific resources created through verbose acquisition-processing pipelines. It argues that resource selection, namely the process of choosing a set of resources suitable for a given analysis/design purpose, must be supported by a deep comparison of their metadata. The semantic similarity proposed in our previous works is discussed for this purpose, and the main issues in making it scale up to the web of data are introduced. The issues discussed contribute beyond the re-engineering of our similarity measure, since they largely apply to every tool that exploits information made available as linked data. A research plan and an exploratory phase addressing the presented issues are described, highlighting the lessons we have learned so far.
Adaptive User Feedback for IR-based Traceability Recovery (Annibale Panichella)
Traceability recovery allows software engineers to understand the interconnections among software artefacts and thus provides important support to software maintenance activities. In the last decade, Information Retrieval (IR) has been widely adopted as the core technology of semi-automatic tools that extract traceability links between artefacts according to their textual information. However, a widely known problem of IR-based methods is that some artefacts may share more words with non-related artefacts than with related ones. To overcome this problem, enhancing strategies have been proposed in the literature. One of these strategies is relevance feedback, which modifies the textual similarity according to information about links classified by the users. Even though this technique is widely used for natural language documents, previous work has demonstrated that relevance feedback is not always useful for software artefacts. In this paper, we propose an adaptive version of relevance feedback that, unlike the standard version, considers the characteristics of both (i) the software artefacts and (ii) the previously classified links when deciding whether and how to apply the feedback. An empirical evaluation conducted on three systems suggests that the adaptive relevance feedback outperforms both a pure IR-based method and the standard feedback.
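For orientation, here is the standard (non-adaptive) Rocchio relevance feedback step on TF-IDF vectors; the paper's contribution is deciding whether and how strongly to apply such a step per artefact, which this sketch does not implement. Artefact texts and coefficients are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

artefacts = ["user login authentication check",
             "render dashboard chart widget",
             "verify password credentials session"]
query = ["authentication of user credentials"]

vec = TfidfVectorizer()
A = vec.fit_transform(artefacts).toarray()
q = vec.transform(query).toarray()[0]

# Rocchio update after the user marks artefact 0 relevant, artefact 1 not.
alpha, beta, gamma = 1.0, 0.75, 0.25
q_new = alpha * q + beta * A[0] - gamma * A[1]

print(cosine_similarity([q_new], A)[0])  # re-ranked similarities
```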
What do analytics on learning analytics tell us? How can we make sense of this emerging field’s historical roots, current state, and future trends, based on how its members report and debate their research?
Challenge submissions should exploit the LAK Dataset for a meaningful purpose. This may include submissions which cover one or more topics from the following non-exclusive list:
Analysis & assessment of the emerging LAK community in terms of topics, people, citations or connections with other fields
Innovative applications to explore, navigate and visualise the dataset (and/or its correlation with other datasets)
Usage of the dataset as part of recommender systems
Analysis of the evolution of the LAK discipline
Improvement or enrichment of the LAK Dataset
The presentation provides an overview of the R&D activities of the Learning Analytics topic at the Open Universiteit in October 2013.
http://portal.ou.nl/documents/363049/789b3323-d55c-4e3e-93ba-a716ade14463
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., Specht, M. (2013).
Syllabus for the elective course "Anthropology of Sounds. Phonic Representations of Cultures". Classes will take place in the summer semester of 2010/2011 at the Institute of Ethnology and Cultural Anthropology of Adam Mickiewicz University in Poznań.
Instructors: Dr Agata Stanisz, Filip Rogalski, MA
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee... (Jehn)
This presentation gives an overview of Autonomic Computing, then shows the state of the art of Requirements Engineering for Autonomic Computing, based on four papers.
Slide deck to support a keynote at Libraries Developing Digital Literacies in Cardiff, Wales, UK on 17 July 2015. The keynote offers some personal reflections as well as some pointers to current Jisc work in the area of digital capability and related themes.
Data-centric AI and the convergence of data and model engineering: opportunit... (Paolo Missier)
A keynote talk given at the IDEAL 2023 conference (Évora, Portugal, Nov 23, 2023).
Abstract.
The past few years have seen the emergence of what the AI community calls "Data-centric AI", namely the recognition that some of the limiting factors in AI performance are in fact in the data used for training the models, as much as in the expressiveness and complexity of the models themselves. One analogy is that of a powerful engine that will only run as fast as the quality of the fuel allows. A plethora of recent literature has started to explore the connection between data and models in depth, along with startups that offer "data engineering for AI" services. Some concepts are well-known to the data engineering community, including incremental data cleaning, multi-source integration, or data bias control; others are more specific to AI applications, for instance the realisation that some samples in the training space are "easier to learn from" than others. In this "position talk" I will suggest that, from an infrastructure perspective, there is an opportunity to efficiently support patterns of complex pipelines where data and model improvements are entangled in a series of iterations. I will focus in particular on end-to-end tracking of data and model versions, as a way to support MLDev and MLOps engineers as they navigate through a complex decision space.
Multi-modal sources for predictive modeling using deep learning (Sanghamitra Deb)
Using vision-language models: is it possible to prompt them similarly to LLMs? When to use them out of the box and when to pre-train? General multi-modal models and deep learning. Machine learning metrics, feature engineering and setting up an ML problem.
By popular demand, here is a case study of my first Kaggle competition from about a year ago. Hope you find it useful. Thank you again to my fantastic team.
This talk was presented at Startup Master Class 2017 (http://aaiitkblr.org/smc/) at Christ College, Bangalore. Hosted by the IIT Kanpur Alumni Association and co-presented by the IIT KGP Alumni Association, IITACB, PanIIT, IIMA and IIMB alumni.
My co-presenter was Biswa Gourav Singh. And contributor was Navin Manaswi.
http://dataconomy.com/2017/04/history-neural-networks/ - timeline for neural networks
AI TESTING: ENSURING A GOOD DATA SPLIT BETWEEN DATA SETS (TRAINING AND TEST) ... (ijsc)
Artificial Intelligence and Machine Learning have been around for a long time. In recent years, there has been a surge in popularity of applications integrating AI and ML technology. As with traditional development, software testing is a critical component of a successful AI/ML application. The development methodology used in AI/ML contrasts significantly with traditional development, and in light of these distinctions, various software testing challenges arise. The emphasis of this paper is on the challenge of effectively splitting the data into training and testing data sets. By applying a k-Means clustering strategy to the data set, followed by a decision tree, we can significantly increase the likelihood that the training data set represents the domain of the full dataset, and thus avoid training a model that is likely to fail because it has only learned a subset of the full data domain.
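A minimal sketch of the clustering half of that idea: cluster the data first, then stratify the train/test split on cluster membership so every region of the domain appears in both sets. The decision-tree refinement from the paper is omitted, and the synthetic data and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=300, centers=4, random_state=0)

# Cluster first, then stratify the split on cluster labels so that each
# cluster (region of the data domain) is represented in train and test.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=clusters, random_state=0)

print(np.bincount(clusters))  # cluster sizes in the full data set
```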
Part of the ongoing effort with Skater for enabling better Model Interpretation for Deep Neural Network models presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
A data science observatory based on RAMP - rapid analytics and model prototyping (Akin Osman Kazakci)
RAMP approach to analytics: Rapid Analytics and Model Prototyping; collaborative data challenges with in-built data science process management tools and analytics; An observatory of data science and scientists. Presented at the Design Theory Special Interest Group of International Design Society. Mines ParisTech and Centre for Data Science.
Machine Learning and Deep Learning from Foundations to Applications Excel, R,... (Narendra Ashar)
Preparing stakeholders across the organization in Advanced Machine learning, Deep Learning, Algorithms, Machine Learning for Image Processing, Machine Learning for Text Processing, Deep Learning Applications.
Courses can be tailored for
Freshers in a corporate setting
Senior Executives
Marketing, Business Development and other staff who want a simpler view of these newer and apparently complex topics.
In this webinar, Prof. Hendrik Drachsler will reflect on the process of applying learning analytics solutions within higher education settings, its implications, and the critical lessons learned in the Trusted Learning Analytics Research Program. The talk will focus on the experience of the edutec.science research collective, consisting of researchers from the Netherlands and Germany who contribute to the Trusted Learning Analytics (TLA) research program. The TLA program aims to provide actionable and supportive feedback to students and stands in the tradition of human-centered learning analytics concepts. It thus aims to contribute to unfolding the full potential of each learner, applying sensor technology to support psychomotor skills as well as web technology to support meta-cognitive and collaborative learning skills with high-informative feedback methods. Prof. Drachsler applies validated measurement instruments from the field of psychometrics and investigates to what extent learning analytics interventions can reproduce the findings of these instruments. During this webinar, Prof. Drachsler will discuss the lessons learned from implementing TLA systems. He will touch on TLA prerequisites such as ethics, privacy, and data protection, as well as high-informative feedback for psychomotor, collaborative, and meta-cognitive competencies, and the ongoing research towards a repository of methods, tools and skills that facilitate the uptake of TLA in Germany and the Netherlands.
Smart Speaker as Studying Assistant by Joao Pargana (Hendrik Drachsler)
The thesis by Joao Pargana pursued two main goals. First, a smart speaker application was created to support learners in informal learning processes through a question/answer application. Second, the impact of the application was tested among various users by analyzing how adoption of, and transition to, newer learning procedures can occur.
This draft code of conduct is aimed at higher education institutions that want to improve the quality of learning and teaching by means of Learning Analytics. The code can serve as a template for creating organisation-specific codes of conduct. At institutions planning to introduce Learning Analytics, it should be reviewed in consultation with all stakeholder groups and adapted to the goals and existing practice within the respective institution. The code was developed by the Innovation Forum Trusted Learning Analytics of the Hesse-wide project "Digital gestütztes Lehren und Lernen in Hessen", based on an analysis of existing European codes of conduct and the legal basis applicable in Germany.
Abstract (English):
This code of conduct can be used as a template for creating organization-specific codes of conduct in Germany. The Code was developed on the basis of an analysis of existing European codes of conduct and the legal basis for the usage of data in higher education in Germany.
Rödling, S. (2019). Entwicklung einer Applikation zum assoziativen Medien Ler... (Hendrik Drachsler)
This bachelor thesis aims to measure the influence of vibration perceived at the wrist, in combination with the visual presentation of learning content, on learning success. Learning success is defined by the learning speed and the extent of knowledge consolidation over the test series. For this purpose, an experimental study on associative learning was conducted, in which 33 participants used an app developed for this work. Averaged over all study results, better values were achieved for both learning speed and knowledge consolidation when participants could experience the learning content both visually and haptically. However, the observed differences in learning success did not reach statistical significance; whether the results change once the proposed revisions to the study design are implemented remains to be seen. The thesis is particularly relevant for the education sector.
E. Leute: Learning the impact of Learning Analytics with an authentic dataset (Hendrik Drachsler)
Nowadays, data sets of the interactions of users and their corresponding demographic data are becoming more and more valuable for companies and academic institutions such as universities when optimizing their key performance indicators. Whether it is to develop a model that predicts the optimal learning path for a student or to sell customers additional products, data sets to train these models are in high demand. Despite the importance of and need for big data sets, it has still not become apparent to every decision-maker how crucial such data sets are for the future success of their operations.
The objective of this thesis is to demonstrate the use of a data set gathered from the virtual learning environment of a distance learning university by answering a selection of questions in Learning Analytics. To this end, a real-world data set was analyzed and the selected questions were answered using state-of-the-art machine learning algorithms.
Romano, G. (2019). Dancing Trainer: A System For Humans To Learn Dancing Using... (Hendrik Drachsler)
Master's thesis by Romano, G. (2019). Dancing is the ability to feel the music and express it in rhythmic movements with the body. But learning how to dance can be challenging, because it requires proper coordination and an understanding of rhythm and beat. Dancing courses, online courses or learning with free content are ways to learn dancing; however, solutions involving human-computer interaction are rare or missing. The Dancing Trainer (DT) is proposed as a generic solution to fill this gap. To begin with, only Salsa is implemented, but more dancing styles can be added. The DT uses the Kinect to interact multimodally with the user. Moreover, this work shows that dancing steps can be defined as gestures with the Kinect v2 to build a dancing corpus. An experiment with 25 participants was conducted to determine the user experience, strengths and weaknesses of the DT. The outcome shows that the users liked the system and that basic dancing steps were learned.
In May 2018, the new General Data Protection Regulation (GDPR) will enter into force in the European Union. This new regulation is considered as the most modern data protection law for Big Data societies of tomorrow. The GDPR will bring major changes to data ownership and the way data can be accessed, processed, stored, and analysed in the European Union. From May 2018 onwards, data subjects gain fundamental rights such as ‘the right to access data’ or ‘the right to be forgotten’. This will force Big Data system designers to follow a privacy-by-design approach for their infrastructures and fundamentally change the way data can be treated in the European Union.
The presentation provides an overview of the Trusted Learning Analytics Programme as it has been recently initiated at the University of Frankfurt and the DIPF research institute in Germany. Educational data is under special focus of the GDPR, as it is considered to be as highly sensitive as data from a nuclear plant. The presentation shows opportunities and challenges for using educational data for learning analytics purposes in the light of the GDPR 2018.
Fighting level 3: From the LA framework to LA practice on the micro-level (Hendrik Drachsler)
This presentation explores shortcomings of learning analytics that hinder its wide adoption in educational organisations. It is NOT about ethics and privacy; rather, it focuses on shortcomings of learning analytics for teachers and students in the classroom (micro-level). We investigated whether and to what extent learning analytics dashboards address educational concepts, map opportunities and challenges for the use of learning analytics dashboards in the design of courses, and present an evaluation instrument for the effects of learning analytics called EFLA. EFLA can be used to measure the effects of LA tools on the teacher and student side; it is a robust but light (8 items) instrument for quickly investigating the level of adoption of learning analytics in a course (micro-level). The presentation concludes that learning analytics is still too much a computer science discipline that does not fulfil the often claimed position of the middle space between educational and computer science research.
Presentation given at the PELARS Policy event, Brussels, 09.11.2016, a follow-up to the first LACE Policy event in April 2015. Special focus is on the exploitation and sustainability activities for LACE in the SIG LACE at SoLAR.
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent (Hendrik Drachsler)
This paper presents the experiences of several Dutch projects in their application of the xAPI standard and different design patterns including the deployment of Learning Record Stores. In this paper we share insights and argue for the formation of an international Special Interest Group on interoperability issues to contribute to the Open Analytics Framework as envisioned by SoLAR and enacted by the Apereo Learning Analytics Initiative. Therefore, we provide an overview of the advantages and disadvantages of implementing the current xAPI standard by presenting projects that applied xAPI in very different ways followed by the lessons learned.
Recommendations for Open Online Education: An Algorithmic Study (Hendrik Drachsler)
Recommending courses to students in online platforms has been studied widely. Almost all studies target closed platforms that belong to a university or some other educational provider, which makes the course recommenders situation-specific. Over the last years, a demand has developed for recommender systems that suit open online platforms. Those platforms have some common characteristics, such as the lack of rich user profiles with content metadata; instead, they log user interactions within the platform that can be used for analysis and personalization. In this paper, we investigate how user interactions and activities tracked within open online learning platforms can be used to provide recommendations. We present a study in which we investigate the application of several state-of-the-art recommender algorithms, including a graph-based recommender approach. We use data from the OpenU open online learning platform that is in use by the Open University of the Netherlands. The results show that user-based and memory-based methods perform better than model-based and factorization methods. In particular, the graph-based recommender system proves to outperform the classical approaches on prediction accuracy of recommendations in terms of recall. We conclude that, if the algorithms are chosen wisely, recommenders can contribute to a better experience for learners in open online courses.
Soude Fazeli, Enayat Rajabi, Leonardo Lezcano, Hendrik Drachsler, Peter Sloep
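For readers unfamiliar with the user-based approach that performed well in the study above, here is a minimal sketch of user-based collaborative filtering on implicit interaction data. The toy matrix and parameters are assumptions; this is not the paper's graph-based algorithm.

```python
import numpy as np

# Toy user-item interaction matrix (1 = user accessed the course).
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

def recommend(u, R, k=2, n=2):
    """Score items by the interactions of the k most similar users."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R @ R.T) / (norms * norms.T + 1e-9)   # cosine similarity
    sims[u, u] = -1                                # exclude the user itself
    neighbours = np.argsort(sims[u])[-k:]
    scores = sims[u, neighbours] @ R[neighbours]
    scores[R[u] > 0] = -1                          # hide already-seen items
    return np.argsort(scores)[-n:][::-1]

print(recommend(0, R))
```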
Privacy and Analytics – it's a DELICATE Issue. A Checklist for Trusted Learni... (Hendrik Drachsler)
The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases even been reversed, following concerns by governments, stakeholders and civil rights groups about privacy and ethics in the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution's learning support by implementing data and analytics with a view to improving student success. In this presentation, we try to get to the heart of the matter by analysing the most common concerns and the propositions made by the LA community to address them. We conclude with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics.
DELICATE checklist - to establish trusted Learning Analytics (Hendrik Drachsler)
The DELICATE checklist contains eight action points that should be considered by managers and decision makers planning the implementation of Learning Analytics / Educational Data Mining solutions either for their own institution or with an external provider.
The eight points are:
1. Determination: Decide on the purpose of learning analytics for your institution. What aspects of learning or learner services are you trying to improve?
2. Explain: Define the scope of data collection and usage. Who has a need to have access to the data or the results? Who manages the datasets? On what criteria?
3. Legitimate: Explain how you operate within the legal frameworks, refer to the essential legislation. Is the data collection excessive, random, or fit for purpose?
4. Involve: Talk to stakeholders and give assurances about the data distribution and use. Give as much control as possible to data subjects (permission architecture), and provide access to their data for the individuals.
5. Consent: Seek consent through clear consent questions. Provide an opt-out option.
6. Anonymise: De-identify individuals as much as possible, aggregate data into meta-models.
7. Technical aspects: Monitor who has access to data, especially in areas with high staff turnover. Establish data storage to high security standards.
8. External partners: Make sure externals provide highest data security standards. Ensure data is only used for intended purposes and not passed on to third parties.
We hope that the DELICATE checklist will be a helpful instrument for any educational institution to demystify the ethics and privacy discussions around Learning Analytics. As we have tried to show in this article, there are ways to design and provide privacy conform Learning Analytics that can benefit all stakeholders and keep control with the users themselves and within the established trusted relationship between them and the institution.
Updated Flyer of the LACE project with latest tangible outcomes and collaboration possibilities.
LACE connects players in the fields of Learning Analytics (LA) and Educational Data Mining (EDM) in order to support the development of a European community and share emerging best practices.
Objectives
-------------
• Promote knowledge creation and exchange
• Increase the evidence base about Learning Analytics
• Contribute to the definition of future directions
• Build consensus on pressing topics like data interoperability, data sharing, ethics and privacy, and Learning Analytics supported instructional design
Activities
• Organise events to connect organisations that are conducting LA/EDM research
• Create and curate a knowledge base to capture evidence for the effectiveness of Learning Analytics
• Produce reviews to inform the LACE community about latest developments in the field
Presentation given at Serious Request 2015, #SR15, Heerlen.
Within the Open University we started a 12-hour marathon lecture to collect money for the charity campaign of radio station 3FM. The collected money will go to the Red Cross to support young people in conflict areas.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
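To give a flavour of how an Object Calisthenics constraint shapes a tactical DDD pattern, here is a small illustrative sketch (in Python, with invented domain names): the "wrap all primitives" rule turns a raw amount into a value object, and the aggregate exposes behaviour rather than bare data.

```python
from dataclasses import dataclass

# "Wrap all primitives": Money is a value object, not a raw number,
# so its invariants live next to the data they protect.
@dataclass(frozen=True)
class Money:
    amount: int          # minor units, e.g. cents
    currency: str

    def __post_init__(self):
        if self.amount < 0:
            raise ValueError("amount must be non-negative")

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

# Tell-don't-ask: the aggregate exposes behaviour, not its internals.
class Order:
    def __init__(self, currency: str):
        self._total = Money(0, currency)

    def add_line(self, price: Money) -> None:
        self._total = self._total.add(price)

    def total(self) -> Money:
        return self._total

order = Order("EUR")
order.add_line(Money(1250, "EUR"))
print(order.total())
```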
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... (Jen Stirrup)
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
The new frontiers of AI in RPA with UiPath Autopilot™ (UiPathCommunity)
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of automations.
📕 Together we will look at some examples of the use of Autopilot in different tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at the SYNERGY workshop at AVI 2024, Genoa, Italy, 3rd June 2024.
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
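As mentioned above, PowSyBl ships a Python binding (the pypowsybl package). A minimal sketch of what a load-flow run can look like is below; method names follow the pypowsybl documentation at the time of writing, so treat this as an illustration rather than the webinar's exact notebook.

```python
# pip install pypowsybl
import pypowsybl as pp

network = pp.network.create_ieee14()     # bundled IEEE 14-bus test case
results = pp.loadflow.run_ac(network)    # run an AC power flow
print(results[0].status)                 # convergence status per component

# Inspect computed bus voltages (magnitude and angle).
print(network.get_buses()[["v_mag", "v_angle"]].head())
```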
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
LAK13 linkedup tutorial_evaluation_framework
1. Using Linked Data in Learning Analytics
LAK 2013 tutorial
Evaluation of Linked Data tools for Learning Analytics
Hendrik Drachsler (@hdrachsler, drachsler.de), CELSTEC, Open Universiteit Nederland, NL
Eelco Herder, L3S Research Center, DE
Mathieu d'Aquin (@mdaquin, mdaquin.net), Knowledge Media Institute, The Open University, UK
Stefan Dietze, L3S Research Center, DE
2. Example of scientific competitions
What are the evaluation criteria of Robot Wars?
Criteria:
• Damage
• Aggression
• Control
• Applause
Probabilistic combination of:
– Item-based method
– User-based method
– Matrix Factorization
– (Maybe) content-based method
3. RecSysTEL Evaluation criteria
1. Accuracy
2. Coverage
3. Precision
4. Recall
Combined approach by Drachsler et al. 2008:
1. Effectiveness of learning
2. Efficiency of learning
3. Drop-out rate
4. Satisfaction
Kirkpatrick model by Manouselis et al. 2010:
1. Reaction of learner
2. Learning improved
3. Behaviour
4. Results
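Several of the accuracy-oriented criteria on this slide are directly computable. Here is a minimal Python sketch for one user's top-N list with toy data; note that the per-list "coverage" here is a simplified stand-in for the usual catalogue-level definition.

```python
def precision_recall_coverage(recommended, relevant, catalog):
    """Accuracy-oriented criteria for a single top-N recommendation list."""
    rec, rel = set(recommended), set(relevant)
    precision = len(rec & rel) / len(rec)   # share of recommendations that hit
    recall = len(rec & rel) / len(rel)      # share of relevant items recovered
    coverage = len(rec) / len(catalog)      # simplified per-list coverage
    return precision, recall, coverage

print(precision_recall_coverage(
    recommended=["r1", "r2", "r3"],
    relevant=["r2", "r4"],
    catalog=["r%d" % i for i in range(1, 11)]))
```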
4. TEL RecSys::Review study
Conclusions: Half of the systems (11/20) were still at the design or prototyping stage; only 9 systems were evaluated through trials with human users.
Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415). Berlin: Springer.
5. The TEL recommender research is a bit like this...
"We need to design for each domain an appropriate recommender system that fits the goals and tasks."
6. TEL recommender experiments lack results transparency and standardization. They need to be repeatable to test:
• Validity
• Verification
• Compare results
"The performance of different research efforts in recommender systems are hardly comparable." (Manouselis et al., 2010)
[Image credit: Kaptain Kobold, http://www.flickr.com/photos/kaptainkobold/3203311346/]
7. Data-driven Research and Learning Analytics
EATEL
Hendrik Drachsler (a), Katrien Verbert (b)
(a) CELSTEC, Open University of the Netherlands
(b) Dept. Computer Science, K.U.Leuven, Belgium
8.
9. TEL RecSys::Evaluation/datasets
Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G., Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning. Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning (RecSysTEL), in conjunction with the 5th European Conference on Technology Enhanced Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice. September 28, 2010, Barcelona, Spain.
10. 5. Dataset Framework: dataTEL evaluation model
Datasets: Formal / Informal
• Data A: Algorithms A, B, C; Learner Models A, B; measured attributes A, B, C
• Data B: Algorithms D, E; Learner Models C, E; measured attributes A, B, C
• Data C: Algorithms B, D; Learner Models A, C; measured attributes A, B, C
11. 5. Dataset Framework: dataTEL evaluation model
In LinkedUp we have the opportunity to apply a structured approach to develop a community-accepted evaluation framework:
1. Top-down, by a literature study
2. Bottom-up, by GCM (Group Concept Mapping) with experts in the field
(The slide repeats the dataset matrix from slide 10; a small sketch of that structure follows below.)
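The matrix above lends itself to a simple machine-readable encoding. The following Python snippet is a hypothetical illustration (the names and structure are ours, not part of the dataTEL model itself) of how datasets, algorithms, learner models and measured attributes could be recorded so that studies become comparable through shared attributes:

    # Hypothetical encoding of the dataTEL evaluation matrix from slide 10.
    framework = {
        "Data A": {"algorithms": ["A", "B", "C"],
                   "models": ["Learner Model A", "Learner Model B"],
                   "attributes": ["A", "B", "C"]},
        "Data B": {"algorithms": ["D", "E"],
                   "models": ["Learner Model C", "Learner Model E"],
                   "attributes": ["A", "B", "C"]},
        "Data C": {"algorithms": ["B", "D"],
                   "models": ["Learner Model A", "Learner Model C"],
                   "attributes": ["A", "B", "C"]},
    }

    # Datasets that share measured attributes can be compared directly.
    shared = set.intersection(*(set(d["attributes"]) for d in framework.values()))
    print(shared)  # {'A', 'B', 'C'}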
13. Development of the Evaluation Framework
(Process diagram, reconstructed as a timeline:)
• P1: Initialisation (M0-M6: Preparation): draft EF; literature review, cognitive mapping
• P2: Establishment and Evaluation (M7-M18: Competition cycle): expert validation of the EF, 3x competition, review of the EF proposal, refinement, new versions of the EF; Group Concept Mapping, practical experiences and refinement
• P3: Exit and Sustainability (M18-M24: Finalising): final EF release; documentation, dissemination
14. Group Concept Mapping
• Group Concept Mapping (GCM) resembles the Post-it notes problem-solving technique and the Delphi method.
• GCM involves participants in a few simple activities (generating, sorting and rating ideas) that most people are used to.
GCM is different in two substantial ways:
1. Robust analysis (MDS and HCA): GCM takes the original participants' contributions and quantitatively aggregates them to show their collective view, as thematic clusters (a toy version of this analysis is sketched after this slide).
2. Visualisation: GCM presents the results of the analysis as conceptual maps and other graphical representations (pattern matching and go-zones).
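To make step 1 concrete, here is a toy Python sketch of the GCM analysis pipeline (our illustration, not the tool used in LinkedUp): the experts' sorting piles are aggregated into a co-occurrence matrix, multidimensional scaling (MDS) projects the statements onto a 2-D concept map, and hierarchical cluster analysis (HCA) groups them into thematic clusters:

    # Toy GCM analysis: co-sorted statements are treated as similar.
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Three experts each sort statement ids 0..4 into piles of similar meaning.
    sorts = [
        [{0, 1}, {2, 3, 4}],
        [{0, 1, 2}, {3, 4}],
        [{0, 1}, {2, 4}, {3}],
    ]
    n = 5
    co = np.zeros((n, n))
    for piles in sorts:
        for pile in piles:
            for i in pile:
                for j in pile:
                    co[i, j] += 1

    dist = 1.0 - co / len(sorts)   # rarely co-sorted statements lie far apart
    np.fill_diagonal(dist, 0.0)

    # MDS gives 2-D coordinates for the concept map ...
    points = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    # ... and Ward-linkage HCA groups the statements into thematic clusters.
    clusters = fcluster(linkage(squareform(dist), method="ward"),
                        t=2, criterion="maxclust")
    print(points.round(2), clusters)

Statements that experts frequently sorted together end up close on the map and in the same cluster, which is how the cluster maps on the following slides are produced.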
15. Group Concept Mapping
brainstorm:
• innovations in way network is delivered
• (investigate) corporate/structural alignment
• assist in the development of non-traditional partnerships (Rehab with the Medicine Community)
• expand investigation and knowledge of PSN's/PSO's
• continue STHCS sponsored forums on public health issues (medicine managed care forum)
• inventory assets of all participating agencies (providers, Venn Diagrams)
• access additional funds for telemedicine expansion
• better utilization of current technological bridge
• continued support by STHCS to member facilities
• expand and encourage utilization of interface programs to strengthen the viability and to improve the health care delivery system (i.e. teleconference)
• discussion with CCHN
sort: ...organize the issues...
(Figure: statement cards being sorted and rated, e.g. "Decide how to manage multiple tasks" (20), "Manage resources effectively" (4), "Work quickly and effectively under pressure" (49), "Organize the work when directions are not specific" (39).)
rate
19. Group Concept Mapping
..."map" the issues...
(Figure: the brainstormed statements of slide 15, sorted and projected onto a cluster map; the emerging clusters are Technology, Information Services, Community & Consumer Views, Regionalization, Management, STHCS as model, and Financing.)
20. Group Concept Mapping
...prioritize the issues...
(Figure: the rated cluster map; the clusters are Information Services, Technology, Community & Consumer Views, Regionalization, Financing, Management, Mission & Ideology, and STHCS as model.)
21. Group Concept Mapping
D2.1 Evaluation Criteria and Methods
• Invited 122 external experts
• 56 experts contributed 212 indicators for the evaluation framework
• After cleaning, 108 indicators remained
• 26 experts sorted the indicators by similarity in meaning
• 26 experts rated the indicators on priority and applicability
22. Plus Minus Interesting rating
Look at and listen to the presentation of the Evaluation Framework.
Meanwhile, create notes on:
P: Plus
M: Minus
I: Interesting
Write down everything that comes to mind; generate as many ideas as possible and do not filter your ideas.
32. WP2: Literature review
1. Literature review of suitable evaluation approaches and criteria
2. Review of related initiatives such as LinkedEducation, MULCE, E3FPLE and the SIG dataTEL
34. Many thanks for your attention!
These slides are available at: http://www.slideshare.com/Drachsler
Email: hendrik.drachsler@ou.nl
Skype: celstec-hendrik.drachsler
Blogging at: http://www.drachsler.de
Twittering at: http://twitter.com/HDrachsler