The main purpose of the current deliverable D2.2.1 is to hold the current version of the Evaluation Framework and to operationalise it into a concrete evaluation instrument for the LinkedUp challenge judges. This deliverable is not intended as an elaborate report but rather as a summary of the current version of the Evaluation Framework, based on the extensive studies in deliverable D2.1 – Evaluation Methods and Metrics. D2.2.1 will be reconsidered in the final report of WP2 to demonstrate the development of the Evaluation Framework during the life cycle of the LinkedUp project. For this purpose it is helpful to have the first version of the Evaluation Framework as a tangible outcome and a self-contained entity, as presented in this deliverable.
http://portal.ou.nl/documents/363049/27b00ab7-2c2e-4fda-90a1-6db41e6493ac
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., Greller, W., Stoyanov, S. (2013). D2.2.1 Evaluation Framework. LinkedUp project. Heerlen, The Netherlands.
LinkedTV Deliverable 2.7 - Final Linked Media Layer and Evaluation (LinkedTV)
This deliverable presents the evaluation of content annotation and content enrichment systems that are part of the final tool set developed within the LinkedTV consortium. The evaluations were performed on both the Linked News and Linked Culture trial content, as well as on other content annotated for this purpose. The evaluation spans three languages: German (Linked News), Dutch (Linked Culture) and English. Selected algorithms and tools were also subject to benchmarking in two international contests: MediaEval 2014 and TAC'14. Additionally, the Microposts 2015 NEEL Challenge is being organized with the support of LinkedTV.
D2.3.1 Evaluation results of the LinkedUp Veni competition (Hendrik Drachsler)
This document D2.3.1 is the first of three reports (to be followed by D2.3.2 and D2.3.3) of Task 2.4 - Evaluation of challenge submissions. Task 2.4 concerns the actual assessment of the participating projects within the LinkedUp Veni, Vidi and Vici competitions on the basis of the LinkedUp Evaluation Framework (D2.2.1).
In particular, we report on the outcomes of the various competitions and analyse the practical experiences of the experts with the LinkedUp Evaluation Framework.
In the current document D2.3.1 we report on the Linked Data tools and ideas that were submitted to the first data competition - Veni. In total, we received 23 submissions; 8 of them were shortlisted and invited to a poster presentation at the Open Knowledge Conference (OKCon), 3 of them were awarded at OKCon according to the LinkedUp evaluation process, and one submission received an audience award.
This deliverable provides an overview of the Veni submissions, explains the evaluation procedure that resulted in a shortlist of the best submissions, justifies the decision for the winners, and also reports on the experiences with the evaluation framework created in the previous WP2 deliverables [7][8].
http://portal.ou.nl/documents/363049/b40fb118-6e65-4875-86e9-8def1266c552
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., Stoyanov, S., Pieper, F., Guy, M. (2013). D2.3.1 Evaluation results of the LinkedUp Veni competition. LinkedUp project. Heerlen, The Netherlands.
Effort distribution, be it by phase or activity, is an important aspect of the SDLC, yet it is often overlooked in the process of cost estimation. Poor effort allocation is one of the root causes of rework, owing to early activities being insufficiently resourced. This paper presents various phase effort distribution patterns and their sources of variation. The analysis of these patterns shows some consistency in the effects of software size and team size on code and test phase distribution variations, and some considerable deviations in the design, requirements, and transition phases, compared with the recommendations of the COCOMO model. Software size, in turn, depends on the differing schemes of small-medium and medium-large companies. Finally, the paper discusses threats to validity and presents general guidelines for directing effort distribution across the various software development methods, the time duration of the SDLC phases, and the team strength. Based on these factors, effort distribution can be estimated.
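The phase-distribution idea the abstract describes can be sketched as a simple allocation of total effort across SDLC phases. The percentages below are illustrative assumptions only, loosely inspired by COCOMO-style nominal splits; they are not taken from the paper.

```python
# Illustrative sketch: distribute total project effort across SDLC phases
# using a nominal percentage split. The split values are assumptions for
# illustration, not figures from the paper or the COCOMO model itself.

NOMINAL_PHASE_SPLIT = {
    "requirements": 0.07,
    "design": 0.17,
    "code": 0.40,
    "test": 0.24,
    "transition": 0.12,
}

def distribute_effort(total_person_months, split=None):
    """Allocate total effort to phases according to a fractional split."""
    split = split or NOMINAL_PHASE_SPLIT
    assert abs(sum(split.values()) - 1.0) < 1e-9, "split must sum to 100%"
    return {phase: round(total_person_months * frac, 2)
            for phase, frac in split.items()}

# Example: a hypothetical 120 person-month project.
efforts = distribute_effort(120.0)
print(efforts)
```

In practice, the paper's point is that such a single nominal split is too coarse: the split itself should vary with software size, team size, and company scheme.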
LINKING SOFTWARE DEVELOPMENT PHASE AND PRODUCT ATTRIBUTES WITH USER EVALUATIO... (csandit)
This paper presents an evaluation methodology to reveal the relationships between the attributes of software products, the practices applied during the development phase, and the user evaluation of the products. For the case study, the games sector was chosen due to easy access to user evaluations of this type of software product. Product attributes and practices applied during the development phase were collected from the developers via questionnaires. User evaluation results were collected from a group of independent evaluators. Two bipartite networks were created using the gathered data: the first maps software products to the practices applied during the development phase, and the second maps the products to the product attributes. Based on the links, similarities were determined and subgroups of products were obtained according to selected development-phase practices. In this way, the effect of the development phase on the user evaluation was investigated.
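The bipartite product-to-practice mapping described above can be projected onto the product side by comparing practice sets pairwise, e.g. with Jaccard similarity. The product and practice names below are invented for illustration; the paper's actual similarity measure and grouping method may differ.

```python
# Hypothetical sketch of the bipartite-network idea: products linked to
# development practices, then grouped by similarity of their practice
# sets. All names here are made up for illustration.

product_practices = {
    "game_a": {"code_review", "unit_testing", "daily_builds"},
    "game_b": {"code_review", "unit_testing"},
    "game_c": {"pair_programming", "daily_builds"},
}

def jaccard(a, b):
    """Similarity of two practice sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Pairwise product similarity: one side of the bipartite graph projected
# onto the other via shared practices.
pairs = {}
products = sorted(product_practices)
for i, p in enumerate(products):
    for q in products[i + 1:]:
        pairs[(p, q)] = jaccard(product_practices[p], product_practices[q])

print(pairs)
```

Products with high pairwise similarity would then form the subgroups whose user evaluations are compared.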
The summative evaluation report by Josélia Neves presents the final results of the project, assessing its overall evolution. It considers the totality and legacy of the project, its overall successes and failures, the results of its transnational application, and its final deliverables and dissemination.
FRAMEWORKS BETWEEN COMPONENTS AND OBJECTS (acijjournal)
Before the emergence of Component-Based Frameworks, similar issues were addressed by other software development paradigms, including Object-Oriented Programming (OOP), Component-Based Development (CBD), and Object-Oriented Frameworks. In this study, these approaches, especially Object-Oriented Frameworks, are compared to Component-Based Frameworks and their relationships are discussed. The impact of different software reuse methods on architectural patterns and on support for application extension and versioning is examined. It is concluded that many of the mechanisms provided by Component-Based Frameworks can be enabled by software elements at a lower level; the main contribution of Component-Based Frameworks is the focus on component development. All of these approaches can be built on each other in a layered manner by adopting suitable design patterns. Open questions remain, however, such as which method to choose when developing new applications or upgrading existing ones to another approach.
AN APPROACH TO IMPROVEMENT THE USABILITY IN SOFTWARE PRODUCTS (ijseajournal)
One of the significant aspects of software quality is usability; it is one of the characteristics by which the success or failure of software applications is judged. Usability is also a major risk facing software applications, as its absence may create a gap between users and systems. This can result in system failure due to poor design, when the design is not based on the desires and requirements of the customer. To overcome these problems, this paper proposes an approach to improve the usability of software applications so that they meet the needs of the customer and interact with the user easily, in an efficient and effective manner. The proposed approach is based on the prototyping technique, due to its simplicity and because it does not require additional costs to elicit precise and complete requirements and designs.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results have implications for building quality models across project types.
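The aggregation step described above can be sketched with the standard Fisher z-transform pooling of correlation coefficients, weighting each data set by n - 3. The coefficients and sample sizes below are invented for illustration; the paper's tailored meta-analysis approach may differ in its details.

```python
# Hedged sketch of pooling correlation coefficients across data sets via
# Fisher's z-transform. Study values here are made up for illustration.
import math

studies = [  # (spearman_rho, sample_size) -- hypothetical values
    (0.45, 120),
    (0.30, 80),
    (0.60, 50),
]

def pooled_correlation(studies):
    """Weighted mean in Fisher z-space, transformed back to r.

    Each study is weighted by (n - 3), the inverse variance of its
    z-transformed correlation under the usual normal approximation.
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

print(round(pooled_correlation(studies), 3))
```

Strictly, the (n - 3) variance approximation is derived for Pearson correlations; applying it to Spearman coefficients is itself an approximation, which is one reason a tailored approach may be needed.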
Publishing for the students living in the iPad era: our view of the industry (Sebastien Dubuis)
Publishing for the iPad generation of students requires a new mindset. How can textbooks be enhanced to create a lively reading experience? How can cross-device compatibility be offered seamlessly?
Sales training: getting past the gatekeeper on a cold call (Mikhail Grafsky)
The second part of an audio seminar by business trainer Mikhail Grafsky. In this part, Mikhail shares some techniques for getting past the secretary during a cold call.
www.clientbridge.ru
www.grafsky.ru
The main purpose of the current deliverable D2.1 is to provide the foundations of an Evaluation Framework that can be applied to compare Open Web Data applications and rank them according to their achievements. D2.1 contains the information gained from Task 2.1 - Evaluation criteria and method review and Task 2.2 - Validation of the evaluation criteria and methods of WP2 (DoW, p. 8). In line with those tasks, we conducted an expert survey using the Group Concept Mapping method to identify relevant indicators and criteria for the Evaluation Framework. In a second step, we conducted a focused literature review to extend the outcomes of the expert survey with the latest indicators reported in the literature. Finally, we present the initial concept of the Evaluation Framework and its criteria and indicators.
This deliverable provides the theoretical foundations for the Evaluation Framework, which is further developed into a scoring sheet for the judges of the LinkedUp challenge in deliverable D2.2.1. The Evaluation Framework will be further developed and amended according to the experiences collected in the three LinkedUp data competitions during the LinkedUp challenge.
http://portal.ou.nl/documents/363049/ae41bc18-130b-45d2-94a6-e4b4e42726fe
http://creativecommons.org/licenses/by-nc-sa/3.0/
Drachsler, H., Greller, W., Stoyanov, S., Fetahu, B., Daga, E., Parodi, E., Mosca, M., Adamou, A., Herder, E. (2013). D2.1 Evaluation Criteria and Methods. LinkedUp project. Heerlen, The Netherlands.
LinkedTV Deliverable 5.7 - Validation of the LinkedTV Architecture (LinkedTV)
The LinkedTV architecture lays the foundation for the LinkedTV system. It consists of the integrating platform for the end-to-end functionality, the backend components and the supporting client components. Since the architecture of a software system has a fundamental impact on quality attributes, it is important to evaluate its design. The document at hand reports on the validation of the LinkedTV architecture.
LinkedTV Deliverable 3.8 - Design guideline document for concept-based presen... (LinkedTV)
This document presents guidelines on how to setup enriched video experiences.
We provide user-centric guidelines on the named entities that should be detected and selected to effectively enrich video news broadcasts. This is presented in the form of a user study.
We selected 5 news videos and manually extracted the candidate entities from various sources, such as the transcript, visual content and related articles. An expert was also asked to provide interesting entities for the videos. The resulting 99 candidate entities were presented to 50 participants via an online survey. The participants rated the level of interestingness of the entities and the usefulness of information from Wikipedia about these entities. Analysis of the results shows that users prefer entities of the types organization and person and have little interest in entities of the type location. The results also indicate that subtitles alone are not a sufficient source of interesting entities, and that the amount of interesting entities can be increased by combining subtitles with entities extracted from related articles or suggested by an expert. The expert suggestions proved to be more accurate than any other source of entities. Wikipedia seems to be a suitable source of additional information about the entities in the news, but should be complemented with additional sources.
We provide engineering guidelines on how to present, aggregate and process content for TV program companion applications. We describe the content processing pipeline that was developed in WP3 to feed content to the Linked News and Linked Culture demonstrators. This shows how content from the Web can be re-purposed to enrich videos by extracting the core display content and presenting it in a uniform way to the user.
Software Requirements Specification on Bengali Braille to Text Translator (Minhas Kamal)
Complete Software Requirements Specification (SRS) for the software project Bengali Braille to Text Translator. Chapters: Inception, Elicitation, Scenario-Based Model, Data Model, Class-Based Model, and Behavioral Model.
Created in the 4th year of the Bachelor of Science in Software Engineering (BSSE) course at the Institute of Information Technology, University of Dhaka (IIT, DU).
IMPORTANT label each projects title page as follow Project 1P.docx (sheronlewthwaite)
IMPORTANT: label each project's title page as follows
Project 1
Project 2
Project 3
Project 4
Project 5
Note: Project 1 has to be completed by July 30th, 2019;
Projects 2, 3, 4 and 5 can be done on or before August 13th, 2019.
PROJECT 1 (Due on 7/30/2019 )
Scenarios
As a newly assigned project manager for the Ohio Department of Human Services, you are excited about working with technology projects throughout the state. The Ohio Department of Human Services' (ODHS) Office of Network Support (ONS) is responsible for managing the network and software applications for over 15,000 state and county agency employees throughout Ohio's 88 counties. The office coordinates software upgrades and network modifications from an operations center located in the capital, Columbus, with assistance from local technology employees in each of the county seats.
The position is not without its challenges, however. The network infrastructure throughout the state ranges from high quality, high-bandwidth connections in major population centers to older, partially working connections in the poorer, more regional counties. Additionally, budget and resources are a constant issue, with a high turnover rate among existing employees and an emphasis on outsourcing the labor force to several vendors to accomplish the myriad of support and project tasks. Your initial assignment is to examine the viability and costs associated with upgrading the existing e-mail, with the objective of developing the implementation project for the organization. ODHS had adopted the e-mail software called Globalupgrades (a Worldviewupgrades product) as its e-mail standard in 1994 and executed minor upgrades since then. However, the latest version of Globalupgrades, Version 9.0, contains significant enhancements desired by the ODHS user base, and the existing Version 7.0’s support will be phased out in the next year by Worldviewupgrades. The Worldviewupgrades sales representatives have been offering discounts for a Version 9.0 license, but the costs are approximately 20% more than previous licenses. Additionally, several other e-mail product vendors are lobbying state officials for business, some of them offering significant incentives. Generally, these products are viewed as less robust than Globalupgrades, but there are some segments of the user community that are supportive of these other options.
You are reviewing the existing documentation on the current state of the e-mail system, including license agreements and the Worldviewupgrades Globalupgrades 9.0 preliminary proposal. You are also examining the staffing structure and developing ideas on how to accomplish this task. In two weeks, you will need to brief the ONS Director on your planned approach to completing this effort.
Assignment Requirements - Project 1
3-4 pages (APA FORMAT)
Includes a Title Page (APA format)
1 reference page
List all References in APA FORMAT
1. Develop a high-level project charter for the E-Mail Upgrade Project des ...
EFFICIENT AND RELIABLE PERFORMANCE OF A GOAL QUESTION METRICS APPROACH FOR RE... (ecijjournal)
Few literature surveys have addressed small-scale transactions; only a few such transactions are built on Enterprise Resource Planning, and to date no methodology or approach has been applied to small-scale transactions. Most implementations focus on large-scale transactions, which handle huge business volumes. This paper proposes an approach for re-engineering a small-scale transaction by applying the GQM approach. Although web technology is popular and reliable, this paper shows that re-engineering a small-scale transaction as a standalone application can be more effective and reliable than web technology.
In this webinar, Prof. Hendrik Drachsler will reflect on the process of applying learning analytics solutions within higher education settings, their implications, and the critical lessons learned in the Trusted Learning Analytics research program. The talk will focus on the experience of the edutec.science research collective, consisting of researchers from the Netherlands and Germany who contribute to the Trusted Learning Analytics (TLA) research program. The TLA program aims to provide actionable and supportive feedback to students and stands in the tradition of human-centered learning analytics concepts. Thus, the TLA program aims to contribute to unfolding the full potential of each learner. It therefore applies sensor technology to support psychomotor skills, as well as web technology to support meta-cognitive and collaborative learning skills, with highly informative feedback methods. Prof. Drachsler applies validated measurement instruments from the field of psychometrics and investigates to what extent learning analytics interventions can reproduce the findings of these instruments. During this webinar, Prof. Drachsler will discuss the lessons learned from implementing TLA systems. He will touch on TLA prerequisites such as ethics, privacy, and data protection, as well as highly informative feedback for psychomotor, collaborative, and meta-cognitive competencies, and the ongoing research towards a repository of methods, tools and skills that facilitate the uptake of TLA in Germany and the Netherlands.
Smart Speaker as Studying Assistant by Joao Pargana (Hendrik Drachsler)
The thesis by Joao Pargana pursued two main goals: first, a smart speaker application was created to support learners in informal learning processes through a question/answer application; second, the impact of the application was tested amongst various users by analyzing how adoption of and transition to newer learning procedures can occur.
This draft code of conduct is aimed at higher education institutions that want to use learning analytics to improve the quality of learning and teaching. The code can serve as a template for creating organisation-specific codes of conduct. At institutions intending to introduce learning analytics, it should be reviewed through consultation with all stakeholder groups and adapted to the goals and existing practices of the respective institution. The code was developed by the Innovationsforum Trusted Learning Analytics of the Hesse-wide project "Digital gestütztes Lehren und Lernen in Hessen", on the basis of an analysis of existing European codes of conduct and the legal basis applicable in Germany.
Rödling, S. (2019). Entwicklung einer Applikation zum assoziativen Medien Ler... (Hendrik Drachsler)
This bachelor thesis aims to measure the influence of vibration perceived at the wrist, in combination with the visual presentation of learning content, on learning success. Learning success is defined here by the learning speed and the extent of knowledge consolidation over the test series. For this purpose, an experimental study on associative learning was conducted, in which 33 participants used an app developed for this work. On average across all study results, better values were achieved for both learning speed and knowledge consolidation when the participants could experience the learning content both visually and haptically. However, the observed differences in learning success did not reach statistical significance. Whether the results change after the proposed modifications to the study design remains to be seen. The thesis is of particular interest to the education sector.
E. Leute: Learning the impact of Learning Analytics with an authentic dataset (Hendrik Drachsler)
Nowadays, data sets of user interactions and the corresponding demographic data are becoming more and more valuable for companies and for academic institutions such as universities when optimizing their key performance indicators. Whether the goal is to develop a model that predicts the optimal learning path for a student or to sell customers additional products, data sets to train these models are in high demand. Despite the importance of and need for big data sets, it has still not become apparent to every decision-maker how crucial such data sets are for the future success of their operations.
The objective of this thesis is to demonstrate the use of a data set gathered from the virtual learning environment of a distance learning university by answering a selection of questions in Learning Analytics. To this end, a real-world data set was analyzed and the selected questions were answered using state-of-the-art machine learning algorithms.
Romano, G. (2019) Dancing Trainer: A System For Humans To Learn Dancing Using...Hendrik Drachsler
Master's thesis by Romano, G. (2019). Dancing is the ability to feel the music and express it in rhythmic movements of the body. Learning how to dance can be challenging, because it requires proper coordination and an understanding of rhythm and beat. Dancing courses, online courses, and learning from free content are all ways to learn dancing, yet solutions based on human-computer interaction are rare or missing. The Dancing Trainer (DT) is proposed as a generic solution to fill this gap. Initially only Salsa is implemented, but more dancing styles can be added. The DT uses the Kinect to interact multimodally with the user. Moreover, this work shows that dancing steps can be defined as gestures with the Kinect v2 to build a dancing corpus. An experiment with 25 participants was conducted to determine the user experience and the strengths and weaknesses of the DT. The outcome shows that users liked the system and that basic dancing steps were learned.
In May 2018, the new General Data Protection Regulation (GDPR) will enter into force in the European Union. This new regulation is considered the most modern data protection law for the Big Data societies of tomorrow. The GDPR will bring major changes to data ownership and to the way data can be accessed, processed, stored, and analysed in the European Union. From May 2018 onwards, data subjects gain fundamental rights such as 'the right to access data' or 'the right to be forgotten'. This will force Big Data system designers to follow a privacy-by-design approach for their infrastructures and will fundamentally change the way data can be treated in the European Union.
The presentation provides an overview of the Trusted Learning Analytics Programme as recently initiated at the University of Frankfurt and the DIPF research institute in Germany. Educational data is under the special focus of the GDPR, as it is considered as sensitive as data from a nuclear plant. The presentation shows opportunities and challenges for using educational data for learning analytics purposes in light of the GDPR 2018.
Fighting level 3: From the LA framework to LA practice on the micro-levelHendrik Drachsler
This presentation explores shortcomings of learning analytics that hinder wide adoption in educational organisations. It is NOT about ethics and privacy; rather, it focuses on shortcomings of learning analytics for teachers and students in the classroom (micro-level). We investigated whether, and to what extent, learning analytics dashboards address educational concepts. We map opportunities and challenges for the use of Learning Analytics dashboards in course design, and present an evaluation instrument for the effects of Learning Analytics called EFLA. EFLA can be used to measure the effects of LA tools on the teacher and student side. It is a robust but lightweight (8 items) instrument for quickly investigating the level of adoption of learning analytics in a course (micro-level). The presentation concludes that Learning Analytics is still too much a computer science discipline and does not yet fulfil its often-claimed position in the middle space between educational and computer science research.
Presentation given at the PELARS Policy event, Brussels, 09.11.2016, a follow-up to the first LACE Policy event in April 2015. Special focus is on the exploitation and sustainability activities for LACE in the SIG LACE SoLAR.
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the ConsistentHendrik Drachsler
This paper presents the experiences of several Dutch projects in their application of the xAPI standard and different design patterns including the deployment of Learning Record Stores. In this paper we share insights and argue for the formation of an international Special Interest Group on interoperability issues to contribute to the Open Analytics Framework as envisioned by SoLAR and enacted by the Apereo Learning Analytics Initiative. Therefore, we provide an overview of the advantages and disadvantages of implementing the current xAPI standard by presenting projects that applied xAPI in very different ways followed by the lessons learned.
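For readers unfamiliar with the standard the paper builds on, a minimal xAPI statement consists of an actor, a verb and an object. The sketch below is illustrative only: the verb IRI is the standard ADL "completed" verb, while the actor and activity identifiers are made-up placeholders, not taken from the Dutch projects described in the paper:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# Actor and activity identifiers below are hypothetical examples.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/courses/xapi-101",
        "definition": {"name": {"en-US": "xAPI 101"}},
    },
}

# A Learning Record Store receives statements like this as JSON.
print(json.dumps(statement, indent=2))
```

Design patterns and "recipes" in the xAPI sense are largely agreements on which verbs and activity identifiers to use for which events, which is where the interoperability issues discussed in the paper arise.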
Recommendations for Open Online Education: An Algorithmic StudyHendrik Drachsler
Recommending courses to students in online platforms has been studied widely. Almost all studies target closed platforms that belong to a university or some other educational provider, which makes the resulting course recommenders situation-specific. Over the last years, demand has grown for recommender systems that suit open online platforms. Those platforms share some common characteristics, such as the lack of rich user profiles with content metadata; instead, they log user interactions within the platform that can be used for analysis and personalization. In this paper, we investigate how user interactions and activities tracked within open online learning platforms can be used to provide recommendations. We present a study in which we investigate the application of several state-of-the-art recommender algorithms, including a graph-based recommender approach. We use data from the OpenU open online learning platform in use at the Open University of the Netherlands. The results show that user-based and memory-based methods perform better than model-based and factorization methods. In particular, the graph-based recommender system outperforms the classical approaches on prediction accuracy of recommendations in terms of recall. We conclude that, if the algorithms are chosen wisely, recommenders can contribute to a better experience for learners in open online courses.
Soude Fazeli, Enayat Rajabi, Leonardo Lezcano, Hendrik Drachsler, Peter Sloep
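The recall metric on which the abstract compares the algorithms can be illustrated in a few lines. This is a generic top-k recall sketch, not the authors' evaluation pipeline:

```python
def recall_at_k(recommended, relevant, k=10):
    """Fraction of the items a user actually consumed (relevant)
    that appear among the top-k recommended items."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)

# Two of the user's three relevant courses appear in the top-3 list.
print(recall_at_k(["c1", "c2", "c3", "c4"], ["c2", "c3", "c9"], k=3))  # ~0.67
```

A recommender that places more of a user's withheld (test-set) items near the top of its ranked list scores higher, which is why recall is a natural headline metric for ranked recommendation.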
Privacy and Analytics – it’s a DELICATE Issue. A Checklist for Trusted Learni...Hendrik Drachsler
The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases even been reversed, following concerns by governments, stakeholders and civil rights groups about privacy and ethics applied to the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution's learning support by implementing data and analytics with a view to improving student success. In this presentation, we try to get to the heart of the matter by analysing the most common concerns and the propositions made by the LA community to address them. We conclude the paper with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics.
DELICATE checklist - to establish trusted Learning AnalyticsHendrik Drachsler
The DELICATE checklist contains eight action points that should be considered by managers and decision makers planning the implementation of Learning Analytics / Educational Data Mining solutions either for their own institution or with an external provider.
The eight points are:
1. Determination: Decide on the purpose of learning analytics for your institution. What aspects of learning or learner services are you trying to improve?
2. Explain: Define the scope of data collection and usage. Who has a need to have access to the data or the results? Who manages the datasets? On what criteria?
3. Legitimate: Explain how you operate within the legal frameworks, refer to the essential legislation. Is the data collection excessive, random, or fit for purpose?
4. Involve: Talk to stakeholders and give assurances about the data distribution and use. Give as much control as possible to data subjects (permission architecture), and provide access to their data for the individuals.
5. Consent: Seek consent through clear consent questions. Provide an opt-out option.
6. Anonymise: De-identify individuals as much as possible, aggregate data into meta-models.
7. Technical aspects: Monitor who has access to the data, especially in areas with high staff turnover. Establish data storage that meets high security standards.
8. External partners: Make sure externals provide highest data security standards. Ensure data is only used for intended purposes and not passed on to third parties.
We hope that the DELICATE checklist will be a helpful instrument for any educational institution to demystify the ethics and privacy discussions around Learning Analytics. As we have tried to show in this article, there are ways to design and provide privacy-conformant Learning Analytics that can benefit all stakeholders and keep control with the users themselves, within the established relationship of trust between them and the institution.
Updated Flyer of the LACE project with latest tangible outcomes and collaboration possibilities.
LACE connects players in the fields of Learning Analytics (LA) and Educational Data Mining (EDM) in order to support the development of a European community and share emerging best practices.
Objectives
-------------
• Promote knowledge creation and exchange
• Increase the evidence base about Learning Analytics
• Contribute to the definition of future directions
• Build consensus on pressing topics like data interoperability, data sharing, ethics and privacy, and Learning Analytics supported instructional design
Activities
• Organise events to connect organisations that are conducting LA/EDM research
• Create and curate a knowledge base to capture evidence for the effectiveness of Learning Analytics
• Produce reviews to inform the LACE community about latest developments in the field
Presentation given at Serious Request 2015, #SR15, Heerlen.
Within the Open University we started a 12-hour marathon lecture to collect money for the charity campaign of radio station 3FM. The money collected will go to the Red Cross to support young people in conflict areas.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need in order to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop in which the participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
2. Page 2 of 21
LinkedUp Support Action – 317620
LinkedUp Consortium
This document is a part of the LinkedUp Support Action funded by the ICT Programme of the
Commission of the European Communities by the grant number 317620. The following partners are
involved in the project:
Leibniz Universität Hannover (LUH)
Forschungszentrum L3S
Appelstrasse 9a
30169 Hannover
Germany
Contact person: Stefan Dietze
E-mail address: dietze@L3S.de
The Open University
Walton Hall, MK7 6AA
Milton Keynes
United Kingdom
Contact person: Mathieu d'Aquin
E-mail address: m.daquin@open.ac.uk
Open Knowledge Foundation Limited LBG
Panton Street 37,
CB2 1HL Cambridge
United Kingdom
Contact person: Sander van der Waal
E-mail address: sander.vanderwaal@okfn.org
ELSEVIER BV
Radarweg 29,
1043NX AMSTERDAM
The Netherlands
Contact person: Michael Lauruhn
E-mail address: M.Lauruhn@elsevier.com
Open Universiteit Nederland
Valkenburgerweg 177,
6419 AT Heerlen
The Netherlands
Contact person: Hendrik Drachsler
E-mail address: Hendrik.Drachsler@ou.nl
EXACT Learning Solutions SPA
Viale Gramsci 19
50121 Firenze
Italy
Contact person: Elisabetta Parodi
E-mail address: e.parodi@exactls.com
Work package participants
The following partners have taken an active part in the work leading to the elaboration of this
document, even if they might not have directly contributed to the writing of this document or its
parts:
-‐ LUH
-‐ OU
-‐ EXT
-‐ ELSV
Change Log

Version | Date       | Amended by        | Changes
0.1     | 15.03.2013 | Hendrik Drachsler | Initial structure
0.2     | 25.03.2013 | Hendrik Drachsler | Enrichment
0.3     | 26.03.2013 | Wolfgang Greller  | Minor corrections
0.3     | 01.04.2013 | Hendrik Drachsler | Minor corrections
0.4     | 23.04.2013 | Slavi Stoyanov    | Reviewers' feedback incorporated
0.5     | 25.04.2013 | Hendrik Drachsler | Minor corrections
Table of Contents

1. Introduction ............................................................. 5
2. Overview of the first version of the Evaluation Framework ............... 5
3. Evaluation of the LinkedUp scoring sheet ................................. 7
5. Conclusions ............................................................. 12
References ................................................................. 13
Appendix A – The LinkedUp scoring sheet .................................... 14
Figure 1: Comprehensive version of the LinkedUp Evaluation Framework based on deliverable D2.1 of the LinkedUp project.
Educational Innovation
‘Educational Innovation’ is based on a list of indicators that innovative educational tools should support, derived from an expert survey and a recent report of the Institute for Prospective Technological Studies (IPTS), an EC research institute. In the first version of the EF, judges of the data challenge will be able to check whether data applications address the set of indicators composing this criterion. In addition, we will ask the judges to provide a short statement on how innovative the application is, together with a rating on a scale of 1 to 5 stars.
Usability
‘Usability’ is a very well-known and well-elaborated concept with clear evaluation indicators. There is also a wide range of standardised tools that can be applied to measure this criterion. The two most applicable methods for the evaluation of the LinkedUp challenge are the Open Source Desirability Kit (Storm, 2012) and the SUS method (Tullis and Stetson, 2004). SUS is often used for comparing the usability of different software; it is quickly administered and yields a single benchmarking score on a scale of 0–100 that provides an objective indication of the usability of a tool. This makes it highly relevant for the LinkedUp challenge, especially in the later stages of the data competition, where more advanced systems are expected to be entered.
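Since SUS reduces ten Likert-scale answers to a single 0–100 score, the standard scoring arithmetic is worth spelling out: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is scaled by 2.5. The sketch below is a generic illustration of that rule, not part of the LinkedUp instrument:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from the ten
    Likert responses (each 1-5, in questionnaire order)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    # Index i is 0-based, so even i corresponds to odd-numbered items.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 sum to 0-100

# A fairly positive questionnaire:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5
```

The resulting single number is what makes SUS convenient for benchmarking tools against each other, as the paragraph above notes.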
The Desirability Kit is relatively easy for the judges to apply. However, it provides more of a general description of user satisfaction with the tool than a comparative score. Nevertheless, this