Presentation for the 1st International Workshop on Multimedia Technologies and Distant Learning at ACM Multimedia 2009
Ralf Klamma, Marc Spaniol, Matthias Jarke
Beijing, China, October 23, 2009
Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed... — Christoph Lange
The document discusses the Distributed Ontology Language (DOL), a proposed standard being developed by ISO for expressing heterogeneous ontologies and links between ontologies. DOL aims to achieve semantic integration and interoperability across knowledge representations. It will have a formal semantics and support multiple serialization formats. The standard is being developed to facilitate communication and reduce complexity for applications involving multiple ontologies.
An ontology driven module for accessing chronic pathology literature – CHRONIO... — Riccardo Albertoni
An ontology driven module was developed for accessing chronic pathology literature as part of the CHRONIOUS project. It uses medical terminology like MeSH and disease-specific ontologies for COPD and CKD mapped to MeSH. Documents are processed using NLP and annotated with concepts from the ontologies. Users can search by concept or text to retrieve documents. The system was shown to retrieve relevant documents compared to PubMed and supports ontology evolution and multiple languages. Future work includes notifications for ontology changes and incremental re-indexing of documents.
A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categor... — Hiroshi Ono
This document presents a probabilistic analysis of the Rocchio algorithm, a popular text categorization method, and compares it to a naive Bayes classifier. The analysis provides theoretical insight into Rocchio's heuristics, especially its TFIDF word weighting scheme. It suggests improvements that lead to a probabilistic variant of Rocchio called PrTFIDF. An empirical comparison on six text categorization tasks shows that PrTFIDF and the naive Bayes classifier perform better than the heuristic Rocchio classifier in terms of classification accuracy.
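The heuristic Rocchio classifier the paper analyzes can be sketched in a few lines: represent each document as a TFIDF vector, average the vectors of each class into a prototype, and assign a document to the class whose prototype is closest by cosine similarity. The following is a minimal illustrative sketch over a toy corpus (no smoothing or negative-example weighting), not the paper's exact formulation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TFIDF vectors for a list of token lists (tf = raw count, idf = log(N/df))."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(doc).items()} for doc in docs]

def rocchio_centroids(vectors, labels):
    """Average the TFIDF vectors of each class into a Rocchio prototype."""
    centroids = {}
    for vec, lab in zip(vectors, labels):
        c = centroids.setdefault(lab, Counter())
        for w, x in vec.items():
            c[w] += x
    for lab in centroids:
        k = labels.count(lab)
        for w in centroids[lab]:
            centroids[lab][w] /= k
    return centroids

def cosine(a, b):
    dot = sum(x * b.get(w, 0.0) for w, x in a.items())
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(vec, centroids):
    """Assign the class whose prototype is most similar to the document."""
    return max(centroids, key=lambda lab: cosine(vec, centroids[lab]))
```

The probabilistic variant PrTFIDF replaces this geometric decision rule with one derived from a probabilistic model; the sketch above shows only the heuristic baseline it improves on.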
The tutorial was presented at CAiSE 2010. It discusses the state of the art in research addressing the quality of data at the conceptual level (conceptual schemas) and of ontologies.
The document discusses personalization in information retrieval, extraction, and access. It describes how current search engines can be improved through deeper analysis of queries and content using natural language processing, information retrieval, and information extraction techniques. Personalization approaches are proposed, including using a user's search history and implicit feedback to learn profiles and improve future search results through re-ranking. Applications discussed include personalized search engines and summarization for mobile devices.
Pal gov.tutorial2.session1: XML basics and namespaces — Mustafa Jarrar
The document discusses XML (Extensible Markup Language) basics and namespaces. It provides an overview of XML, describing it as a protocol for containing and managing information by allowing users to create their own markup languages. The document also discusses the need for namespaces to avoid conflicts between element names and introduces the syntax for using namespaces, which involves associating namespaces with prefixes.
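The prefix mechanism described above can be demonstrated with Python's standard-library XML parser; the namespace URIs below are illustrative placeholders. Two vocabularies both define a `title` element, and binding each to its own namespace URI keeps them apart:

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define a 'title' element; the prefixes bk: and mag:
# are bound to distinct namespace URIs, so the names no longer collide.
doc = """<catalog xmlns:bk="http://example.org/book"
                  xmlns:mag="http://example.org/magazine">
  <bk:title>Semantic Web Primer</bk:title>
  <mag:title>Linked Data Monthly</mag:title>
</catalog>"""

root = ET.fromstring(doc)
# ElementTree expands each prefix to {namespace-uri}local-name internally,
# so lookups use the URI, not the (arbitrary) prefix.
book_title = root.find("{http://example.org/book}title").text
mag_title = root.find("{http://example.org/magazine}title").text
```

Note that the prefix itself carries no meaning; only the URI it is bound to identifies the vocabulary, which is why the lookup above uses the full URI.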
Data integration for Clinical Decision Support based on openEHR Archetypes an... — Arturo González Ferrer
This document discusses standards for integrating clinical data from different sources to support clinical decision support systems. It evaluates several standards, including HL7 RIM, CDA, vMR, and openEHR, for representing different types of clinical data and linking data to decision rules. Experiments show openEHR archetypes mapped to the HL7 vMR model provide good support for integrating data from EMRs and PHRs. The document proposes using openEHR archetypes that conform to the HL7 vMR model to address both front-end and back-end integration needs for clinical decision support.
Perspectives of Turning Prague Dependency Treebank into a Knowledge Base — Václav Novák
The document discusses transforming the Prague Dependency Treebank (PDT) into the MultiNet knowledge representation format. It describes MultiNet and PDT, and identifies missing information needed for the transformation. Key issues include mapping dependency structures and functors to cognitive roles, mapping nodes to concepts, and representing constructs like tense. Additional requirements like ontology are also discussed. The conclusion is that while challenging, MultiNet is suitable for inferences and PDT provides a starting point.
This document discusses object-oriented programming in Python. It covers the basics of Python classes, including how classes are defined and behave as objects. It describes the differences between old-style and new-style classes, with new-style classes being the preferred approach. New-style classes allow subclasses of built-in types, properties, static/class methods, cooperative inheritance, and metaclass programming. The document then delves deeper into specific aspects of classes like descriptors, inheritance, and instances.
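A small sketch of the new-style class features the summary lists — inheriting from `object`, properties, class methods, and subclassing a built-in type. The class and method names here are illustrative, not from the document (and note that in Python 3 every class is new-style; the old/new distinction matters only in Python 2):

```python
class Temperature(object):  # inheriting from object makes this new-style in Python 2
    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def fahrenheit(self):
        """A computed attribute; properties require new-style classes in Python 2."""
        return self._celsius * 9.0 / 5.0 + 32

    @classmethod
    def from_fahrenheit(cls, f):
        """Alternative constructor: class methods are another new-style feature."""
        return cls((f - 32) * 5.0 / 9.0)

class TaggedList(list):
    """Subclassing a built-in type, which only new-style classes allow."""
    def append(self, item):
        list.append(self, ("item", item))  # cooperate with the built-in method
```

Properties and class methods are both implemented via descriptors, the mechanism the document examines in its deeper section on classes.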
Pal gov.tutorial2.session12 2: Architectural solutions for the integration issues — Mustafa Jarrar
This document discusses two main architectural solutions for data integration issues: application-driven integration and data-driven integration. Application-driven integration uses middleware like web services or publish/subscribe architectures. Data-driven integration uses techniques like data consolidation, data warehousing, or virtual data integration to reconcile data schemas and queries. The document provides examples of architectures for each approach.
Semantically-aware Networks and Services for Training and Knowledge Managemen... — Gilbert Paquette
This document discusses semantically-aware networks and services for training and knowledge management. It describes software developed at CICE/LICEF for building ontologies and semantically referencing resources to enable semantic search and personalized recommendations. The TELOS system uses competency descriptors and comparison methods to power rules-based recommender agents that are integrated into learning scenarios to provide adaptive assistance to users. Future work is aimed at experimental validation, improving group recommendations, automation, and integrating other recommendation methods.
The document discusses the development of OpenWN-PT, a Brazilian Portuguese Wordnet. Key points:
- OpenWN-PT is being created as part of a joint project between CPDOC and EMAp to apply formal logical tools to Portuguese text.
- It is based on the Universal Wordnet (UWN) which projects WordNet concepts into over 200 languages using statistical methods. The UWN provides an initial automated version of a Portuguese Wordnet.
- The creators are working to improve the initial UWN-based Portuguese Wordnet by combining it with data from Princeton WordNet, UWN, MENTA, and EuroWordNet to generate a new OpenWN-PT file.
This document provides an outline for a tutorial on data integration and open information systems. The tutorial consists of 16 sessions over a total of approximately 40 hours. It will cover topics such as XML, RDF, OWL, data integration, linked data, and the semantic web. The intended learning outcomes include understanding data models, semantic web languages, integrating and querying heterogeneous data using techniques such as SPARQL and RDF. Students will gain practical skills in tools like Oracle Semantic Technology and Virtuoso for storing and querying RDF data. Attendance is mandatory for all sessions.
This document provides an overview of the Demystifying OWL tutorial. The tutorial will explain description logics and the OWL family of ontology languages. It will cover the makeup of description logics, including the TBox (terminology) and ABox (assertions). The tutorial will also discuss OWL 1 and OWL 2, the open versus closed world assumption, the unique name assumption, and available tools and resources. The goal is to help attendees fully understand the application of semantic web and ontology technologies in model-driven software development.
Identification of Competences in Self-regulated Learning Processes — Ralf Klamma
The document discusses identifying competencies in self-regulated learning processes. It describes analyzing forums for English language learners to identify goals, expressions, and competencies demonstrated by users. Patterns were found between competencies, learning phases, and social interactions among forum "cliques". The document concludes by proposing a widget-based competence dashboard to provide visibility into a learner's competencies.
The document summarizes information about the Doctoral Consortium event at the Fifth European Conference on Technology Enhanced Learning in Barcelona, Spain from September 28 to October 1, 2010. The objective of the Doctoral Consortium was to provide an opportunity for later stage PhD students to present, discuss, and receive feedback on their research. Application requirements included outlining the research question, problems in the field, current knowledge and solutions, preliminary ideas, proposed approach, results so far, and methodology. Activities included peer reviewing other PhD work, receiving feedback from professors on written work, and presenting work with feedback from discussants.
Online video marketing: strategy, trends, tips, measurement, statistics, insights, and more from YouTube "star" and career marketer, Kevin "Nalts" Nalty
The document summarizes a workshop on business applications of social network analysis that will take place on December 12, 2011 in Bangalore, India. The workshop will include paper presentations, two keynote speeches, and a banquet talk and dinner. It will be collocated with the International Multi-Conference on Society, Cybernetics and Informatics. The keynote speakers will discuss social network analysis and its business applications. Topics to be covered include social networks, computational social science, and using insights from social networks to inform business strategies.
A knowledge-based system (KBS) is a type of artificial intelligence program that uses a knowledge base to solve problems within a specialized domain that normally requires human expertise. A KBS consists of a knowledge base containing facts, rules, and heuristics about its domain, an inference engine that applies reasoning to the knowledge base, and a user interface. The knowledge base is developed by a knowledge engineer working with a domain expert to capture their expertise. A KBS can perform tasks like classification, diagnosis, and planning by drawing on the captured knowledge through its inference engine.
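The inference-engine idea can be illustrated with a few lines of forward chaining over a toy rule base; the rules and facts below are hypothetical examples, not from the document:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions all hold until no new facts appear.

    facts: iterable of strings; rules: list of (frozenset(conditions), conclusion).
    Returns the closure of the fact set under the rules.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule fires and asserts a new fact
                changed = True
    return facts

# A toy diagnostic knowledge base (hypothetical domain content):
rules = [
    (frozenset({"fever", "cough"}), "flu-suspected"),
    (frozenset({"flu-suspected", "short-of-breath"}), "refer-to-doctor"),
]
```

Real inference engines add conflict resolution, backward chaining, and explanation facilities on top of this basic fire-until-fixpoint loop.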
Development, distribution and use of open source software comprise a market of data (source code, bug reports, documentation, number of downloads, etc.) from projects, developers and users. This large amount of data makes it difficult for people involved to make sense of implicit links between software projects, e.g., dependencies, patterns, licenses. This context raises the question of what techniques and mechanisms can be used to help users and developers to link related pieces of information across software projects. In this paper, we propose a framework for a marketplace enhanced using linked open data (LOD) technology for linking software artifacts within projects as well as across software projects. The marketplace provides the infrastructure for collecting and aggregating software engineering data as well as developing services for mining, statistics, analytics and visualization of software data. Based on cross-linking software artifacts and projects, the marketplace enables developers and users to understand the individual value of components, their relationship to bigger software systems. Improved understanding creates new business opportunities for software companies: users will be better able to analyze and compare projects, developers can increase the visibility of their products, hosts may offer plug-ins and services over the data to paying customers.
Multimedia Processing on Multimedia Semantics and Multimedia Context — Ralf Klamma
The 10th Workshop on Multimedia Metadata (SeMuDaTe'09)
Yiwei Cao, Ralf Klamma, and Dejan Kovachev
Informatik 5 (Information Systems), RWTH Aachen University
December 2, 2009
Graz, Austria
Constantin Orasan (UoW): EXPERT Introduction — RIILP
The document introduces the EXPERT ITN project, which aims to train young researchers on improving data-driven machine translation through empirical approaches. The project will support researchers during their training and research, with the goal of producing future leaders in the field. It describes the objectives to improve existing corpus-based translation tools by considering user needs, collecting data, incorporating linguistic processing, and developing hybrid approaches. The project consists of 12 individual research projects across 6 work packages and is led by an academic consortium with involvement from private sector partners.
The document discusses model-driven mashups for personal learning environments. It proposes using service mapping descriptions (SMD) to describe RESTful services and automatically generate mashups by combining multiple services. This allows for scalable, client-side mashups across domains via JSONP. SMD provides a lightweight JSON format for annotating services with inputs, outputs, and invocation details to enable automatic data integration and mediation in mashups.
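A service mapping description is structured metadata about a service's endpoint and parameters that a client can interpret generically. The sketch below uses a simplified, hypothetical SMD-like document and only assembles the request URL; real SMD has more fields, and the JSONP invocation itself happens in the browser:

```python
import json
from urllib.parse import urlencode

# A minimal SMD-style description; the service URL and fields are hypothetical.
smd = json.loads("""{
  "transport": "JSONP",
  "target": "https://api.example.org/search",
  "parameters": [{"name": "q", "type": "string"},
                 {"name": "limit", "type": "integer", "default": 10}]
}""")

def build_request_url(smd, **kwargs):
    """Assemble a request URL from the description and the call arguments,
    filling in declared defaults for parameters the caller omitted."""
    params = {}
    for p in smd["parameters"]:
        if p["name"] in kwargs:
            params[p["name"]] = kwargs[p["name"]]
        elif "default" in p:
            params[p["name"]] = p["default"]
    return smd["target"] + "?" + urlencode(sorted(params.items()))
```

Because the description is plain data, a mashup engine can combine several such services without hand-written glue code for each one, which is the point the document makes about automatic data integration and mediation.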
Virtual Campfire/iNMV Storytelling on the iPhone — Yiwei Cao
This document summarizes a workshop on future mobile applications. It discusses the UMIC research cluster, challenges for mobile multimedia management, the Virtual Campfire architecture for mobile multimedia management, and the Virtual Campfire concept. It also summarizes the iNMV application for storytelling on the iPhone and the agenda for the workshop, including presentations on iNMV features, the development environment, implementation experiences, and installation instructions for workshop participants.
A Methodological Framework for Ontology and Multilingual Termontological Data... — Christophe Debruyne
A Methodological Framework for Ontology and Multilingual Termontological Database Co-evolution
C. Debruyne, C. Vasquez, K. Kerremans, and A.D. Burgos
LNCS 7567, p. 220 ff.
Ontologies and Multilingual Termontology Bases (MTB) are two knowledge artifacts with different characteristics and different purposes. Ontologies are used to formally capture a shared view of the world to solve particular interoperability and reasoning tasks. MTBs are general, contain fewer types of relations, and their purpose is to relate term labels within and across different languages to categories. For regions in which the multilingual aspect is vital, not only does one need an ontology for interoperability, but the concepts in that ontology need to be comprehensible for everyone whose native tongue is one of the principal languages of that region. Multilinguality also provides a powerful mechanism to perform ontology mapping, content annotation, multilingual querying, etc. We intend to meet these challenges by linking both methods for constructing ontologies and MTBs, creating a virtuous cycle. In this paper, we present our method and tool for ontology and MTB co-evolution.
A Media-Theoretical Approach to Technology Enhanced Learning in Non-Technical ... — Ralf Klamma
1. The document discusses a media theoretical approach to technology enhanced learning (TEL) in non-technical disciplines like cultural sciences.
2. It proposes a system called the Lightweight Application Server (LAS) that provides services like multimedia management, access control, and metadata standards to support media-centric TEL in communities of practice.
3. The services have been applied in a project called Virtual Campfire to allow collaborative creation, localization, and contextualized presentation of multimedia artifacts for non-linear knowledge sharing.
Imran Sarwar Bajwa (2010), "Context Based Meaning Extraction by Means of Markov Logic", International Journal of Computer Theory and Engineering (IJCTE) 2(1), pp. 35–38, February 2010.
A Real-time Collaboration-enabled Mobile Augmented Reality System with Semant...Dejan Kovachev
This document presents XMMC, a real-time collaboration-enabled mobile augmented reality system with semantic multimedia. XMMC allows experts to collaboratively document cultural heritage sites using multimedia annotations and metadata. It uses an XMPP-based architecture to enable real-time sharing of multimedia and annotations between mobile clients. Concurrent editing of XML metadata is supported using an adaptation of the CEFX+ algorithm. An XMPP-extended augmented reality browser integrates multimedia annotations and metadata into a live video stream. Evaluation shows XMMC supports the collaborative documentation workflow while increasing cultural heritage awareness.
Global knowledge management_pawlowski_2012Jan Pawlowski
The extensive slideset is used for a 5ECTS course on global knowledge management. It covers theoretical aspects as well as practical issues. It is accompanied by a case study on global knowledge management as a practical application of the theoretical concepts. For further information, please contact me.The slides can be used for non-commercial purposes but please inform me how you used them!
The document discusses knowledge-based e-learning environments. It describes how ontologies represent the conceptual knowledge in a domain and how student models track individual learners' knowledge. Personalized texts and intelligent tutoring are generated based on these models. The SINTEC project developed collaborative e-learning tools using semantic web technologies, ontologies, and student modeling to enable intelligent search and adaptive content.
- The document describes a PhD thesis defense about using rewriting logic to define the semantics of concurrent programming languages.
- The thesis proposes K as a framework for programming language definitions in rewriting logic, which aims to be more expressive, modular, and concurrent than existing approaches.
- It demonstrates K and its execution in Maude by defining the semantics of a simple concurrent language called KernelC.
The document provides an overview of the course "Statistical Methods in Computational Linguistics" which covers topics such as basic probability theory, n-gram language modeling, information theory, machine learning techniques for part-of-speech tagging, and statistical machine translation. It discusses the history and reasons for the rise of empirical methods in natural language processing, including the availability of large corpora and computing resources. The course will use Python and its NLTK library for programming exercises and possibly WEKA for small machine learning experiments.
Modeling of Speech Synthesis of Standard Arabic Using an Expert Systemcsandit
This document describes an expert system for speech synthesis of Standard Arabic text. It involves two main stages: 1) creation of a sound database and 2) text-to-speech transformation. The transformation process involves phonetic orthographic transcription of the text and then generating voice signals corresponding to the transcribed phonetic sequence. The expert system uses a knowledge base containing sound data and rewriting rules. It transcribes text using graphemes as basic units and then concatenates sound units from the database to synthesize speech. Tests achieved a 96% success rate in pronouncing sentences correctly. Future work aims to improve prosody and develop fully automatic signal segmentation.
This paper presents an audio personalization framework for mobile devices. The multimedia
models MPEG-21 and MPEG-7 are used to describe metadata information. The metadata which support personalization are stored into each device. The Web Ontology Language (OWL) language is used to produce and manipulate the relative ontological descriptions. The process is distributed according to the MapReduce framework and implemented over the Android platform. It determines a hierarchical system structure consisted of Master and Worker devices. The Master retrieves a list of audio tracks matching specific criteria using SPARQL queries.
Live to e-Learning, a lecture capture and delivery service based on MediaMosaMediaMosa
L2L (Live to e-Learning) a lecture capture and delivery service based on MediaMosa. Presentation by Matteo Bertazzo from CINECA InterUniversity Consortium at the MediaMosa Community day, November 25, 2010
This document presents a distributed framework for performing natural language processing (NLP) on large collections of journal articles and integrating the results with existing structured knowledge bases. The framework uses a scaled NLP pipeline to extract structured annotations from unstructured text. It provides massively parallel access to these structured annotations and integrates them with ontologies and databases in a knowledge base. This allows applications to leverage both the unstructured text and existing structured knowledge for tasks like visualization, natural language understanding, and validation of other methods.
The document discusses a cloud multimedia platform and its applications. It begins with an agenda that covers cloud computing concepts, multimedia in the cloud, case studies, and a summary. Case studies include multimedia processing and metadata, social network analysis in the cloud, and mobile multimedia elastic cloud applications. The summary states that cloud computing provides on-demand scalability, drives new data processing systems, allows fast development of scalable multimedia services, and has benefits for multimedia systems by offloading heavy tasks to cloud services. It asks what types of tasks are reasonable to implement in the cloud.
Use Cases for MXF Metadata and Simplified System Interactiondietervr
Presentation given by Limecraft at the 2011 ECM-EDM Metadata Hands-on Workshop organised by the EBU.
We talked about simplified ways of obtaining and manipulating Material eXchange Format (MXF) metadata using existing toolkits and standards.
This document discusses personalized recommender systems for resource-based learning. It begins with an overview of folksonomy systems and models, then describes the CROKODIL application scenario which extends the folksonomy model. It reviews related work on ranking algorithms in folksonomies and recommender systems in e-learning. The research topic aims to exploit semantic information in folksonomies to rank learning resources using graph-based recommender techniques. The current progress includes a conceptual architecture and approaches using activity hierarchies and semantic tag types to generate recommendations. Future work involves analyzing ranking algorithms, implementing concepts, and evaluating the approaches.
Knowledge Multimedia Processes in Technology Enhanced Learning
1. Knowledge Multimedia Processes in Technology Enhanced Learning
Ralf Klamma¹, Marc Spaniol², Matthias Jarke¹
¹RWTH Aachen University, Germany & ²Max Planck Institute for Computer Science
ACM Multimedia Workshop on Multimedia Technologies for Distant Learning (MTDL 2009), Beijing, China, October 23, 2009
Lehrstuhl Informatik 5 (Informationssysteme), Prof. Dr. M. Jarke
I5-RK-0909-1
2. Agenda
Theoretical background
– Cross-media theory of transcription
– SECI model by Nonaka
Knowledge Multimedia Processes
– Transcriptions
– Practiced/Formalized Localizations
– Addressing
Evaluations
Conclusions & Outlook
3. Cross-Media Theory of Transcription
[Diagram: addressee-based transcription: pre-texts are selected, transcribed into a transcript, and integrated into the cultural archive]
Strategies of transcriptivity
– Collections of learning materials are re-structured by new media
– Design is specific to media and learning communities by default
Strategies of addressing
– Social software promotes the globalization of address spaces
– Personalization is mission-critical for learning communities
Strategies of localization
– Re-organization of local practices is stimulated by new media
⇒ Need to model learning practice explicitly
Jäger, Stanitzek: Transkribieren - Medien/Lektüre 2002
4. Multimedia Knowledge Management
for Technology Enhanced Learning
Synthesis: Transcriptivity / SECI Model / Communities of Practice
Goal: media-theoretic focus on technology enhanced learning
[Diagram: SECI cycle around "Transcript & Context"]
– Externalization: Pre-Texts → Transcription → Transcript
– Socialization: practiced Localization
– Combination: formalized Localization
– Internalization: Addressing
EC-TEL 2007 [Spaniol et al.]
5. Transcription:
an IS view
[Diagram: Pre-Texts → inter- & intra-media transcription → Transcript]
Examples and requirements on IS:
– Creation and maintenance of multimedia metadata
– Representation of multimedia relationships (hypermedia)
– Extension of the archive for multimedia content
⇒ Implications for the management of multimedia:
1. Descriptors for multimedia
2. Multimedia graph representation
3. Extensibility of schemata and descriptors
6. Transcription
Media: M ∈ {M_Text, M_Image, M_Audio, M_Video}ⁿ ∪ {P}, n ∈ ℕ₀
Media in the IS: M_P
Transcripts: t_p := (τ(p), µ(p), ι(p))
Transcription: τ : P → M_P
Meta-transcription (manual): µ : P → M_Text ∪ ∅
Meta-transcription (in the IS): ι : P → M_Text
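The transcript triple t_p := (τ(p), µ(p), ι(p)) above can be sketched as a small data structure. This is a minimal illustration, not the original system's implementation; all class and field names (Medium, Transcript, transcribe) are assumptions chosen for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Medium:
    kind: str          # "Text", "Image", "Audio", or "Video"
    content: str       # payload or reference to the stored artifact

@dataclass
class Transcript:
    tau: Medium        # τ(p): the transcribed medium in the IS
    mu: Optional[str]  # µ(p): manual textual meta-transcription (may be absent, i.e. ∅)
    iota: str          # ι(p): textual meta-transcription generated in the IS

def transcribe(pretext: Medium, manual_note: Optional[str]) -> Transcript:
    """Turn a pre-text p into a transcript t_p, attaching both the manual
    and the system-generated textual meta-transcriptions."""
    iota = f"{pretext.kind} artifact, {len(pretext.content)} chars"  # toy IS metadata
    return Transcript(tau=pretext, mu=manual_note, iota=iota)

t = transcribe(Medium("Image", "buddha_statue.jpg"), "Bamiyan valley, west niche")
```

Note how µ is optional while ι is always produced, mirroring the two codomains M_Text ∪ ∅ and M_Text in the formalism.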
7. Formalized Localization:
an IS view
[Diagram: Pre-Texts → inter- & intra-media transcription → Transcript]
Examples and requirements on IS:
– Individual categorization of media
– Creation and maintenance of reference collections
– Fine-granular management of access rights
⇒ Implications for the management of metadata:
1. Schemata for classification and vocabularies
2. Variation sets of multimedia
3. Digital rights management
8. Formalized Localization:
Keyword Index / Category System
Keyword index: S := {s}
Access relation: λ_i : S → P(M)
Category index: K := {k}, k := (id, s)
(Tree-)order: o := (k, k', κ(k_id, k'_id))
Access relation: λ_k : ID → P(M)
[Diagram: keyword set S and a category tree rooted at "root", both linked to the media set M via access relations λ]
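The tree-ordered category index and its access relation λ_k : ID → P(M) can be sketched as follows. This is an illustrative sketch only; the class name, method names, and the recursive notion of "accessible media" are assumptions, not the original system's API.

```python
from collections import defaultdict

class CategoryIndex:
    """Tree-ordered categories k = (id, keyword); λ_k maps a category id
    to a set of media, i.e. an element of the power set P(M)."""

    def __init__(self):
        self.parent = {}               # child id -> parent id (tree order κ)
        self.keyword = {"root": None}  # category id -> keyword s
        self.media = defaultdict(set)  # λ_k : ID -> P(M)

    def insert(self, cid, keyword, parent="root"):
        self.keyword[cid] = keyword
        self.parent[cid] = parent

    def attach(self, cid, medium):
        self.media[cid].add(medium)

    def accessible(self, cid):
        """Media reachable from cid, including all subcategories."""
        result = set(self.media[cid])
        for child, p in list(self.parent.items()):
            if p == cid:
                result |= self.accessible(child)
        return result

idx = CategoryIndex()
idx.insert("k1", "buddhist-art")
idx.insert("k2", "murals", parent="k1")
idx.attach("k1", "m1")
idx.attach("k2", "m2")
```

A dictionary of parent links is enough here because the order o is a tree; a general graph would need cycle detection.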
9. Formalized Localization:
Local Access Relation
Global access relation: l_global := {K_global, O_global, λ_global}
Local access relation: l_local := {K_local, O_local, λ_local}
Operations on category schemata:
– insert
– delete
– rename
Operations on media relations:
– insert
– delete
[Diagram: category trees before and after inserting a category k_η, deleting k_η, renaming k_2, and inserting/deleting a media link λ_2,3]
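The five schema operations listed above (insert/delete/rename on categories, insert/delete on media links) can be sketched on a simple local schema. The class and method names are hypothetical, chosen only to mirror the operations named on the slide.

```python
class LocalSchema:
    """A local access relation l_local = {K_local, O_local, λ_local},
    flattened here to a category map plus a set of media links."""

    def __init__(self):
        self.categories = {}  # id -> keyword (K_local)
        self.links = set()    # (category_id, media_id) pairs (λ_local)

    def insert_category(self, cid, keyword):
        self.categories[cid] = keyword

    def rename_category(self, cid, keyword):
        self.categories[cid] = keyword  # id is kept, only the keyword s changes

    def delete_category(self, cid):
        self.categories.pop(cid, None)
        self.links = {(c, m) for c, m in self.links if c != cid}

    def insert_link(self, cid, mid):
        self.links.add((cid, mid))

    def delete_link(self, cid, mid):
        self.links.discard((cid, mid))

s = LocalSchema()
s.insert_category("k2", "sculpture")
s.insert_link("k2", "m3")
s.rename_category("k2", "statues")
```

Deleting a category also drops its media links, which keeps λ_local consistent with K_local after every operation.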
10. Addressing:
an IS View
[Diagram: Transcript represented & discussed in context]
Examples and requirements on IS:
– Representations of multimedia content in context
– Options for legitimate peripheral participation
– (Asynchronous) cooperation support
⇒ Implications for the management of metadata:
1. Metadata-driven adaptation of devices
2. Profiles
3. User preferences and usage histories
11. Addressing
Discussions: dt_p := (t_p, δ(t_p)), with δ_t : T → M_Text
Hypermedia documents: h_{t,l,d} := (t_u, l_v, dt_w, µ(h)), u, v, w ∈ ℕ₀
Discussions on hypermedia documents: dh_{t,l,d} := (h_{t,l,d}, δ_h(h_{t,l,d})), with δ_h : H → M_Text
Context-aware addressing of multimedia content: α_l : T → I
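The addressing formalism can be sketched as nested data structures: a discussion attaches a textual thread δ(t_p) to a transcript, and a hypermedia document bundles transcripts, localizations, and discussions with its own metadata µ(h). All names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Discussion:
    transcript_id: str
    posts: List[str] = field(default_factory=list)  # δ(t_p): textual contributions

@dataclass
class HypermediaDocument:
    transcripts: List[str]         # the t_u components
    localizations: List[str]       # the l_v components
    discussions: List[Discussion]  # the dt_w components
    meta: str                      # µ(h): textual metadata on the document

doc = HypermediaDocument(
    transcripts=["t1"],
    localizations=["l1"],
    discussions=[Discussion("t1", ["Looks like an intra-media transcription."])],
    meta="Bamiyan west niche, slide set",
)
```

Discussions on the document as a whole (dh) would wrap a HypermediaDocument the same way Discussion wraps a transcript.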
12. Validation of the Concept
[Diagram: development cycle of socio-technical information systems, collaborative adaptive learning platforms, and self-observation tools for communities (measure, analyze, simulate)]
– CESE: Hypertext Environment for Talmud tractates
– NMV & MEDINA: Dublin Core & MPEG-7 based Media Tagging
– MARS: Transcription of electro-acoustic music
– ACIS: GIS-Multimedia Management for Cultural Heritage
– SOCRATES: Chat for Communities of Aphasics
– MIST: Non-linear Story-Telling
– MECCA: Collaborative Screening of Movies
– VEL 2.0: Virtual Entrepreneurship Lab
– PROLEARN Academy, GRAECULUS, Multimedia Metadata, CUELC, Bamiyan Valley: Community Portals
Klamma, Spaniol, Cao: MPEG-7 Compliant Community Hosting, JUKM, Springer 2006
13. MIST – Multimedia Management
• Creation and management of:
– Media collections
– Metadata
– Media variations
• Collaborative indexing based on:
– Free-text annotations (à la Flickr)
– Semantic MPEG-7 basetypes: Agent, Event, Concept, Object, Place, Time, State
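Collaborative indexing with the seven MPEG-7 semantic basetypes listed above can be sketched as typed annotations on media items. The storage layout and function name are assumptions for illustration; only the seven basetype names come from the slide.

```python
# The seven MPEG-7 semantic basetypes used by MIST for indexing.
BASETYPES = {"Agent", "Event", "Concept", "Object", "Place", "Time", "State"}

def annotate(index, media_id, basetype, value):
    """Attach a typed semantic annotation to a media item,
    rejecting anything that is not a known basetype."""
    if basetype not in BASETYPES:
        raise ValueError(f"not an MPEG-7 semantic basetype: {basetype}")
    index.setdefault(media_id, []).append((basetype, value))

index = {}
annotate(index, "m1", "Place", "Bamiyan valley")
annotate(index, "m1", "Agent", "UNESCO survey team")
```

Free-text annotations ("à la Flickr") would simply bypass the basetype check, which is why the typed variant recalls higher-quality metadata.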
14. MIST – Story Creation
Structured access to the contents of the multimedia archive
"Mapping" of (sub-)problems to Begin-Middle-End sequences of artifacts
Illustration of episodic knowledge
– Modeling of non-linear stories based on Movement Oriented Design (MOD)
[Sharda 2005]; cooperation with Victoria University Melbourne, Australia
– Decomposition of stories according to a problem hierarchy
– Recall of semantically high-quality metadata on multimedia artifacts
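A non-linear story in the Begin-Middle-End spirit of Movement Oriented Design can be sketched as a small directed graph over artifacts. The node names, graph layout, and path enumeration are illustrative assumptions, not the MIST data model.

```python
# A toy non-linear story: one Begin node, two alternative Middle nodes,
# one End node; each node points at a multimedia artifact.
story = {
    "begin":   {"artifact": "m_intro",   "next": ["probe", "history"]},
    "probe":   {"artifact": "m_probe",   "next": ["end"]},
    "history": {"artifact": "m_history", "next": ["end"]},
    "end":     {"artifact": "m_summary", "next": []},
}

def paths(node="begin", trail=()):
    """Enumerate all access sequences through the story graph."""
    trail = trail + (node,)
    if not story[node]["next"]:
        return [trail]
    out = []
    for nxt in story[node]["next"]:
        out.extend(paths(nxt, trail))
    return out
```

Each path is one possible access sequence; attaching "success vs. failure" outcomes to paths (as slide 15 describes) is then a matter of labeling the enumerated sequences.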
15. MIST – Story Consumption
[Screenshot: story player showing the currently selected artifact, optional successor sequences, MPEG-7 based annotations, and free-text annotations]
Non-linear access sequences in multimedia stories,
i.e. "success vs. failure" depending on the access sequence
Context information via multimedia annotations
16. M. Spaniol, Y. Cao, R. Klamma, P. Moreno-Ger, B. Fernández Manjón, J. L. Sierra, G. Toubekis:
From Story-Telling to Educational Gaming: The Bamiyan Valley Case,
in F. Li, J. Zhao, T. K. Shih, R. Lau, Q. Li, D. McLeod (Eds.): Advances in Web Based Learning - ICWL 2008
<e-Adventure> Editor: authoring environment for point-and-click adventure games
– Refinement of the raw educational game derived from the non-linear story
– Specification of items and character references
– Definition of dialogs
⇒ Fully featured adventure game for vocational training
17. Conclusions & Outlook
Cross-mediality for collaborative learning platforms supports
– Evolutionary development of multimedia learning services
– Configuration and maintenance of multimedia learning repositories
Semantic self-organization of learning communities supported by
– Flexible semantic enrichment of complex processes based on multimedia artifacts
– Re-contextualization of multimedia knowledge in stories & games
Ongoing and future work
– Mobile storytelling based on a simplified mobile user interface
– Use of templates for supporting inexperienced storytellers
– Mashing up different stories to create and share innovative knowledge
18. Join us!
http://www.role-project.eu/
LinkedIn
http://www.linkedin.com/groupInvitation?gid=1590487