An LMS (Learning Management System) is a digital platform that combines online tools and services to provide enhanced learning experiences. It allows schools to manage administration functions, communication tools, planning resources and digital content repositories in one online environment. Developing interoperable LMS systems ensures digital content and teacher resources can be easily shared across schools and different LMS platforms. Going it alone without an integrated LMS requires schools to manually coordinate and maintain multiple disparate online tools, increasing costs and reducing functionality compared to an off-the-shelf LMS.
The document provides a summary of a lecture on CSCW in times of change and social media. The lecture discusses how CSCW and social media are transforming organizations into networked structures and how personalization of data is enabling personalized paths for consumers. It also explores applications of these changes in domains like science and health, and outlines future challenges in areas like open science, linked data, and mobile technologies.
Recommendations for Open Online Education: An Algorithmic Study (Hendrik Drachsler)
Recommending courses to students on online platforms has been studied widely. Almost all studies target closed platforms that belong to a university or some other educational provider, which makes the course recommenders situation-specific. Over the last years, a demand has developed for recommender systems that suit open online platforms. These platforms share some common characteristics, such as the lack of rich user profiles with content metadata; instead, they log user interactions within the platform that can be used for analysis and personalization. In this paper, we investigate how user interactions and activities tracked within open online learning platforms can be used to provide recommendations. We present a study of several state-of-the-art recommender algorithms, including a graph-based recommender approach, using data from the OpenU open online learning platform in use at the Open University of the Netherlands. The results show that user-based and memory-based methods perform better than model-based and factorization methods. In particular, the graph-based recommender outperforms the classical approaches on prediction accuracy of recommendations in terms of recall. We conclude that, if the algorithms are chosen wisely, recommenders can contribute to a better experience for learners in open online courses.
Soude Fazeli, Enayat Rajabi, Leonardo Lezcano, Hendrik Drachsler, Peter Sloep
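As a rough illustration of the user-based, memory-based family of methods the study favours, the sketch below recommends unseen courses by overlap between users' interaction sets and evaluates the result with recall. The data, the plain-overlap similarity, and all function names are invented for this example; they are not taken from the OpenU platform or the paper's actual algorithms.

```python
from collections import defaultdict

def user_based_recommend(interactions, target, k=3):
    """Recommend up to k unseen courses for `target`, scoring each course by
    the interaction overlap of the users who took it (a stand-in for the
    cosine/Pearson similarities typically used in user-based CF)."""
    seen = interactions[target]
    scores = defaultdict(int)
    for user, courses in interactions.items():
        if user == target:
            continue
        overlap = len(seen & courses)
        for course in courses - seen:
            scores[course] += overlap
    # Rank by score (ties broken alphabetically for determinism).
    ranked = sorted(scores, key=lambda c: (-scores[c], c))
    return ranked[:k]

def recall_at_k(recommended, relevant):
    """Fraction of held-out relevant items that appear in the top-k list."""
    return len(set(recommended) & relevant) / len(relevant)

# Toy interaction log: user -> set of course ids.
interactions = {
    "u1": {"math", "stats", "python"},
    "u2": {"math", "stats", "ml"},
    "u3": {"python", "ml", "databases"},
}
recs = user_based_recommend(interactions, "u1", k=2)
print(recs, recall_at_k(recs, {"ml"}))  # → ['ml', 'databases'] 1.0
```

Swapping the overlap count for a graph walk over a user–course bipartite graph would move this sketch toward the graph-based approach the paper reports as strongest on recall.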
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent (Hendrik Drachsler)
This document discusses the use of Experience API (xAPI) statements and metadata standards for learning analytics. It provides an example of an xAPI statement in JSON format and describes how xAPI works by sending statements about learning activities and experiences to a Learning Record Store. It also summarizes the Dutch xAPI specification for learning activities (DSLA), which includes a registry and repository of xAPI statements to support interoperability. The document advocates the adoption of xAPI and DSLA to facilitate learning analytics and data sharing across systems in the Netherlands.
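To make the statement format concrete, here is a minimal sketch of an xAPI statement as a Python dictionary serialized to JSON. It follows the actor/verb/object shape of the xAPI specification, but the learner, course, and activity URI below are made up for illustration and are not entries from the DSLA registry; a real deployment would POST this payload to a Learning Record Store rather than print it.

```python
import json

# Minimal xAPI statement: actor / verb / object.
# Learner and course identifiers are illustrative only.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/courses/analytics-101",
        "definition": {"name": {"en-US": "Analytics 101"}},
    },
}

# In practice this JSON is POSTed to an LRS statements endpoint,
# with an X-Experience-API-Version header identifying the spec version.
payload = json.dumps(statement)
print(payload)
```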
Presentation given at Serious Request 2015, #SR15, Heerlen.
Within the Open University we started a 12-hour marathon lecture to collect money for the charity campaign of radio station 3FM. The money raised will go to the Red Cross and support young people in conflict areas.
The LACE project connects players in the fields of learning analytics and educational data mining to support the development of a European community and share best practices. The project aims to promote knowledge sharing, increase the evidence base of learning analytics, and contribute to defining its future directions. Key activities include organizing events, creating a knowledge base of learning analytics evidence, and producing reviews on latest developments in the field.
Presentation given at the PELARS Policy event, Brussels, 09.11.2016. A follow-up to the first LACE Policy event in April 2015. Special focus is on the exploitation and sustainability activities for LACE in the LACE SIG of SoLAR.
Privacy and Analytics – it’s a DELICATE Issue. A Checklist for Trusted Learni... (Hendrik Drachsler)
The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases has even been reversed, following concerns by governments, stakeholders and civil rights groups about privacy and ethics in the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitation among institutional managers who aim to innovate their institution’s learning support by implementing data and analytics with a view to improving student success. In this presentation, we try to get to the heart of the matter by analysing the most common views and the propositions made by the LA community to address them. We conclude with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics.
This document introduces Pig Latin, a new language designed for analyzing extremely large datasets. Pig Latin aims to fill the gap between the declarative style of SQL and the procedural style of MapReduce. It compiles programs into physical plans executed over Hadoop. The language allows for a flexible data model, user-defined functions, and operates directly on files without requiring data import. Pig Latin is being used by engineers at Yahoo to more easily analyze terabytes of collected data.
This document discusses deep web searching (DWS). It begins with an abstract that explains how the deep web is growing rapidly and there is interest in techniques to efficiently locate deep-web interfaces. The document then discusses text clustering to group documents based on user-inputted keywords. It proposes using a fuzzy-logic model and self-organized mapping (SOM) algorithm to cluster documents. It also discusses using WordNet as a lexical database and the system architecture, data flow diagram, and implementation of the deep web searching system.
2004 national science foundation knowledge management and network collaborati... (Christopher Thorn)
The document discusses collaboration tools and knowledge management for the SCALE project. It describes basic requirements like document versioning and tools to organize work. It also mentions databases for student and teacher data. A section on project workspaces explains how they integrate project work habits, provide a knowledge base and intelligent search. Examples of educational customers and use cases are provided at the end.
This is an introduction to the reusable technology solutions developed by the rapid innovation projects of the UK OER Programme during 2012. Bidders were asked to address problems identified through the Programme, and 15 UK university-based projects were awarded between £13,000 and £25,000 each over 6 months. They have developed a range of solutions to enhance the digital infrastructure to support open content in an educational context. Projects worked in an open innovation way, blogging as they went, working with peers and users, and the outputs are all open source, documented and reusable. Links are provided to each project output.
Slides created by JISC: Programme Manager Amber Thomas, Programme Office Alicja Shah, Technical Advisory JISC Cetis particularly Martin Hawksey. Dandelion Clock sourced through flickr and attributed on the front slide.
Academic Resources Architecture Framework Planning using ERP in Cloud Computing (IRJET Journal)
This document discusses an academic resources architecture framework for planning and using enterprise resource planning (ERP) systems in cloud computing. The framework is designed to meet the needs of schools, colleges, and universities by automating administrative tasks and streamlining processes. The framework includes three main service models - software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The framework aims to improve transparency, productivity, and control through automation, leading to higher overall efficiency for educational institutions. It also discusses using cloud-based e-learning and learning analytics to improve current e-learning systems that lack appropriate infrastructure and integrated applications.
The webinar discussed reasons why enterprise content management (ECM) projects often fail and provided recommendations for success. It covered topics such as how lack of planning, communication, innovation and experience can lead to failure; the importance of properly aligning technology to requirements; and tips for a successful project like being agile, iterative, flexible, empowering developers, maintaining control, and asking for expert guidance. The webinar emphasized that both technology and process are important for ECM project success.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
This document discusses adopting aspect-oriented programming (AOP) in enterprise-wide computing. It provides a brief history of AOP, from its inception at Xerox PARC in the 1990s to the development of AspectJ in the late 1990s. It then reviews related work studying the benefits and challenges of using AOP, such as improved modularity and separation of concerns but also increased complexity. Many studies found quantitative benefits to maintenance from AOP but challenges in adoption. The document concludes by discussing uses of AOP in enterprises, noting both benefits like modularizing cross-cutting concerns, but also challenges such as difficulties aspectizing concurrency and failures.
This document provides reviews of 3 research papers on distributed systems. The reviews were created by following a structured format including the paper title, authors, main idea, results, impact, evidence, prior work, and ideas for future work. For the first paper, the summary discusses analyzing the cost and resource optimization of running real-life applications on an open source cloud. The second paper proposes a software testing framework called IVRIDIO to provide test-first performance as a cloud service. The third paper presents a formal approach to developing fault tolerant distributed systems using refinement techniques.
LavaCon 2011: Content Life Cycle Strategic Compass (ClearPath, LLC)
The document discusses the content life cycle (CLC) and how it can help identify business requirements for how content is treated throughout its lifecycle. It provides examples of CLC models and recommends holding a workshop to map out the current content process, including the roles, products, and tools involved. The workshop aims to optimize the content workflow and inform the selection of a content management system. The results of one such workshop identified business needs, removed content silos, defined an end-to-end CLC, and focused on standardizing roles, products, and transitioning authoring technologies to better support publishing to multiple formats and audiences.
A collection of user interface design patterns for workflow information systems is presented that contains forty-three resource patterns classified in seven categories. These categories and their corresponding patterns have been logically identified from the task life cycle based on offering and allocation operations. Each Workflow User Interface Pattern (WUIP) is characterized by properties expressed in the PLML markup language for expressing patterns and augmented by additional attributes and models attached to the pattern: the abstract user interface and the corresponding task model. These models are specified in a User Interface Description Language. All WUIPs are stored in a library and can be retrieved within a workflow editor that links each workflow pattern to its corresponding WUIP, thus giving rise to a user interface for each workflow pattern.
This document discusses the limitations of scripting for automating complex business processes and the benefits of shifting to dynamic, intelligent process automation software. It outlines how scripting becomes more difficult and costly to develop and maintain as processes increase in complexity. Automation software provides greater capabilities for adapting to changing conditions, centralized management, and cost savings through reduced development and maintenance costs compared to scripting. Examples demonstrate how automation software can leverage existing scripts while avoiding issues like code abandonment and reducing costs.
Laran Evans is a software developer and technical leader seeking new opportunities. He has over 15 years of experience leading development teams and managing complex software projects. His background includes roles managing custom application development, overseeing a Kuali financial system implementation, and leading the development of several modules for Cornell University's financial applications.
Duc M. Le is a PhD candidate in computer science at USC studying software architecture and mining software repositories. He has experience in software design, development, and data mining. His research focuses on analyzing architectural changes in open source software systems and predicting potential bugs. He has worked as an intern at several companies including NEC Labs, Veritas, and Samsung Research America.
This document provides an overview of tools for developing applications using Resource Description Framework (RDF) and Topic Maps technologies. It classifies these tools into three categories: storage, editing, and visualization. The document aims to compare these tools on various parameters to help researchers and users select the most appropriate one for their needs. It argues that while these technologies can enrich web content with semantic information, RDF and Topic Maps differ in their approaches and architectures, which can hinder interoperability. The comparison of tools presented in this paper seeks to address the interoperability problem between the two technologies and provide insight into how their tools can be used together.
Modern enterprise software systems entail many challenges, such as rapidly changing business scenarios, increasing complexity, shorter time to market, and the need to provide business agility without compromising on quality. Ensuring productivity, quality, consistency, cost effectiveness, and reduced cycle time has become a mandate for teams dealing with modern enterprise software systems. Fluent Interfaces are a powerful technique that helps in taming programming complexity, reducing boilerplate code, and increasing quality, thereby improving productivity and cycle time. In this paper we describe some of our explorations in Fluent Interfaces and why we feel the notion of Fluent Interfaces is a useful technique for enterprise software systems. We are currently focusing on two things: a technique for determining the fluency of an API, and a methodology for designing a fluent interface. We also share some of the benefits and limitations that we observed during our experimentation. We conclude this paper with a note on our current work and future directions.
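The paper's own examples are not reproduced here, but the core idea of a fluent interface can be sketched in a few lines: each method returns the receiver, so calls chain into a single sentence-like expression instead of repeated statements. The query builder below is a hypothetical illustration of the technique, not the authors' API.

```python
class Query:
    """A tiny fluent query builder: every mutating method returns self,
    so calls chain into one readable expression."""

    def __init__(self, table):
        self._table = table
        self._filters = []
        self._order = None
        self._limit = None

    def where(self, condition):
        self._filters.append(condition)
        return self

    def order_by(self, column):
        self._order = column
        return self

    def limit(self, n):
        self._limit = n
        return self

    def build(self):
        # Assemble the SQL string from the accumulated clauses.
        sql = f"SELECT * FROM {self._table}"
        if self._filters:
            sql += " WHERE " + " AND ".join(self._filters)
        if self._order:
            sql += f" ORDER BY {self._order}"
        if self._limit is not None:
            sql += f" LIMIT {self._limit}"
        return sql

query = (Query("orders")
         .where("total > 100")
         .where("status = 'open'")
         .order_by("total")
         .limit(10)
         .build())
print(query)
```

One informal fluency test in the spirit of the abstract: the chained call should read aloud as a coherent sentence ("query orders where total > 100 ... limit 10"); an API that fails that reading is a candidate for redesign.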
A COMPOSITE DESIGN PATTERN FOR SERVICE INJECTION AND COMPOSITION OF WEB SERVI... (ijwscjournal)
In this paper we present a Service Injection and Composition Design Pattern for unstructured peer-to-peer networks, which is designed with aspect-oriented design patterns and is an amalgamation of the Strategy, Worker Object, and Check-List design patterns used to design self-adaptive systems. It applies self-reconfiguration planes dynamically, without the interruption or intervention of an administrator, to handle service failures at the servers. When a client requests a complex service, service composition is performed to fulfil the request. If a service is not available in memory, it is injected as Aspectual Feature Module code. We used Service-Oriented Architecture (SOA) with Web Services in Java to implement the composite design pattern. As far as we know, there are no studies on the composition of design patterns for the peer-to-peer computing domain. The pattern is described using a Java-like notation for the classes and interfaces, and simple UML class and sequence diagrams are depicted.
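The paper works in Java with a Java-like notation; as a language-neutral sketch, the Strategy part of such a composite pattern can be illustrated with a broker that falls back to an "injected" strategy when a requested service is missing, so no administrator has to intervene. All class names here are hypothetical, and the real pattern injects Aspectual Feature Module code rather than a plain class.

```python
class LocalService:
    """Strategy: serve a request from a locally registered service."""
    def handle(self, request):
        return f"local:{request}"

class InjectedService:
    """Strategy: stand-in for an implementation injected on demand
    (the paper injects Aspectual Feature Module code at this point)."""
    def handle(self, request):
        return f"injected:{request}"

class ServiceBroker:
    """Context: selects a strategy per request, falling back to
    injection when no local service is registered."""
    def __init__(self):
        self._services = {}

    def register(self, name, strategy):
        self._services[name] = strategy

    def handle(self, name, request):
        strategy = self._services.get(name, InjectedService())
        return strategy.handle(request)

broker = ServiceBroker()
broker.register("search", LocalService())
print(broker.handle("search", "q1"))   # served by the registered strategy
print(broker.handle("billing", "q2"))  # unregistered, so the fallback strategy runs
```

Because the broker only depends on the `handle` interface, new service behaviours can be swapped in without touching client code, which is the property the composite pattern exploits for self-adaptation.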
This document discusses web accessibility and the challenges of implementing accessibility standards. It summarizes key aspects of web accessibility including:
- Laws requiring public websites to comply with accessibility standards
- The WCAG 2.0 guidelines which define principles, guidelines, and success criteria for accessible websites
- Tools like WAI-ARIA that add attributes to make websites accessible to assistive technologies
- A study that measured developers' awareness and use of accessibility standards, finding only partial compliance and awareness
- Challenges to wider adoption including lack of awareness, unclear responsibilities, and perceptions that standards are too time-consuming
This document summarizes Session 3 of a web accessibility workshop. It discusses alternative input devices such as alternative keyboards, pointing devices, switches, and on-screen keyboards. It also covers other assistive technologies like braille embossers and displays, screen magnification software, text-to-speech programs, speech recognition, and word processors for individuals with disabilities. Examples and images are provided for many of the assistive technologies.
Similar to Timeliner poster on CSCW 2012 conference
This document introduces Pig Latin, a new language designed for analyzing extremely large datasets. Pig Latin aims to fill the gap between the declarative style of SQL and the procedural style of MapReduce. It compiles programs into physical plans executed over Hadoop. The language allows for a flexible data model, user-defined functions, and operates directly on files without requiring data import. Pig Latin is being used by engineers at Yahoo to more easily analyze terabytes of collected data.
This document discusses deep web searching (DWS). It begins with an abstract that explains how the deep web is growing rapidly and there is interest in techniques to efficiently locate deep-web interfaces. The document then discusses text clustering to group documents based on user-inputted keywords. It proposes using a fuzzy-logic model and self-organized mapping (SOM) algorithm to cluster documents. It also discusses using WordNet as a lexical database and the system architecture, data flow diagram, and implementation of the deep web searching system.
2004 national science foundation knowledge management and network collaborati...Christopher Thorn
The document discusses collaboration tools and knowledge management for the SCALE project. It describes basic requirements like document versioning and tools to organize work. It also mentions databases for student and teacher data. A section on project workspaces explains how they integrate project work habits, provide a knowledge base and intelligent search. Examples of educational customers and use cases are provided at the end.
This is an introduction to the reusable technology solutions developed by the rapid innovation projects of the UK OER Programme during 2012. Bidders were asked to address problems identified through the Programme, and 15 UK university-based projects were awarded between £13,000 and £25,000 each over 6 months. They have developed a range of solutions to enhance the digital infrastructure to support open content in an educational context. Projects worked in an open innovation way, blogging as they went, working with peers and users, and the outputs are all open source, documented and reusable. Links are provided to each project output.
Slides created by JISC: Programme Manager Amber Thomas, Programme Office Alicja Shah, Technical Advisory JISC Cetis particularly Martin Hawksey. Dandelion Clock sourced through flickr and attributed on the front slide.
Academic Resources Architecture Framework Planning using ERP in Cloud ComputingIRJET Journal
This document discusses an academic resources architecture framework for planning and using enterprise resource planning (ERP) systems in cloud computing. The framework is designed to meet the needs of schools, colleges, and universities by automating administrative tasks and streamlining processes. The framework includes three main service models - software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). The framework aims to improve transparency, productivity, and control through automation, leading to higher overall efficiency for educational institutions. It also discusses using cloud-based e-learning and learning analytics to improve current e-learning systems that lack appropriate infrastructure and integrated applications.
The webinar discussed reasons why enterprise content management (ECM) projects often fail and provided recommendations for success. It covered topics such as how lack of planning, communication, innovation and experience can lead to failure; the importance of properly aligning technology to requirements; and tips for a successful project like being agile, iterative, flexible, empowering developers, maintaining control, and asking for expert guidance. The webinar emphasized that both technology and process are important for ECM project success.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
This document discusses adopting aspect-oriented programming (AOP) in enterprise-wide computing. It provides a brief history of AOP, from its inception at Xerox PARC in the 1990s to the development of AspectJ in the late 1990s. It then reviews related work studying the benefits and challenges of using AOP, such as improved modularity and separation of concerns but also increased complexity. Many studies found quantitative benefits to maintenance from AOP but challenges in adoption. The document concludes by discussing uses of AOP in enterprises, noting both benefits like modularizing cross-cutting concerns, but also challenges such as difficulties aspectizing concurrency and failures.
This document provides reviews of 3 research papers on distributed systems. The reviews were created by following a structured format including the paper title, authors, main idea, results, impact, evidence, prior work, and ideas for future work. For the first paper, the summary discusses analyzing the cost and resource optimization of running real-life applications on an open source cloud. The second paper proposes a software testing framework called IVRIDIO to provide test-first performance as a cloud service. The third paper presents a formal approach to developing fault tolerant distributed systems using refinement techniques.
LavaCon 2011: Content Life Cycle Strategic CompassClearPath, LLC
The document discusses the content life cycle (CLC) and how it can help identify business requirements for how content is treated throughout its lifecycle. It provides examples of CLC models and recommends holding a workshop to map out the current content process, including the roles, products, and tools involved. The workshop aims to optimize the content workflow and inform the selection of a content management system. The results of one such workshop identified business needs, removed content silos, defined an end-to-end CLC, and focused on standardizing roles, products, and transitioning authoring technologies to better support publishing to multiple formats and audiences.
A collection of user interface design patterns for workflow information systems is presented that contains forty-three resource patterns classified in seven categories. These categories and their corresponding patterns have been logically identified from the task life cycle based on offering and allocation operations. Each Workflow User Interface Pattern (WUIP) is characterized by properties expressed in the PLML markup language for expressing patterns and augmented by additional attributes and models attached to the pattern: the abstract user interface and the corresponding task model. These models are specified in a User Interface Description Language. All WUIPs are stored in a library and can be retrieved within a workflow editor that links each workflow pattern to its corresponding WUIP, thus giving rise to a user interface for each workflow pattern.
This document discusses the limitations of scripting for automating complex business processes and the benefits of shifting to dynamic, intelligent process automation software. It outlines how scripting becomes more difficult and costly to develop and maintain as processes increase in complexity. Automation software provides greater capabilities for adapting to changing conditions, centralized management, and cost savings through reduced development and maintenance costs compared to scripting. Examples demonstrate how automation software can leverage existing scripts while avoiding issues like code abandonment and reducing costs.
Laran Evans is a software developer and technical leader seeking new opportunities. He has over 15 years of experience leading development teams and managing complex software projects. His background includes roles managing custom application development, overseeing a Kuali financial system implementation, and leading the development of several modules for Cornell University's financial applications.
Duc M. Le is a PhD candidate in computer science at USC studying software architecture and mining software repositories. He has experience in software design, development, and data mining. His research focuses on analyzing architectural changes in open source software systems and predicting potential bugs. He has worked as an intern at several companies including NEC Labs, Veritas, and Samsung Research America.
This document provides an overview of tools for developing applications using Resource Description Framework (RDF) and Topic Maps technologies. It classifies these tools into three categories: storage, editing, and visualization. The document aims to compare these tools on various parameters to help researchers and users select the most appropriate one for their needs. It argues that while these technologies can enrich web content with semantic information, RDF and Topic Maps differ in their approaches and architectures, which can hinder interoperability. The comparison of tools presented in this paper seeks to address the interoperability problem between the two technologies and provide insight into how their tools can be used together.
Modern enterprise software systems entail many challenges, such as rapidly changing business scenarios, increasing complexity, shorter time to market, and providing business agility without compromising on quality. Ensuring productivity, quality, consistency, cost effectiveness, and reduced cycle time has become a mandate for teams dealing with modern enterprise software systems. Fluent Interfaces are a powerful technique that helps tame programming complexity, reduce boilerplate code, and increase quality, thereby improving productivity and cycle time. In this paper we describe some of our explorations in Fluent Interfaces and why we feel the notion of Fluent Interfaces is a useful technique for enterprise software systems. We are currently focusing on two things: a technique for determining the fluency of an API, and a methodology for designing a fluent interface. We also share some of the benefits and limitations that we observed during our experimentation. We conclude this paper with a note on our current work and future directions.
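The core mechanic behind a fluent interface is method chaining. The sketch below is a generic illustration (not code from the paper): each builder method returns the object itself, so a configuration that would otherwise need several boilerplate statements reads as one expression.

```python
# A minimal fluent-interface sketch (illustrative, not the paper's API):
# each method returns the builder itself, so calls chain into a single
# readable expression instead of repeated boilerplate statements.
class QueryBuilder:
    def __init__(self, table):
        self._table = table
        self._wheres = []
        self._order = None

    def where(self, condition):
        self._wheres.append(condition)
        return self  # returning self is what makes the API "fluent"

    def order_by(self, column):
        self._order = column
        return self

    def build(self):
        sql = f"SELECT * FROM {self._table}"
        if self._wheres:
            sql += " WHERE " + " AND ".join(self._wheres)
        if self._order:
            sql += f" ORDER BY {self._order}"
        return sql

query = (QueryBuilder("orders")
         .where("amount > 100")
         .where("status = 'open'")
         .order_by("created_at")
         .build())
# query == "SELECT * FROM orders WHERE amount > 100 AND status = 'open' ORDER BY created_at"
```

One rough measure of fluency, in the spirit of the paper's first focus area, is how far such chains can go before an intermediate variable or cast is forced on the caller.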
A COMPOSITE DESIGN PATTERN FOR SERVICE INJECTION AND COMPOSITION OF WEB SERVI... (ijwscjournal)
In this paper we present a Service Injection and Composition Design Pattern for unstructured peer-to-peer networks, which is designed with aspect-oriented design patterns and an amalgamation of the Strategy, Worker Object, and Check-List design patterns used to design self-adaptive systems. It applies self-reconfiguration planes dynamically, without interruption or intervention by the administrator, to handle service failures at the servers. When a client requests a complex service, service composition is done to fulfil the request. If a service is not available in memory, it is injected as Aspectual Feature Module code. We used Service-Oriented Architecture (SOA) with Web Services in Java to implement the composite design pattern. As far as we know, there are no studies on the composition of design patterns for the peer-to-peer computing domain. The pattern is described using a Java-like notation for the classes and interfaces. Simple UML class and sequence diagrams are depicted.
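The Strategy part of such a composite can be sketched briefly. The class and method names below are hypothetical, chosen only to illustrate the idea the abstract describes: a composer tries a local registry first and falls back to an injection strategy when a service is missing, without administrator intervention.

```python
# A hedged sketch of the Strategy ingredient of the pattern; names are
# illustrative, not the paper's actual design.
class LocalRegistry:
    """Primary strategy: look the service up in local memory."""
    def __init__(self, services):
        self._services = services

    def find(self, name):
        return self._services.get(name)

class PeerInjection:
    """Fallback strategy: stand-in for injecting the missing service
    (the paper injects it as Aspectual Feature Module code)."""
    def find(self, name):
        return f"injected:{name}"

class ServiceComposer:
    """Tries each strategy in order until one resolves the service."""
    def __init__(self, *strategies):
        self._strategies = strategies

    def compose(self, *names):
        resolved = []
        for name in names:
            for strategy in self._strategies:
                service = strategy.find(name)
                if service is not None:
                    resolved.append(service)
                    break
        return resolved

composer = ServiceComposer(LocalRegistry({"auth": "local:auth"}), PeerInjection())
print(composer.compose("auth", "billing"))  # ['local:auth', 'injected:billing']
```

Because each lookup mechanism sits behind the same `find` interface, new recovery behaviors can be swapped in without touching the composition logic, which is what makes the Strategy pattern a natural fit for self-adaptive systems.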
Similar to Timeliner poster on CSCW 2012 conference
This document discusses web accessibility and the challenges of implementing accessibility standards. It summarizes key aspects of web accessibility including:
- Laws requiring public websites to comply with accessibility standards
- The WCAG 2.0 guidelines which define principles, guidelines, and success criteria for accessible websites
- Tools like WAI-ARIA that add attributes to make websites accessible to assistive technologies
- A study that measured developers' awareness and use of accessibility standards, finding only partial compliance and awareness
- Challenges to wider adoption including lack of awareness, unclear responsibilities, and perceptions that standards are too time-consuming
This document summarizes Session 3 of a web accessibility workshop. It discusses alternative input devices such as alternative keyboards, pointing devices, switches, and on-screen keyboards. It also covers other assistive technologies like braille embossers and displays, screen magnification software, text-to-speech programs, speech recognition, and word processors for individuals with disabilities. Examples and images are provided for many of the assistive technologies.
The document summarizes key points from a workshop on web accessibility:
1. The workshop covered principles of accessible design including people first, clear purpose, solid structure, and universal usability.
2. Attendees participated in exercises on responsive design and discussed their experience testing websites' responsiveness.
3. Guidelines for accessible web development were reviewed, such as using proper headings, lists and form labels, providing alt text for images, and ensuring adequate color contrast.
This document provides an overview of a web accessibility workshop held on March 3rd, 2016. The workshop covered several topics related to web accessibility, including target user groups, empathy exercises using the web with limited abilities, frameworks for accessible web design, and preparing for WCAG 2.0 testing. The workshop was led by Vladimir Tomberg, PhD, from Tallinn University. Participants were provided login credentials to use the Mac lab and instructions for completing in-class and home assignments. The workshop sessions included discussions of target user groups who benefit from accessible design, an empathy exercise using the web with limited abilities, an introduction to the principles of accessible web design, and an overview of WCAG 2.0 guidelines for web content accessibility testing.
This document summarizes a lecture on universal design principles. It discusses key principles including equitable use, flexibility in use, simple and intuitive use, perceptible information, tolerance for error, low physical effort, size and space for approach and use, and a community of users of mixed abilities. For each principle, the document provides definitions, examples, and strategies for applying the principle in design. The goal of universal design is to create products, environments and systems that can be used by people of all abilities to the greatest extent possible without need for adaptation.
This document contains a lecture on design for all. It discusses various human characteristics related to mobility and dexterity such as reach and stretch, dexterity, locomotion, and motor capability. It provides examples of inclusive design for talking books, accessible trains, automatic doors, and single-head mixer taps. It concludes by assigning homework to design personas for ideas and implement a low-fidelity prototype due by the next lecture.
This document outlines the agenda and activities for the second lecture of a design course. It includes:
1. Student presentations of empathy maps from observation tasks.
2. Formation of design teams to identify user needs and generate design ideas. Teams will select the most interesting needs and come up with 10 potential solutions for each need, considering various constraints.
3. Presentation and selection of the best design idea by each team, incorporating audience feedback. Teams will report their ideas and the lecture will conclude with a survey.
This document provides an overview of a lecture on Design for All. It begins with introductions from the lecturer and a request for students to introduce themselves. The lecturer then explains that the course is about awareness, design thinking, and tools/methods rather than just graphic or web design. Evaluation criteria are also outlined which include workshops, essays, and a design project. Recommended resources like Slideshare and Pinterest are shared. The agenda includes definitions, why Design for All is important, personal human characteristics, and homework assignments.
Customer journey maps provide a framework to improve the customer experience by mapping out key interactions and touchpoints across various stages of the customer journey. They can help increase conversion rates, retention, and generate better ideas by accounting for the user's feelings, motivations and questions at different points. While originally used for customers, journey maps could also benefit learning design by mapping out a learner's experience through a course or activity and identifying problems at transition points to improve the overall experience.
This document discusses universal design principles and strategies through a presentation given by Vladimir Tomberg at Tallinn University. It begins by introducing the speaker and their background in inclusive design. It then discusses how typical designs often only consider a narrow subset of users. The presentation explores the range of human abilities and provides examples of inclusive design solutions. It promotes designing with diverse users in mind through techniques like empathic modeling and accessibility testing. Students found value in experientially understanding the challenges faced by users with different abilities. The goal of universal design is to create equitable experiences for all people.
Exploring Different Routes from LMS towards PLE: a Dialectical Perspective (Vladimir Tomberg)
This document discusses different approaches to personal learning environments (PLEs) as alternatives to learning management systems (LMSs) from a dialectical perspective. It presents a classification of PLEs including desktop-based, social media-based, and widget-based. It then describes three design experiments exploring PLE approaches: LePress which implemented assessment workflows in blogs, EduFeedr which provided course coordination and awareness using blogs, and Dippler which integrated various services into a digital learning ecosystem. Each approach is analyzed in terms of advantages and disadvantages. The conclusion discusses how a dialectical analysis can help overcome binary oppositions to create a synthesized approach using the best aspects of existing systems and innovative alternatives.
This document discusses universal design principles for inclusive design. It begins with introducing the speaker and discussing how most designs are made for a narrow target user, usually young males. It emphasizes that good design should address the wide variety of human abilities. It then outlines several key principles of universal design like equitable use, flexibility, perceptible information, tolerance for error, and low physical effort. Examples are given to illustrate each principle, such as curb cuts that benefit people of all abilities. The document stresses that universal design requires understanding diverse audiences and their varied needs.
Teaching Design for All Through Empathic Modeling: a Case Study in Tallinn Un... (Vladimir Tomberg)
This document describes a case study of using empathic modeling to teach design for all principles to HCI students. Students simulated various disabilities and obstacles as they navigated campus. This helped them gain awareness and empathy for users' experiences. Feedback showed the exercise was engaging and helped students understand how to design inclusively. The study concludes empathic modeling is a good approach for teaching design for all in HCI courses.
Integration data models, Learning Layers project meeting in Bremen (Vladimir Tomberg)
A report on the process of building a common semantic core for data from several Learning Layers applications, for an integrated solution supported by the Social Semantic Server.
This document summarizes Session 4 of a Web Accessibility Workshop. It covers WAI ARIA, including an introduction to ARIA and how it addresses accessibility issues with dynamic content. The core ARIA components are then discussed, including abstract roles, widget roles, document structure roles, and document landmark roles. Examples are provided throughout to illustrate how ARIA adds semantics and information to make interfaces understandable to assistive technologies.
Slides for my short talk at the round table of the ProfsoUX conference in Saint Petersburg, 26 April 2014. The round table was devoted to UX education.
This document summarizes session three of a web accessibility workshop. It discusses alternative input devices such as keyboards designed for one-handed use, foot pedals, pointing devices using eye tracking or head movements, and touch screens. It also covers other assistive technologies like braille embossers and refreshable braille displays, screen reading software, text-to-speech and speech recognition programs, and word processors that provide auditory feedback. The session included demonstrations of assistive technology used in different operating systems.
This document summarizes a workshop on web accessibility. It includes:
- An agenda for the workshop that covers presenting homework, frameworks for accessible web design, responsiveness exercises, discussions on designing accessible web applications, and demonstrations of tools.
- Principles for accessible user experience design that include putting people first, having a clear purpose, solid structure, easy interaction, helpful wayfinding, clean presentation, plain language, accessible media, and universal usability.
- Guidelines for accessible web design such as using proper headings, lists and reading order, providing sufficient color contrast, including alternative text for images, and ensuring usability of links, forms and navigation.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science and technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Webinar: Designing a schema for a Data Warehouse (Federico Razzoli)
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which include databases of any type that back the applications used by the company, data files exported by some applications, and APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, that is, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
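The star schema idea behind several of these topics can be shown in a few lines. The sketch below uses sqlite3 to build one fact table referencing two denormalised dimension tables; table and column names are illustrative, not taken from the webinar.

```python
# A minimal star schema: one fact table whose rows reference
# denormalised dimension tables. Names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    -- The fact table's granularity: one row per product per day.
    CREATE TABLE fact_sales (
        date_id INTEGER REFERENCES dim_date(date_id),
        product_id INTEGER REFERENCES dim_product(product_id),
        units_sold INTEGER,
        revenue REAL
    );
""")
conn.execute("INSERT INTO dim_date VALUES (1, '01', 'Jun', 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 10, 99.90)")

# A typical analytical query joins the fact table to its dimensions.
row = conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY p.category
""").fetchone()
print(row)
```

Keeping the dimensions flat (no further normalisation of, say, category into its own table) is exactly what distinguishes a star schema from the snowflake shape the webinar recommends avoiding.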
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing and ingesting data to the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Main news related to the CCS TSI 2023 (2023/1695) (Jakub Marek)
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Timeliner poster at the CSCW 2012 conference
A Conceptual Model for Collaborative Scientific Writing
David Lamas, Vladimir Tomberg, Mart Laanpere
drl@tlu.ee, vtomberg@tlu.ee, martl@tlu.ee
Having different everyday tools and workflows, how can researchers effectively collaborate online in the process of scientific writing?
Timeliner is an Internet-based tool designed to support collaborative scientific writing. The need for Timeliner arises from daily challenges regarding collaborative scientific writing processes. These challenges have been faced by ourselves and elaborated upon both by our local research community and by members of our international research projects' teams. The problem we address by offering Timeliner is supporting collaborative scientific writing with minimal disruption of current practices and workflows.
INITIAL CONCEPTS
Collaborative writing is common and often unavoidable in academic work, as written documents such as articles, reports, and presentations are mostly produced collaboratively. We consider collaborative writing to be the process of two or more people working together to create a complex document, irrespective of locus or synchronicity. As shown on the concept map, we interpret the process of collaborative writing as a project.
CONNECTING RESOURCES
A final step in our design process was to design Timeliner's system architecture as a mash-up of external data and functionalities provided by third-party services (see figure). This means that Timeliner has almost no concern with storing data, but rather focuses on processing metadata related to external data. By using different types of metadata as glue, Timeliner can bring together, into a new writing ecosystem, resources that were separated before.
[Figure: Timeliner system architecture. A data layer of third-party services accessed through APIs feeds a metadata layer (the Timeliner Multi API Engine); a visualisation layer renders the project timeline with milestones such as the abstract deadline, the writing and review/approval phases, the camera-ready version, and publication; a users layer connects researchers and their local resources.]
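The metadata-as-glue idea described in the poster could be sketched, purely as an illustration (all class and field names here are hypothetical, not taken from Timeliner itself), as metadata records that point at externally stored artefacts and are pulled into one timeline:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """Metadata record for an externally stored artefact.

    The system stores no content itself, only metadata (title, kind,
    tags) plus a URI pointing at the third-party service that holds
    the actual data.
    """
    title: str
    uri: str                 # location of the data in an external service
    kind: str                # e.g. "document", "milestone", "task"
    tags: set = field(default_factory=set)

@dataclass
class Timeline:
    """A writing project represented as a timeline of metadata records."""
    project: str
    resources: list = field(default_factory=list)

    def add(self, res: Resource) -> None:
        self.resources.append(res)

    def by_tag(self, tag: str) -> list:
        # Metadata acts as the "glue": resources held by different
        # services end up in one ecosystem via shared tags.
        return [r for r in self.resources if tag in r.tags]

# Example: a document in one service and a milestone in another,
# linked only through shared metadata (hypothetical URIs).
tl = Timeline("CSCW 2012 paper")
tl.add(Resource("Abstract draft", "https://docs.example.com/d/1",
                "document", {"abstract"}))
tl.add(Resource("Abstract deadline", "calendar://example/42",
                "milestone", {"abstract"}))
```

Querying `tl.by_tag("abstract")` returns both records, even though their underlying data live in different external services.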
SHARING TIMELINES
Instead of organizing users into groups, we propose the concept of sharing resources in associated timelines. If two or more researchers are ready to start collaborating, they share their timelines with each other. By marking their own resources as public or private, users can precisely define which resources they want to share. The process of sharing timelines between two users is illustrated in the figure: gray lines mark resources to be shared; private resources are painted black.
[Figure: Two users' timelines (User A and User B), each holding tasks and documents; shared resources are visible to the other user, private resources are not.]
FIRST UI PROTOTYPE
We have designed a first software prototype for Timeliner using the concept of resources associated with a specific writing project and arranged around the project timeline. At the top, the current status of the target document is displayed. The researcher can create tasks, assign them to coworkers, and bind shared resources to them. Tasks, milestones, and resources can be tagged and annotated. All activities are tracked in a diary, which also provides an interface for participants' chat.
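The public/private sharing model above can be sketched in a few lines; this is a minimal illustration under assumed names (none of these identifiers come from the Timeliner prototype):

```python
from dataclasses import dataclass

@dataclass
class TimelineItem:
    """A resource on one researcher's timeline."""
    name: str
    public: bool  # True: visible to collaborators who receive the timeline

def visible_to_partner(items: list) -> list:
    """Return the item names a collaboration partner sees after two
    researchers share their timelines: public items only, with no
    per-group access lists needed."""
    return [item.name for item in items if item.public]

# Example: Alice shares her timeline with Bob. Only resources she
# marked public appear on his side.
alice_timeline = [
    TimelineItem("Related-work notes", public=True),
    TimelineItem("Unpolished draft v0", public=False),
]
```

Here `visible_to_partner(alice_timeline)` yields only the public resource, mirroring the figure's distinction between shared (gray) and private (black) items.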
Centre for Educational Technology, Tallinn University, Estonia
Narva mnt. 25, T-511, 10120 Tallinn, Estonia
Phone: (+372) 6409 355
E-mail: ktoming@tlu.ee