MedPath Designer: A Process-based Modeling Language for Designing Care Pathways (Obeo)
MedPath is a domain-specific language created by Intmed, a Brazilian health tech company, to model and design care pathways using Model-Based Systems Engineering (MBSE) and the Sirius modeling tool. MedPath allows experts to collaboratively design pathways that are then interpreted and executed through MedPath's dashboard and system. The pathways can be rapidly updated and shared to guide doctors and improve patient care. Intmed has used MedPath to model over 200 care protocols for various medical specialties and conditions, including COVID-19. Evaluations found Sirius to be a good fit for quickly constructing and evolving their DSL and integrating pathways with legacy systems through interpretation.
Mahesh Joshi has studied computer science and engineering, obtaining degrees from universities in India and the US. He is currently pursuing a Master's in Language Technologies at Carnegie Mellon University. His research focuses on natural language processing techniques such as word sense disambiguation and abbreviation expansion, especially in medical texts. He has published papers at conferences and released several related software tools.
What do Practitioners Expect from the Meta-modeling Tools? A Survey (Obeo)
Modeling languages are defined by meta-models, which are specified using meta-modeling tools that in turn produce editors for creating models conforming to those meta-models. Although many different meta-modeling tools are available today, it is not yet clear what practitioners expect from them or what challenges practitioners face. We therefore designed and conducted a survey, answered by 103 practitioners from 24 different countries. The participants represent different profiles of the population, varying in industry, problem domain, job position, and years of experience. Our survey investigates three research questions, focusing on the usage frequency of existing meta-modeling tools, practitioners' expectations of those tools, and the challenges practitioners encounter. The questionnaire covers the notation, semantics, editor services, model transformation, validation, testing, and composability requirements for meta-modeling tools.
The survey results lead to many interesting findings on the practical use of meta-modeling tools from different viewpoints, and reveal important challenges for each type of requirement. We believe the results will be useful to anyone considering developing their own domain-specific modeling languages (DSMLs), by identifying the most-used meta-modeling tools for different domains. Tool vendors can also use the results to learn practitioners' expectations of meta-modeling tools and the challenges encountered.
Assoc. Prof. Dr. Mert Ozkaya, Yeditepe University
The document discusses research on integrating semantic web technologies into intelligent web applications. It presents an approach using document-level semantic annotations. A logical architecture and platform called H-DOSE is introduced that uses ontologies and web services. Case studies applying the approach to content management, e-learning and personalization systems are described. The research demonstrates the feasibility of semantically enhancing existing web applications.
Integrated research data management in the Structural Sciences (Manjula Patel)
A presentation given by Manjula Patel (UKOLN, University of Bath) at the I2S2 workshop "Scaling Up to Integrated Research Data Management", IDCC 2010, 6th December 2010, Chicago.
http://www.ukoln.ac.uk/projects/I2S2/events/IDCC-2010-ScalingUp-Wksp/
Studying Software Engineering Patterns for Designing Machine Learning Systems (Hironori Washizaki)
Hironori Washizaki, Hiromu Uchida, Foutse Khomh and Yann-Gaël Guéhéneuc, “Studying Software Engineering Patterns for Designing Machine Learning Systems,” The 10th International Workshop on Empirical Software Engineering in Practice (IWESEP 2019), Tokyo, Japan, on December 13-14, 2019.
The School of Computing at Blekinge Institute of Technology:
- Provides education and research in areas such as computer science, software engineering, and communication systems.
- Is the largest unit at the university with about 130 faculty and staff conducting research in fields like game development, software engineering, and intelligent transport systems.
- Receives over 10 million Euros annually with about half dedicated to education and half to research activities across its research laboratories.
The document contains 14 algorithms that solve different mathematical and logical problems, such as determining the largest of three values, calculating the area and volume of a cylinder, sorting numbers from smallest to largest, and computing a final grade by averaging partial results. Each algorithm presents its steps sequentially, using control structures such as if/then and while.
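A few of the algorithms described above (largest of three values, cylinder area and volume, final grade by averaging) can be sketched in Python; the function names and signatures here are illustrative assumptions, not taken from the document:

```python
import math

def largest_of_three(a, b, c):
    """Return the largest of three values (one of the document's algorithms)."""
    return max(a, b, c)

def cylinder_area_volume(radius, height):
    """Surface area and volume of a cylinder of the given radius and height."""
    area = 2 * math.pi * radius * (radius + height)  # two caps plus lateral surface
    volume = math.pi * radius ** 2 * height
    return area, volume

def final_grade(partials):
    """Final grade as the average of the partial results."""
    return sum(partials) / len(partials)
```

Each function mirrors the sequential step-by-step style the document describes, with the branching and looping delegated to Python built-ins.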
The ABC Vision Diaspora Edition August 2012 (ABC Bank Kenya)
The document discusses challenges Kenyans in the diaspora face when investing in businesses in Kenya from abroad. It provides an example of Joseph Kariuki who struggled setting up a business in South Sudan while living there. He had to wait until returning home to Kenya to invest. The document also discusses how ABC Bank is helping diaspora Kenyans overcome these challenges by providing banking services and investment opportunities to enable them to more easily invest in Kenya from abroad, such as opportunities in the dairy sector, building materials, and the property sector.
Personal branding involves developing an authentic depiction of your skills, values, and strengths to present a consistent brand across social media and traditional methods. Building an effective personal brand requires self-examination to understand how others see you and how you want to be seen. It also involves mastering basic social media skills like having a professional photo and bio, sharing accomplishments and interests, and engaging with connections on platforms like Facebook. Maintaining relevance, seizing opportunities, and staying passionate helps strengthen your personal brand over time.
This document describes different procedures for gathering and analyzing information about a community, including colloquia, observatories, community assessments, expert consultations, and open debates. Each procedure involves steps such as forming a team, identifying stakeholders, collecting data, analysis, and preparing reports and recommendations. The overall goal is to involve the community and experts in better understanding a situation and developing proposals for the future.
The combination of the documentary and ancillary texts is effective due to strong stylistic linkages between the products.
The double page article in Radio Times magazine develops similar stylistic devices to the documentary, using matching imagery, quotes, and repetition of keywords.
The radio trailer also maintains connections with the documentary through use of the same music genre, voiceover tone, and voxpop extracts, while promoting the airing details.
By distributing the article in Radio Times and trailer on BBC Radio 5 Live, the ancillary texts are able to reach audiences that align with the documentary's target demographic of ages 20-55, increasing awareness and viewership of the main product.
This document is about conflict resolution in the workplace. It explains that conflicts are inevitable and can be functional or dysfunctional, depending on how they are handled. It describes the types of conflicts and their common sources. It recognizes that listening well is fundamental to resolving conflicts, and presents five styles for doing so: forcing, avoiding, yielding, compromising, and collaborating. It recommends using the collaborative style whenever possible, since it allows all parties to win.
SWAD is a web system for managing courses, students, and teachers at universities, offering features such as course configuration, communication between users, assignment submission, and student assessment. It is currently used at the University of Granada in Spain and the National University of Asunción in Paraguay, with thousands of courses, students, and teachers.
System dynamics studies complex systems by representing them with mathematical models. It was created by Jay Forrester to model industries, cities, and the world. It makes it possible to investigate complex, realistic phenomena through feedback. It has didactic potential for education, allowing students to study theories and complex phenomena and to test their own ideas.
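The feedback idea at the heart of system dynamics can be illustrated with a minimal stock-and-flow sketch in Python; the population model and its rate names are my own illustrative assumptions, not taken from the document:

```python
def simulate_population(p0, birth_rate, death_rate, steps, dt=1.0):
    """Minimal stock-and-flow model: a population stock with a birth inflow
    and a death outflow, both of which feed back on the stock itself."""
    population = p0
    history = [population]
    for _ in range(steps):
        inflow = birth_rate * population    # feedback: more people, more births
        outflow = death_rate * population   # feedback: more people, more deaths
        population += (inflow - outflow) * dt
        history.append(population)
    return history
```

Because both flows depend on the stock, the model exhibits the compounding feedback behavior (here, exponential growth when births exceed deaths) that students can probe by changing the rates and re-running.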
A proposal for the inclusion of accessibility criteria in the publishing work... (adaptabit)
The document proposes including accessibility criteria in the publishing workflow of images in biomedical academic articles. It discusses how visual content is important but often inaccessible. It then outlines a behavior change wheel model to intervene at different points in the submission process, such as educating authors, improving tools, and introducing validation steps. Checklists are provided to help make figures accessible by including detailed descriptions and explanations for labels, colors, adjustments, scales, and more. The overall goal is to ensure images are born digitally accessible.
This document discusses profiling linked open data. It outlines the research background, plan, and preliminary results of profiling linked open data. The research aims to automatically generate new statistics and knowledge patterns to provide dataset summaries and inspect data quality. Preliminary results include profiling Italian public administration websites for compliance with open data policies and automatically classifying over 1,000 linked data sets into 8 topics with over 80% accuracy. Future work involves enriching the framework with additional statistics and applying it to unstructured microdata.
This document summarizes research on automatically classifying and extracting non-functional requirements (NFRs) from text files using supervised machine learning. The researchers created a dataset of NFR keywords by analyzing NFR catalogs identified through a systematic mapping study, categorizing the keywords into security, performance, and usability. They then tested a supervised learning approach on an existing dataset containing 625 software specifications, achieving accuracy rates between 85% and 98% for classifying NFRs into the three categories. The research thus provides a way to generate labeled datasets for training machine learning models to automatically classify NFRs mentioned in text documents.
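A much-simplified sketch of keyword-based NFR categorization, in the spirit of the keyword dataset described above; the keyword lists and function name are hypothetical placeholders, not the researchers' actual dataset or pipeline:

```python
# Hypothetical keyword lists standing in for the catalog-derived dataset.
NFR_KEYWORDS = {
    "security": {"encrypt", "authenticate", "authorization", "password"},
    "performance": {"latency", "throughput", "response", "load"},
    "usability": {"intuitive", "learnability", "accessibility", "interface"},
}

def classify_nfr(sentence):
    """Score a requirement sentence against each category's keyword set
    and return the best-matching category, or None if nothing matches."""
    tokens = set(sentence.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in NFR_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

In the actual study such keyword hits would serve as labels or features for a supervised learner rather than as the classifier itself.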
1. The document summarizes findings from a UK program that funded 29 pilot projects exploring open educational resources (OERs).
2. The projects used diverse technologies to manage and share OERs, including eLearning platforms, repositories, and web 2.0 applications.
3. While many standards and formats were used, the choices often reflected the standards embedded in the systems selected by each project.
This presentation gives details on technologies and approaches towards exploiting Linked Data by building LD applications. In particular, it gives an overview of popular existing applications and introduces the main technologies that support implementation and development. Furthermore, it illustrates how data exposed through common Web APIs can be integrated with Linked Data in order to create mashups.
This document provides an open elective list for the VIII semester of B.Tech programs for the 2021-22 academic year at the Dr. A.P.J. Abdul Kalam Technical University in Uttar Pradesh, India. It includes 10 courses for Open Elective-III and 9 courses for Open Elective-IV, covering topics such as cloud computing, biomedical signal processing, entrepreneurship, and data warehousing. The document also provides detailed syllabi for 5 of the courses, describing the topics and proposed lectures for each unit.
Revolutionizing Laboratory Instrument Data for the Pharmaceutical Industry:... (OSTHUS)
The Allotrope Foundation is a consortium of major pharmaceutical companies and a partner network whose goal is to address challenges in the pharmaceutical industry by providing a set of public, non-proprietary standards for using and integrating analytical laboratory data. Current challenges in data management within the pharmaceutical industry often center on inconsistent or incomplete data and metadata and on proprietary data formats. Because of this lack of standardization, several operations (e.g. integration of instruments and applications, transfer of methods or results, archiving for regulatory purposes) require unnecessary effort. Furthermore, higher-level aggregations of data, such as regulatory filings derived from multiple sources of laboratory data, are costly to create. These unnecessary costs affect operations within a company's laboratories, between partnering companies, and between a company and contract research organizations (CROs). Finally, the accelerating transition of laboratories from hybrid (paper + electronic) to purely electronic data streams, coupled with ever-increasing regulatory scrutiny of electronic data management practices, further requires a comprehensive solution. This talk discusses how the Allotrope Foundation is providing a new framework for data standards through collaboration among numerous stakeholders.
I shall provide a summary of JISC work in the area of ‘Big Data’. My primary focus will be on how to manage the huge amount of research data produced in UK Universities. I shall cover the history of JISC interventions to improve research data management and look at next steps. I shall touch on some other areas of work like ‘Digging into Data’ and web archiving which also deal with ‘big data’.
Automatic Classification of Springer Nature Proceedings with Smart Topic Miner (Francesco Osborne)
The document summarizes research on automatically classifying Springer Nature proceedings using the Smart Topic Miner (STM). STM extracts topics from publications, maps them to a computer science ontology, selects relevant topics using a greedy algorithm, and infers tags. It was evaluated with 8 Springer Nature editors, who found that STM accurately classified 75-90% of proceedings and improved their work. However, STM is currently limited to computer science, and occasional noisy results were found in books with few chapters. Future work aims to extend STM to characterize topic evolution over time and to directly support author tagging.
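The greedy topic-selection step can be illustrated with a generic greedy set-cover sketch: repeatedly pick the topic that covers the most not-yet-covered papers. This is an assumption about the general technique, not STM's actual algorithm, and all names are illustrative:

```python
from collections import defaultdict

def select_topics_greedy(papers_topics, budget):
    """Greedily choose up to `budget` topics, each time taking the topic
    that covers the largest number of still-uncovered papers."""
    topic_to_papers = defaultdict(set)
    for paper, topics in papers_topics.items():
        for topic in topics:
            topic_to_papers[topic].add(paper)
    covered, chosen = set(), []
    for _ in range(budget):
        # Marginal gain of a topic = papers it covers that are not yet covered.
        best = max(topic_to_papers,
                   key=lambda t: len(topic_to_papers[t] - covered),
                   default=None)
        if best is None or not (topic_to_papers[best] - covered):
            break  # nothing left to cover
        chosen.append(best)
        covered |= topic_to_papers[best]
    return chosen
```

A greedy cover like this yields a compact topic list for a proceedings volume while guaranteeing each added topic contributes new coverage.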
This document describes an ISO 15926 reference data engineering methodology developed by TechInvestLab. It uses the ISO 24744 standard to describe work products; the reference data lifecycle stages of identification, characterization, mapping, transfer, and verification; processes; roles; and tools. The methodology focuses on the tasks of a reference data engineer, including developing and evaluating data for inclusion in reference data libraries. It was developed informally but provides a checklist for reference data projects. A lesson learned is that it could be improved by rewriting it in the OMG Essence methodology language, to separate the abstract kernel from concrete practices.
One Standard to rule them all?: Descriptive Choices for Open Education (R. John Robertson)
One Standard to rule them all?: Descriptive Choices for Open Education, OCWC 2010, Hanoi, May 5-7, 2010
R. John Robertson (1), Lorna Campbell (1), Phil Barker (2), Li Yuan (3), and Sheila MacNeill (1). (1) Centre for Academic Practice and Learning Enhancement, University of Strathclyde; (2) Institute for Computer Based Learning, Heriot-Watt University; (3) Institute for Cybernetic Education, University of Bolton.
The webinar will be based on the LODE-BD Recommendations (Linked Open Data (LOD)-enabled bibliographical data), which aim to provide bibliographic data providers of open repositories with a set of recommendations supporting the selection of appropriate encoding strategies for producing meaningful LOD-enabled bibliographical data.
El documento contiene 14 algoritmos que resuelven diferentes problemas matemáticos y lógicos, como determinar el mayor de tres valores, calcular el área y volumen de un cilindro, ordenar números de menor a mayor, y calcular una calificación final promediando resultados parciales. Cada algoritmo presenta los pasos a seguir de manera secuencial y utilizando estructuras de control como si/entonces y mientras.
The ABC Vision Diaspora Edition August 2012ABC Bank Kenya
The document discusses challenges Kenyans in the diaspora face when investing in businesses in Kenya from abroad. It provides an example of Joseph Kariuki who struggled setting up a business in South Sudan while living there. He had to wait until returning home to Kenya to invest. The document also discusses how ABC Bank is helping diaspora Kenyans overcome these challenges by providing banking services and investment opportunities to enable them to more easily invest in Kenya from abroad, such as opportunities in the dairy sector, building materials, and the property sector.
Personal branding involves developing an authentic depiction of your skills, values, and strengths to present a consistent brand across social media and traditional methods. Building an effective personal brand requires self-examination to understand how others see you and how you want to be seen. It also involves mastering basic social media skills like having a professional photo and bio, sharing accomplishments and interests, and engaging with connections on platforms like Facebook. Maintaining relevance, seizing opportunities, and staying passionate helps strengthen your personal brand over time.
Este documento describe diferentes procedimientos para la recopilación y análisis de información de una comunidad, incluyendo coloquios, observatorios, evaluaciones comunitarias, consultas a expertos, y debates abiertos. Cada procedimiento implica pasos como la formación de un equipo, identificación de interlocutores, recolección de datos, análisis y elaboración de informes y recomendaciones. El objetivo general es involucrar a la comunidad y expertos para comprender mejor una realidad y desarrollar propuestas de futuro.
The combination of the documentary and ancillary texts is effective due to strong stylistic linkages between the products.
The double page article in Radio Times magazine develops similar stylistic devices to the documentary, using matching imagery, quotes, and repetition of keywords.
The radio trailer also maintains connections with the documentary through use of the same music genre, voiceover tone, and voxpop extracts, while promoting the airing details.
By distributing the article in Radio Times and trailer on BBC Radio 5 Live, the ancillary texts are able to reach audiences that align with the documentary's target demographic of ages 20-55, increasing awareness and viewership of the main product.
Este documento trata sobre la resolución de conflictos en el entorno laboral. Explica que los conflictos son inevitables y pueden ser funcionales o disfuncionales, dependiendo de cómo se manejen. Describe los tipos de conflictos y sus fuentes comunes. Reconoce que escuchar bien es fundamental para resolver conflictos, y presenta cinco estilos para hacerlo: forzar, evadir, ceder, llegar a acuerdos y colaborar. Recomienda usar el estilo de colaboración cuando sea posible, pues permite que todas las partes ganen.
SWAD es un sistema web que permite gestionar asignaturas, estudiantes y profesores en las universidades, ofreciendo funcionalidades como la configuración de asignaturas, comunicación entre usuarios, envío de trabajos, y evaluación de estudiantes. Actualmente se usa en la Universidad de Granada en España y la Universidad Nacional de Asunción en Paraguay, con miles de asignaturas, estudiantes y profesores.
El documento contiene 14 algoritmos que resuelven diferentes problemas matemáticos y lógicos, como determinar el mayor de tres valores, calcular el área y volumen de un cilindro, ordenar números de menor a mayor, y calcular una calificación final promediando resultados parciales. Cada algoritmo presenta los pasos a seguir de manera secuencial y utilizando estructuras de control como si/entonces y mientras.
La dinámica de sistemas estudia sistemas complejos representándolos mediante modelos matemáticos. Fue creada por Jay Forrester para modelar industrias, ciudades y el mundo. Permite investigar fenómenos complejos y cercanos a la realidad mediante realimentación. Tiene potencial didáctico para la educación al permitir estudiar teorías, fenómenos complejos y que los estudiantes prueben sus propias ideas.
A proposal for the inclusion of accessibility criteria in the publishing work...adaptabit
The document proposes including accessibility criteria in the publishing workflow of images in biomedical academic articles. It discusses how visual content is important but often inaccessible. It then outlines a behavior change wheel model to intervene at different points in the submission process, such as educating authors, improving tools, and introducing validation steps. Checklists are provided to help make figures accessible by including detailed descriptions and explanations for labels, colors, adjustments, scales, and more. The overall goal is to ensure images are born digitally accessible.
This document discusses profiling linked open data. It outlines the research background, plan, and preliminary results of profiling linked open data. The research aims to automatically generate new statistics and knowledge patterns to provide dataset summaries and inspect data quality. Preliminary results include profiling Italian public administration websites for compliance with open data policies and automatically classifying over 1,000 linked data sets into 8 topics with over 80% accuracy. Future work involves enriching the framework with additional statistics and applying it to unstructured microdata.
This document summarizes research on automatically classifying and extracting non-functional requirements (NFRs) from text files using supervised machine learning. The researchers created a dataset of NFR keywords by analyzing NFR catalogs identified through a systematic mapping study. The keywords were categorized into security, performance, and usability. They then tested a supervised learning approach on a existing dataset containing 625 software specifications. The approach achieved accuracy rates between 85-98% for classifying NFRs into the security, performance, and usability categories. The research thus provides a way to generate labeled datasets for training machine learning models to automatically classify NFRs mentioned in text documents.
1. The document summarizes findings from a UK program that funded 29 pilot projects exploring open educational resources (OERs).
2. The projects used diverse technologies to manage and share OERs, including eLearning platforms, repositories, and web 2.0 applications.
3. While many standards and formats were used, the choices often reflected the standards embedded in the systems selected by each project.
1. The document summarizes the findings of a study on the choices made by 29 UK pilot projects in describing and sharing open educational resources.
2. The projects used a diverse range of existing technologies, including eLearning platforms, repositories, and web 2.0 applications, to manage and share resources.
3. The descriptive standards and packaging formats used were often embedded within the chosen systems rather than selected independently.
This presentation gives details on technologies and approaches towards exploiting Linked Data by building LD applications. In particular, it gives an overview of popular existing applications and introduces the main technologies that support implementation and development. Furthermore, it illustrates how data exposed through common Web APIs can be integrated with Linked Data in order to create mashups.
This document provides an open elective list for the VIII semester of B.Tech programs for the 2021-22 academic year at the Dr. A.P.J. Abdul Kalam Technical University in Uttar Pradesh, India. It includes 10 courses for Open Elective-III and 9 courses for Open Elective-IV, covering topics such as cloud computing, biomedical signal processing, entrepreneurship, and data warehousing. The document also provides detailed syllabi for 5 of the courses, describing the topics and proposed lectures for each unit.
Revolutionizing Laboratory Instrument Data for the Pharmaceutical Industry:...OSTHUS
The Allotrope Foundation is a consortium of major pharmaceutical companies and a partner network whose goal is to address challenges in the pharmaceutical industry by providing a set of public, non-proprietary standards for using and integrating analytical laboratory data. Current challenges in data management within the pharmaceutical industry often center around inconsistent or incomplete data and metadata and proprietary data formats. Because of a lack of standardization, several operations (e.g. integration of instruments/applications, transfer of methods or results, archiving for regulatory purposes) require unnecessary efforts. Further, higher level aggregation of data, e.g. regulatory filings, that are derived from multiple sources of laboratory data are costly to create. These unnecessary costs impact operations within a company’s laboratories, between partnering companies, and between a company and contract research organizations (CROs). Finally, the accelerating transition of laboratories from hybrid (paper + electronic) to purely electronic data streams, coupled with an ever-increasing regulatory scrutiny of electronic data management practices, further require a comprehensive solution. This talk will discuss how The Allotrope Foundation is providing a new framework for data standards through collaboration between numerous stakeholders.
I shall provide a summary of JISC work in the area of ‘Big Data’. My primary focus will be on how to manage the huge amount of research data produced in UK Universities. I shall cover the history of JISC interventions to improve research data management and look at next steps. I shall touch on some other areas of work like ‘Digging into Data’ and web archiving which also deal with ‘big data’.
Automatic Classification of Springer Nature Proceedings with Smart Topic Miner (Francesco Osborne)
The document summarizes research on automatically classifying Springer Nature proceedings with the Smart Topic Miner (STM). STM extracts topics from publications, maps them to a computer science ontology, selects the most relevant topics using a greedy algorithm, and infers tags. It was evaluated with 8 Springer Nature editors, who found that STM accurately classified 75–90% of proceedings and improved their workflow. However, STM is currently limited to computer science, and occasional noisy results were found in books with few chapters. Future work aims to extend STM to characterize topic evolution over time and to directly support author tagging.
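The abstract mentions a greedy topic-selection step but gives no detail, so the sketch below only illustrates the general idea of such a step: repeatedly pick the topic that covers the most not-yet-covered chapters. All names and data are hypothetical, and this is not STM's actual algorithm.

```python
def select_topics(chapter_topics):
    """Greedily pick topics until every chapter is covered.

    chapter_topics: dict mapping chapter id -> set of candidate topics.
    """
    # Invert the mapping: topic -> set of chapters it occurs in.
    coverage = {}
    for chapter, topics in chapter_topics.items():
        for topic in topics:
            coverage.setdefault(topic, set()).add(chapter)

    uncovered = set(chapter_topics)
    selected = []
    while uncovered:
        # Greedy choice: the topic covering the most still-uncovered chapters.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining chapters have no candidate topic at all
        selected.append(best)
        uncovered -= coverage[best]
    return selected

chapters = {
    "ch1": {"semantic web"},
    "ch2": {"machine learning"},
    "ch3": {"semantic web", "machine learning"},
}
print(select_topics(chapters))  # → ['semantic web', 'machine learning']
```

A real classifier would weight topics by ontology relations and frequency rather than plain coverage; the greedy structure, however, is the part the abstract names.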
This document describes an ISO 15926 reference data engineering methodology developed by TechInvestLab. It uses the ISO 24744 standard to describe work products, the reference data lifecycle stages of identification, characterization, mapping, transfer and verification, processes, roles and tools. The methodology focuses on the tasks of a reference data engineer, including developing and evaluating data for inclusion in reference data libraries. It was developed informally but provides a checklist for reference data projects. However, lessons learned are that it could be improved by rewriting using the OMG Essence methodology language to separate the abstract kernel from concrete practices.
One Standard to rule them all?: Descriptive Choices for Open Education (R. John Robertson)
One Standard to rule them all?: Descriptive Choices for Open Education, OCWC2010 Hanoi, May 5-7 2010
R. John Robertson (1), Lorna Campbell (1), Phil Barker (2), Li Yuan (3), and Sheila MacNeill (1). (1) Centre for Academic Practice and Learning Enhancement, University of Strathclyde; (2) Institute for Computer Based Learning, Heriot-Watt University; (3) Institute for Cybernetic Education, University of Bolton
The webinar will be based on the LODE-BD Recommendations, which aim to provide bibliographic data providers of open repositories with a set of recommendations supporting the selection of appropriate encoding strategies for producing meaningful Linked Open Data (LOD)-enabled bibliographical data (LODE-BD).
The technology of object-oriented databases was introduced to system developers in the late 1980s. Object DBMSs add database functionality to object programming languages. A major benefit of this approach is the unification of application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and their code bases are easier to maintain.
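As an illustration of that unification (not an example from the text), Python's standard-library `shelve` module gives the flavor of object-style persistence: nested application data is stored and retrieved directly, with no mapping to relational tables.

```python
# Illustrative sketch: persisting a nested application structure directly,
# with no object-to-relational mapping layer.
import os
import shelve
import tempfile

record = {
    "name": "Ada",
    "visits": [{"date": "2013-02-27", "note": "checkup"}],  # nested, stored as-is
}

path = os.path.join(tempfile.mkdtemp(), "records")
with shelve.open(path) as db:
    db["p1"] = record                # write the structure directly

with shelve.open(path) as db:
    loaded = db["p1"]                # read it back intact

print(loaded["visits"][0]["note"])   # → checkup
```

A real object DBMS adds querying, transactions and concurrency on top of this idea; the sketch only shows the seamless data model the paragraph describes.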
This document provides an introduction to a course on data warehousing and data mining. It outlines the reference books and additional materials for the course. It describes a semester project where students will develop an application for an organization and collect necessary data. It discusses the approach of the course, which will develop an understanding of relational database concepts, data warehouses, online analytical processing, data mining, and their applications. It emphasizes that knowledge is power and intelligence in today's data-driven economy.
This document presents a case study on applying a data analytics approach to conducting a systematic literature review on master data management. It outlines the steps taken, including defining review questions, searching multiple databases and sources, combining and preprocessing the data, and performing descriptive and text analyses. The analyses addressed questions about trends in publications over time, primary databases, publication types, and frequent keywords. This provided insights into the progress and topics within the master data management research domain. The presented structured approach aims to improve the replicability of systematic literature reviews.
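The descriptive analyses mentioned in the abstract (publication trends over time, frequent keywords) reduce to simple counting once the records from the different databases are combined. A toy Python sketch with invented records:

```python
# Toy sketch of the descriptive step of a systematic review: publication
# counts per year and keyword frequencies over a hypothetical combined corpus.
from collections import Counter

papers = [
    {"year": 2015, "keywords": ["master data", "governance"]},
    {"year": 2016, "keywords": ["master data", "data quality"]},
    {"year": 2016, "keywords": ["data quality", "MDM"]},
]

per_year = Counter(p["year"] for p in papers)
keyword_freq = Counter(k for p in papers for k in p["keywords"])

print(sorted(per_year.items()))     # → [(2015, 1), (2016, 2)]
print(keyword_freq.most_common(2))  # → [('master data', 2), ('data quality', 2)]
```

The preprocessing the abstract describes (deduplication, normalizing keywords) would happen before this counting step.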
This document outlines the course structure and content for a Data Science course. The 5 modules cover: 1) introductions to data science concepts and statistical inference using R; 2) exploratory data analysis and machine learning algorithms; 3) feature generation/selection and additional machine learning algorithms; 4) recommendation systems and dimensionality reduction; 5) mining social network graphs and data visualization. The course aims to teach students to define data science fundamentals, demonstrate the data science process, explain necessary machine learning algorithms, illustrate data analysis techniques, and follow ethics in data visualization.
The MIDESS Project explored sharing digital content like images between university repositories. It tested standards like OAI-PMH and METS for exchanging metadata and objects. While these standards allow some interoperability, repositories implemented them differently, preventing full sharing. The project highlighted ongoing issues around information architecture, repository functionality for multimedia, and integrating repositories into broader systems.
Feb 26 NISO Training Thursday
Crafting a Scientific Data Management Plan
About the Training
Addressing a data management plan for the first time can be an intimidating exercise. Join NISO for a hands-on workshop that will guide you through the elements of creating a data management plan, including gathering necessary information, identifying needed resources, and navigating potential pitfalls. Participants explore the important components of a data management plan and critique excerpts of sample plans provided by the instructors.
This session is meant to be a guided, step-by-step session that will follow the February 18 NISO Virtual Conference, Scientific Data Management: Caring for Your Institution and its Intellectual Wealth.
About the Instructors
Kiyomi D. Deards, MSLIS, Assistant Professor, University of Nebraska-Lincoln Libraries
Jennifer Thoegersen, Data Curation Librarian, University of Nebraska-Lincoln Libraries
Towards a harmonization of metadata application profiles for agricultural lea... (Gauri Salokhe)
Metadata interoperability allows the exchange and preservation of crucial learning and teaching information, as well as its future reuse among a large number of different systems and repositories. This paper introduces work on metadata interoperability that has taken place in the context of the Agricultural Learning Repositories Task Force (AgLR-TF), an international community of stakeholders involved in agricultural learning repositories. It focuses in particular on a review and assessment of the metadata application profiles currently implemented in agricultural learning repositories. The results of this study can be useful to those designing, implementing and operating agricultural learning repositories, thus facilitating metadata interoperability in this application field.
Application of recently developed FAIR metrics to the ELIXIR Core Data Resources (Pistoia Alliance)
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
Similar to Csun pse-006-presentation-2013 v2.1
Today's education must be flexible enough to include all student profiles, among them students with special educational needs. The Campus Multimodal project takes advantage of the Moodle platform to offer students two reading-support tools, ClaroRead and ReadSpeaker. A pilot was run with about a thousand students over one semester, and the tools' usefulness was assessed through surveys and usage logs. The results confirm both the existence of widespread reading and writing difficulties among university students and the usefulness of speech technology in minimizing them.
[full text at: http://www.cidui.org/revistacidui/index.php/cidui/article/view/644]
Women and technology. Participation in the round table «Dones i documentació: obrint nous camins» («Women and documentation: opening new paths») at the Centenary of the Facultat de Biblioteconomia i Documentació [http://bd.ub.edu/noticies/centenari-bid-taula-rodona-sobre-dones-i-documentacio]
Multimodal documents: possibilities and assessment (adaptabit)
This document discusses the possibilities and assessment of multimodal digital documents. It explains that multimodal documents incorporating text, images, sound and video can foster better attention, memorization and comprehension in readers. It also analyzes criteria such as the usability, accessibility and cost of different tools for creating multimodal documents.
This paper presents a research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold. First, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible ones (Math-ML). Second, to propose a data model and a mechanism to publish evaluation results, making them available to the educational community who may use them as a quality measurement for selecting learning material.
Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, making a correct conversion difficult, if not impossible; and formulae (whether images or text) have been written without taking into account standards of math writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
In Science, Technology, Engineering, and Mathematics (STEM) academic literature, mathematical formulae, diagrams and other two-dimensional structures are a critical information source (Sojka et al.). Even for many sighted students, “math education poses a serious roadblock in entering technical disciplines” (Karshmer et al.). The outputs of mathematics literature can create even greater barriers for visually impaired students (Smeureanu et al.) and students with learning disabilities (Lewis et al.), due to the technical notations they include, the large number of visual resources used (such as diagrams, graphs and charts) and the inclusion of visual concepts, such as spatial concepts. Currently, the inclusion of visual information in academic research papers is widespread practice. Efforts have been made to convert academic literature in mathematics to accessible formats after publication (Sojka et al.). However, most research literature is not currently supported by a publishing process that produces accessible outputs of scientific documents (Gardner et al.).
A solution for making the mathematics in electronic documents accessible is to provide alternative textual descriptions to critical graphical information (Webb), as the textual information can be rendered in speech by screen readers or in Braille. This solution “corresponds to the standard accessibility approach” (Cooper et al.) proposed by the Web Content Accessibility Guidelines WCAG 1.0 and WCAG 2.0 (W3C).
Several proposals exist on making standard statistical graphics accessible. Demir (Demir et al.) and Ferres (Ferres et al.) have applied statistical and natural language processing techniques for the generation of spoken descriptions of statistical graphics. Doush (Doush et al.) has proposed a multi-modal approach for accessing charts in Excel for visually impaired users.
The National Center for Accessible Media (NCAM) has created guidelines on how to textually describe diagrams and other standard graphics within Digital Talking Books, with the aim of making them more accessible for students and scientists who are blind or visually impaired.
In this paper we review publishing practices, policies and submission guidelines concerning the accessibility of visual content in a sample of ten academic journals in mathematics. We checked the application of the accessibility policy in one article from each journal. In particular, we focused our analysis on the alternative textual means of accessing the underlying semantics of figures. As noted by Cooper (Cooper et al.), the design of appropriate textual descriptions of images is a challenging task and “this becomes more challenging as the complexity of the mathematics increases”. In order to address this issue, Splendiani (Splendiani et al.(a)) suggests that “the function of the text alternative can be accomplished by any textual description”.
How to communicate accessibly on the Internet (adaptabit)
Talk at Dixit, “Com comunicar-nos de forma accessible per Internet” (“How to communicate accessibly on the Internet”). Access the original PowerPoint [http://bd.ub.edu/grups/adaptabit/dixit/DixitRiberaMaig2013.pptx] and the web version [http://bd.ub.edu/grups/adaptabit/dixit/html/index.htm]
The simplified electron and muon model, Oscillating Spacetime: The Foundation... (RitikBhardwaj56)
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
This presentation covers the basics of PCOS, its pathology and treatment, as well as the Ayurvedic correlation of PCOS and the Ayurvedic line of treatment described in the classics.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Physiology and chemistry of the skin and pigmentation, hair, scalp, lips and nails; cleansing creams, lotions, face powders, face packs, lipsticks, bath products, soaps and baby products.
Preparation and standardization of the following: tonics, bleaches, dentifrices (mouthwashes and toothpastes), and cosmetics for nails.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
Hindi varnamala (alphabet) PPT presentation: Hindi vowels (svar) and consonants (vyanjan), the alphabet with drawings, and varnamala practice for children, by Dr. Mulla Adam Ali; https://www.drmullaadamali.com
Introduction to AI for Nonprofits with Tapp Network (TechSoup)
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
1. IEEE LOM is not an option: lessons to learn
Miquel Centelles, Mireia Ribera, Marina Salse
Grup Adaptabit: working group on digital accessibility for teaching, research and teaching innovation
Department of Librarianship and Information Science
University of Barcelona
2. Summary
Rationale
Objectives
Methodology
Data analysis
Discussion
Further steps
2/27/2013, 28th Annual International Technology and Persons with Disabilities Conference
3. Rationale: context of the research
A project on creating accessible teaching resources within the University of Barcelona.
We want to recommend to teachers a metadata model covering accessibility aspects of resources and processes.
4. Rationale: why our (first) interest in IEEE LOM
Adoption of SCORM: many Learning Management Systems support SCORM, and SCORM uses IEEE LOM metadata.
Adoption in LMSs: LOM is a major building block of eLearning systems (such as LMSs) and is widely used in such systems, notably in Europe.
Adoption of profiles: LOM has been widely profiled for particular domains.
5. Rationale: why our (first) interest in IEEE LOM [figure slide]
6. Rationale: known IEEE LOM drawbacks
Its abstract model is not aligned with basic standards for semantic interoperability, such as the Resource Description Framework (RDF).
The adaptation of the standard to the web of data is suffering from delays in two key processes:
the mapping of IEEE LOM to the Dublin Core (DCMI) abstract model
the elaboration and publication of an official RDF vocabulary
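To make the first drawback concrete: with no official RDF vocabulary, anyone wanting LOM records on the web of data has to improvise a mapping. The sketch below expresses a LOM-like fragment as RDF triples reusing Dublin Core terms; the property choices and the resource URI are illustrative, not part of any official mapping.

```python
# Hypothetical, non-normative sketch: a LOM-like fragment as RDF triples
# reusing Dublin Core terms, printed in an N-Triples-like form.
DC = "http://purl.org/dc/terms/"
resource = "http://example.org/learning-object/42"  # illustrative URI

triples = [
    (resource, DC + "title", "Introduction to Metadata"),
    (resource, DC + "language", "en"),
    (resource, DC + "audience", "higher education"),
]

for s, p, o in triples:
    print(f'<{s}> <{p}> "{o}" .')
```

Each implementer making such choices independently is precisely what an official vocabulary would avoid.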
7. Objectives of the research
1. Identification of application profiles based on IEEE LOM.
2. Descriptive review of IEEE LOM application profiles (APs).
3. Descriptive review of AP implementation in Learning Resource Repositories (LRRs).
8. Methodology: on application profiles
Application profile gathering:
Literature search through key actors, European projects, and bibliographic databases.
Complemented with questionnaires and interviews with AP holders.
9. Methodology: on application profiles
Application profile selection:
It must be based (mostly) on IEEE LOM, of course.
It must be currently active.
No restrictions on:
• the practice community
• the scope of the application profile (topics…)
• the country of origin
10. Methodology: on application profiles
Key data from the findings:
32 different application profiles
3 have a worldwide scope
11 are focused on Europe
4 are focused on the USA
the remaining 17 are focused on other, specific countries
11. Methodology: on LRRs
One LRR is selected for each IEEE LOM application profile:
It must offer openly accessible resources.
It may belong to a single institution or to several.
If there are several candidate LRRs, selection is based on:
• university-level over lower-level studies
• broad content over specialized content
12. Methodology: on LRRs
10 samples of metadata records were obtained from each LRR:
Search period: 29th August to 8th October 2012.
Search strategy (in descending order of priority):
• 1st criterion: first learning resources published during 2012
• 2nd criterion: learning resources of the type “Lecture”
• 3rd criterion: keyword “education”
In the end, we obtained search results for 24 APs.
13. Methodology: 10 samples of records from each LRR [example records slide]
14. Data analysis: 2 different purposes
APs versus the base standard IEEE LOM
Metadata records versus APs
15. Data analysis: different evidence levels
Not all APs provide the same quantity and quality of evidence for the analysis.
All of them: documentation about the schema and data values.
Other evidence, depending on each AP:
• Full evidence level: records in an XML binding.
• Medium evidence level: records in some human-readable format (not XML).
• Low evidence level: no metadata records (8 APs), mostly because the LRR was out of order during the test period.
16. Data analysis: APs vs. base standard
Number of simple data elements in the AP versus the 58 in the base standard
Number of mandatory simple data elements in the AP
Number of non-allowed modifications within the AP
17. Data analysis: APs vs. base standard [chart]
18. Data analysis: APs vs. base standard
Non-allowed modifications:
1. Altering the relative location of an existing data element (e.g. moving a parent element to a child one)
2. Creating a new element that mimics the semantic intent of an existing element
3. Changing the meaning of an existing element
4. Changing the name of an element
5. Extending a schema other than at a specified extension point
19. Data analysis: APs vs. base standard
Non-allowed modifications (cont.):
6. Extending the cardinality of an element
7. Adding new items to a controlled vocabulary list
8. Modifying the value space and data type of data elements from the base schema
9. Defining data types or value spaces for aggregate data elements in the base schema
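Two of the restrictions above, extending an element's cardinality (rule 6) and adding items to a controlled vocabulary (rule 7), are easy to check mechanically if the schemas are available in machine-readable form. A minimal Python sketch, with invented element names and a toy schema representation:

```python
# Toy conformance check for rules 6 and 7; element names and the schema
# representation are illustrative, not the real LOM binding.
base = {
    "general.title":    {"max": 1,  "vocab": None},
    "educational.role": {"max": 10, "vocab": {"author", "publisher"}},
}
profile = {
    "general.title":    {"max": 2,  "vocab": None},  # extends cardinality
    "educational.role": {"max": 10, "vocab": {"author", "publisher", "tutor"}},  # adds a term
}

def conformance_issues(base, profile):
    """Flag cardinality extensions and controlled-vocabulary additions."""
    issues = []
    for name, spec in profile.items():
        ref = base.get(name)
        if ref is None:
            continue  # elements added by the AP fall under other rules
        if spec["max"] > ref["max"]:
            issues.append(f"{name}: cardinality extended ({ref['max']} -> {spec['max']})")
        if ref["vocab"] is not None:
            extra = spec["vocab"] - ref["vocab"]
            if extra:
                issues.append(f"{name}: vocabulary extended with {', '.join(sorted(extra))}")
    return issues

for issue in conformance_issues(base, profile):
    print(issue)
```

The study's finding that rule 7 is the least respected restriction suggests exactly this kind of automated check would flag most of the surveyed APs.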
20. Number of simple data elements included in the AP with respect to the base schema [bar chart, y-axis 0–100%]
21. Number of simple data elements included in the AP with respect to the base schema
14 APs include fewer than the 58 elements of the base standard (44%)
12 APs include all 58 data elements of the base standard (37%)
6 APs include more than the 58 elements of the base standard (19%)
22. Number of mandatory simple data elements stated by the AP [bar chart, y-axis 0–35]
23. Number of mandatory simple data elements stated by the AP
25 APs state mandatory (simple) data elements (78%):
At the top, the Biosci Education Network (BEN) states 30 mandatory elements.
At the bottom, LOM-FR states 3 mandatory elements.
7 APs do not state any mandatory (simple) data elements (22%).
24. Is the AP conformant with the base schema? [chart]
25. Is the AP conformant with the base schema?
4 APs are fully conformant with the base schema (12%)
25 APs are not fully conformant with the base schema (78%)
The least respected restriction: adding new items to a controlled vocabulary list (18 of 25)
The most respected restriction: defining data types or value spaces for aggregate data elements in the base schema (2 of 25)
In 3 cases, solid conclusions cannot be drawn from the available sources (9%)
26. Data analysis: metadata records versus APs
Our questions are:
Do the metadata records respect the mandatory conditions on simple data elements in the AP?
Do the metadata records in the LRR apply the controlled vocabularies established by the AP?
Do the metadata records in the LRR respect the requirements on value spaces and data types in the AP?
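These three questions can be asked mechanically of each harvested record. A toy Python sketch with simplified, un-namespaced element names (a real LOM XML binding is namespaced and far richer); the mandatory list and vocabulary stand in for a hypothetical AP:

```python
# Toy record check: are mandatory elements present, and is the controlled
# vocabulary respected? Element names are simplified for illustration.
import xml.etree.ElementTree as ET

record = ET.fromstring(
    "<lom>"
    "<title>Intro to Metadata</title>"
    "<role>tutor</role>"
    "</lom>"
)

MANDATORY = ["title", "language"]      # mandatory elements per the (hypothetical) AP
ROLE_VOCAB = {"author", "publisher"}   # controlled vocabulary from the AP

problems = []
for name in MANDATORY:
    if record.find(name) is None:
        problems.append(f"missing mandatory element: {name}")

role = record.find("role")
if role is not None and role.text not in ROLE_VOCAB:
    problems.append(f"role '{role.text}' is not in the controlled vocabulary")

for p in problems:
    print(p)
```

A data-type check (the third question) would compare element text against the value spaces declared by the AP in the same way; note that such checks are only possible for the APs that publish records in an XML binding, which ties back to the evidence levels on slide 15.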
27. Does the LRR follow the mandatory conditions? [chart]
28. Does the LRR follow the mandatory conditions?
• Not applicable for 5 LRRs (21%)
• Mandatory conditions are followed in 5 LRRs (21%)
• Mandatory conditions are not followed in 11 LRRs (46%)
• In 3 cases, solid conclusions cannot be drawn from the available sources (12%)
29. Does the LRR apply the specified controlled vocabularies? [chart]
30. Does the LRR apply the specified controlled vocabularies?
• Specified controlled vocabularies are applied in 19 LRRs (79%)
• Specified controlled vocabularies are not applied in 1 LRR (4%)
• In 4 cases, solid conclusions cannot be drawn from the available sources (17%)
31. Does the LRR apply data type and value restrictions? [chart]
32. Does the LRR apply data type and value restrictions?
• Data type and value restrictions are applied in 11 LRRs (46%)
• Data type and value restrictions are not applied in 1 LRR (4%)
• In 12 cases, solid conclusions cannot be drawn from the available sources (50%)
33. Discussion: disappointing results
Most APs are not conformant with the IEEE LOM base standard.
Implementations of APs in LRRs often do not even follow the application profile's own conditions.
Offering interchange formats (XML, JSON… not to mention RDF) for metadata records is not a widespread practice.
34. Discussion: main conformance black holes
Extension of controlled vocabularies with new terms created ad hoc.
Modifications to the value spaces and data types of data elements.
Definition of data types or value spaces for aggregate data elements.
35. Discussion: lessons learned
Keep it simple: metadata is an “overhead” task which should be minimal and as automated as possible.
Force conformance through XML schemas, semantic web vocabularies or other applied constraints.
Set a standard for the display of records and for their reusability.
36. Further steps: new standards in competition
Based on those drawbacks, we have decided to move on to new alternatives for the metadata base schema:
the ISO/IEC 19788 Metadata for Learning Resources (MLR) standard
…or…
the Learning Resource Metadata Initiative (LRMI), which uses microdata and is led by significant companies.
37. Further steps: we'll keep monitoring
Nevertheless, we will keep completing the map of IEEE LOM-based APs, in order to:
Monitor the evolution and adaptation of IEEE LOM APs to the semantic web.
Monitor the solutions that LRRs adopt to manage the challenges mentioned.
Monitor the evolution of the IEEE LOM standard in relation to the rise of “new” learning resource metadata standards.
38. Further steps: collaborations?
Please send us information about IEEE LOM APs and the LRRs using them, by answering the questionnaire available for this purpose at:
Adaptabit, http://bd.ub.edu/adaptabit/
In return, we will publish all the data about IEEE LOM APs as open data.
39. Thanks for your attention!
Questions, opinions, suggestions…
miquel.centelles@ub.edu