My keynote at the Ontologies Come of Age workshop at the International Semantic Web Conference in Bonn, Germany. This workshop was named after a paper I wrote about a decade ago.
Exploiting classical bibliometrics of CSCW: classification, evaluation, limit... (António Correia)
Existing mechanisms are inefficient for a single human to classify and analyze large volumes of publications. The document proposes augmenting human intelligence with computational mechanisms and a crowd-enabled model to address the limitations of manually analyzing scientific literature at scale. It involves classifying publications through human intelligence tasks such as annotation, classification, and evaluation to help map research trends and identify gaps. The goal is to validate human-computational cooperation for efficiently analyzing bibliographic data with many humans interacting at a massive scale.
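A minimal sketch of the aggregation step such a crowd-enabled model needs: redundant human intelligence task (HIT) labels are merged into one classification per publication by majority vote, with an agreement score for quality control. All names and data here are illustrative, not from the paper.

```python
from collections import Counter

def aggregate_crowd_labels(annotations):
    """Majority-vote aggregation of redundant crowd labels.

    annotations: maps a publication id to the labels assigned by
    independent crowd workers. Returns (label, agreement), where
    agreement is the fraction of workers choosing the winning label.
    """
    results = {}
    for pub_id, labels in annotations.items():
        label, votes = Counter(labels).most_common(1)[0]
        results[pub_id] = (label, votes / len(labels))
    return results

# Illustrative HIT output: three workers classified each publication.
crowd = {
    "paper-17": ["CSCW", "CSCW", "HCI"],
    "paper-42": ["groupware", "groupware", "groupware"],
}
for pub, (label, agreement) in aggregate_crowd_labels(crowd).items():
    print(pub, label, f"agreement={agreement:.2f}")
```

Low-agreement publications would be routed back for further annotation, which is where the computational side of the cooperation pays off.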
The document is an abstract for a PhD student conference that describes Thomas Daniel Ullmann's proposed PhD thesis. The thesis aims to develop a framework for mash-up learning environments that allows users to reflect on resources to make informed decisions. Mash-ups combine separate data sources and APIs to create new applications. The goal is to provide reflection functionality in a mash-up environment by including manually and automatically added indicators to foster reflection on resources and topics.
Student Achievement Review (initially presented during the Inauguration Function of the Ohio Center of Excellence in Knowledge-Enabled Computing at Wright State (Kno.e.sis)); updated since.
Center overview: http://bit.ly/coe-k
Invitation: http://bit.ly/COE-invite
The Ohio Center of Excellence in Knowledge-enabled Computing at Wright State University:
1) Shares the second position globally in World Wide Web research impact and hosts the largest academic research group in the US working on the semantic web, social media, big data, and health applications.
2) Has exceptional student success, with internships and jobs at top companies; its roughly 100 researchers, including 15 highly cited faculty and 45 PhD students, are largely supported by $2M+ in annual research funding.
3) Provides world-class resources for multidisciplinary projects across information technology and domains like biomedicine, with collaboration from industry partners like Google and IBM.
This document discusses analyzing alumni networks through network analysis techniques. It notes that alumni outcomes will likely factor into future university rankings. Traditional rankings do not capture the network properties of alumni connections. The document then examines characteristics of intra-university, inter-university, and university-company networks. Comparing networks with and without including the university node shows the university can impact alumni connectivity. Developing strong alumni networks can significantly benefit both alumni and their universities.
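To make the with/without comparison concrete, here is a small networkx sketch on an invented graph: an alumna connected only through the institution stays reachable while the university node is present and becomes isolated once it is removed.

```python
import networkx as nx

# Toy alumni graph: some alumni know each other directly,
# others are linked only through the university itself.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"),            # direct alumni tie
    ("university", "alice"),
    ("university", "bob"),
    ("university", "carol"),     # carol is reachable only via the university
])

def connectivity_report(graph, name):
    print(f"{name}: connected={nx.is_connected(graph)}, "
          f"components={nx.number_connected_components(graph)}")

connectivity_report(G, "with university node")

H = G.copy()
H.remove_node("university")
connectivity_report(H, "without university node")
# carol becomes isolated when the university node is dropped,
# illustrating how the institution can carry alumni connectivity.
```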
Slides, e-humanities presentation, 27 Jan 2011 (Nick Jankowski)
The document discusses plans for a project to create enhanced publications from four academic books. It defines enhanced publications as those supplemented with additional materials like data, images, and links. The project aims to develop web platforms bringing together content from the books, make relationships between concepts explicit, and create instructional materials about enhanced publications. Challenges include preserving dynamic digital objects and convincing publishers of the value of enhanced formats.
Letter to CORE workshop participants, Jankowski, 11 Sept 2010 (Nick Jankowski)
The document is an email from Nick Jankowski informing participants about an upcoming workshop on scientific publishing. It provides details about the workshop, including its date, time, and location. It requests that participants review sample materials on publishing procedures and policies of various academic journals, including New Media & Society, and submit an example of a published or presented paper. The email aims to prepare participants for discussion on scholarly publishing practices at the upcoming workshop.
Laura Scholarly Communication DFID 10 September Final2 (CRUOnline)
The document discusses opportunities for scholarly communication through information and communication technologies (ICT). It notes that academics want their work to be read, visible, cited, and to make an impact. However, dissemination of knowledge has traditionally been out of academics' control. New trends include electronic dissemination through various models like repositories and journals. Content is increasingly multimedia. Scholars will rely on integrated electronic environments containing a variety of scholarly works. Universities are taking more control over dissemination. Academics are encouraged to develop online presences and exploit opportunities to share knowledge.
Rethinking concepts in virtual worlds and education research (Eduserv)
A presentation by Diane Carr and Martin Oliver at the Where next for Virtual Worlds in UK higher and further education event held in London in January 2010.
The document provides an overview of funding and active projects at Kno.e.sis as of December 2015. Key details include total extramural funds exceeding $8.3 million with the majority obtained that year from competitive NSF and NIH sources. Active projects focus on areas such as context-aware harassment detection on social media, monitoring drug trends on social media, disaster management using social and physical sensing, and modeling social behavior for healthcare utilization in depression. The summary highlights student and faculty involvement and accomplishments across multiple funded projects.
Representation Patterns for Cultural Heritage Resources (Richard Urban)
Full-sized available via http://diginole.lib.fsu.edu/cgi/preview.cgi?article=1015&context=slis_faculty_publications
The universe of available cultural heritage metadata schemas grows more complex every day. Existing schemas are optimized for use in the library, archive, or museum domains and to fit the needs of shared services and applications. Emerging Linked Data approaches introduce additional challenges for metadata designers and creators responsible for implementing these standards. In other domains, design patterns are used to clearly articulate problems, their contexts, and available solutions. This poster introduces preliminary research to identify such patterns in cultural heritage metadata standards using content analysis and a participatory design methodology.
Slides, Ljubljana presentation, enhanced publications, Jankowski, 10 June 2011 (Nick Jankowski)
The document discusses a project to enhance scholarly publications in the humanities and social sciences through hybrid forms of publication. The project aims to 1) enhance four published books with supplementary online materials like links, blogs, and visualizations, and 2) develop a database and series of topic-related enhanced publications. Key challenges addressed are preserving dynamic online content, interrelating publication components, and gaining acceptance from publishers and authors.
This document summarizes a presentation on using bibliographic couplings to analyze the structure of a large public university. The presentation analyzed co-citation networks between university scholars and the papers they cited to identify overlapping intellectual communities across disciplines. It identified key individuals who bridge communities and act as knowledge conduits. The analysis found that the network of engaged scholars bears little resemblance to the existing academic units, suggesting opportunities to restructure the university to better support interdisciplinary work.
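Bibliographic coupling itself is a simple set computation: two papers are coupled in proportion to the references they share, and the weighted pair graph is what community detection then runs on. A sketch on toy reference lists (not the study's data):

```python
from itertools import combinations

# Toy data: each paper mapped to the set of references it cites.
references = {
    "paperA": {"ref1", "ref2", "ref3"},
    "paperB": {"ref2", "ref3", "ref4"},
    "paperC": {"ref9"},
}

def bibliographic_coupling(refs):
    """Return coupling strength |R(a) & R(b)| for every pair of papers."""
    strengths = {}
    for a, b in combinations(sorted(refs), 2):
        shared = refs[a] & refs[b]
        if shared:
            strengths[(a, b)] = len(shared)
    return strengths

print(bibliographic_coupling(references))
# {('paperA', 'paperB'): 2} -- A and B share two references; edges
# like this, weighted by strength, form the network whose communities
# the presentation compares against existing academic units.
```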
Context-Aware Harassment Detection on Social Media
is an interdisciplinary project among the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), the Department of Psychology, and the Center for Urban and Public Affairs (CUPA) at Wright State University. The aim of this project is to develop comprehensive and reliable context-aware techniques (using machine learning, text mining, natural language processing, and social network analysis) to glean information about the people involved and their interconnected network of relationships, and to determine and evaluate potential harassment and harassers. An interdisciplinary team of computer scientists, social scientists, urban and public affairs professionals, and educators, together with the participation of college and high school students in the research, will ensure the wide impact of scientific research in support of safe social interactions.
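As a rough illustration of the text-classification core of such a pipeline (the context-aware and network components go well beyond a few lines), here is a hedged scikit-learn sketch on invented examples; it is not the project's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training messages; a real system would add context features
# (sender/receiver relationship, posting history, network position).
messages = [
    "you are worthless and everyone hates you",
    "nobody wants you here, leave",
    "great job on the presentation today",
    "want to grab lunch tomorrow?",
]
labels = [1, 1, 0, 0]  # 1 = potentially harassing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(messages, labels)

print(model.predict_proba(["everyone hates you, just leave"])[:, 1])
# The score alone is not a determination of harassment; in a
# context-aware design it is one signal combined with relationship
# and network evidence before any human review.
```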
Keynote presentation for the International Semantic Web Conference in Athens, Greece, on November 9, 2023. The talk addresses the generative AI explosion and its potential impacts on the Semantic Web and Knowledge Graph communities, which may in fact spark a research renaissance.
Abstract:
We are living in an age of rapidly advancing technology. History may view this period as one in which generative artificial intelligence reshaped the landscape and narrative of many technology-based fields of research and application. Times of disruption often present both opportunities and challenges. We will discuss some areas that may be ripe for consideration in the field of Semantic Web research and semantically enabled applications. Semantic Web research has historically focused on representation and reasoning and on enabling interoperability of data and vocabularies. At the core are ontologies, along with ontology-enabled (or ontology-compatible) knowledge stores such as knowledge graphs. Ontologies are often manually constructed using a process that (1) identifies existing best-practice ontologies (and vocabularies) and (2) generates a plan for how to leverage these ontologies by aligning and augmenting them as needed to address requirements. While semi-automated techniques may help, a significant portion of the work is typically best done by humans with domain and ontology expertise. This is an opportune time to rethink how the field generates, evolves, maintains, and evaluates ontologies. We consider how hybrid approaches, i.e., those that leverage generative AI components along with more traditional knowledge representation and reasoning approaches, can create improved processes. The effort to build a robust ontology that meets a use case can be large. Ontologies are not static, however; they need to evolve along with knowledge evolution and expanded usage. There is potential for hybrid approaches to help identify gaps in ontologies and/or refine content. Further, ontologies need to be documented with term definitions and their provenance. Opportunities exist to consider semi-automated techniques for some types of documentation, provenance, and decision-rationale capture for annotating ontologies. The area of human-AI collaboration for population and verification presents a wide range of areas for research collaboration and impact. Ontologies need to be populated with class and relationship content. Knowledge graphs and other knowledge stores need to be populated with instance data in order to be used for question answering and reasoning. Population of large knowledge graphs can be time consuming. Generative AI holds the promise of creating candidate knowledge graphs that are compatible with the ontology schema. The knowledge graph should contain provenance information identifying how the content was populated and its source, and its correctness and currency should be checked. A human-AI assistant approach is presented.
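One way to read the proposed human-AI assistant loop: generative AI drafts candidate triples, a schema check filters obvious violations, and the remainder is queued for human verification with provenance attached. A minimal pure-Python sketch under those assumptions (the schema, triples, and field names are invented for illustration):

```python
# Ontology schema fragment: property -> (expected subject class, object class)
SCHEMA = {
    "treats":     ("Drug", "Disease"),
    "hasSymptom": ("Disease", "Symptom"),
}
ENTITY_CLASS = {"aspirin": "Drug", "headache": "Disease", "nausea": "Symptom"}

# Candidate triples as a generative model might emit them,
# each carrying provenance for later audit.
candidates = [
    {"s": "aspirin", "p": "treats", "o": "headache", "source": "llm-run-7"},
    {"s": "headache", "p": "hasSymptom", "o": "nausea", "source": "llm-run-7"},
    {"s": "nausea", "p": "treats", "o": "aspirin", "source": "llm-run-7"},
]

def schema_check(triple):
    """Accept a triple only if subject/object classes match the schema."""
    expected = SCHEMA.get(triple["p"])
    if expected is None:
        return False
    subj_cls, obj_cls = expected
    return (ENTITY_CLASS.get(triple["s"]) == subj_cls
            and ENTITY_CLASS.get(triple["o"]) == obj_cls)

for_review = [t for t in candidates if schema_check(t)]   # humans verify these
rejected = [t for t in candidates if not schema_check(t)]
print("queued for human verification:", for_review)
print("rejected by schema check:", rejected)
```

The third triple fails because its subject is not a Drug; automated checks like this narrow, but do not replace, the human verification step the abstract calls for.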
The Stanford Workshop focused on creating plans to expedite a shift in how knowledge and information resources are managed and discovered through linked data. The goal was to identify capabilities and design new tools, processes, and systems that move beyond current metadata practices to link related resources and provide improved navigation and discovery through open feedback. A number of organizations from around the world participated in the workshop to discuss these issues.
Here are the key points about using content-based filtering techniques:
- Content-based filtering relies on analyzing the content or description of items to recommend items similar to what the user has liked in the past. It looks for patterns and regularities in item attributes/descriptions to distinguish highly rated items.
- The item content/descriptions are analyzed automatically by extracting information from sources like web pages, or entered manually from product databases.
- It focuses on objective attributes about items that can be extracted algorithmically, like text analysis of documents.
- However, personal preferences and what makes an item appealing are often subjective qualities not easily extracted algorithmically, like writing style or taste.
- So while content-based filtering can recommend items whose attributes resemble those of items a user has liked, it may miss the subjective qualities that make an item appealing.
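A minimal content-based filtering sketch along the lines above, using TF-IDF over item descriptions and cosine similarity to a profile built from liked items (the catalog and titles are invented):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy item catalog: the "content" is each item's text description.
items = {
    "book1": "space exploration hard science fiction novel",
    "book2": "romantic comedy set in a small coastal town",
    "book3": "astronaut survival story on mars, science fiction",
    "book4": "cozy mystery in an english village",
}
liked = ["book1"]  # items the user rated highly in the past

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(items.values())
ids = list(items)

# User profile = mean TF-IDF vector of the liked items' descriptions.
profile = np.asarray(matrix[[ids.index(i) for i in liked]].mean(axis=0))

scores = cosine_similarity(profile, matrix).ravel()
ranked = sorted(zip(ids, scores), key=lambda p: -p[1])
print([(i, round(s, 2)) for i, s in ranked if i not in liked])
# book3 ranks first: its description overlaps book1's attributes.
# Subjective qualities (style, taste) stay invisible to this model.
```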
Stuart Weibel discusses missing pieces in establishing globally interoperable metadata systems. Experts identify the most important missing pieces as tools to support metadata reuse across domains and widespread adoption of common metadata approaches. Conceptual issues include a lack of research on decentralized data management and choosing vocabularies. Organizational impediments include economic models that inhibit data sharing and the difficulty of getting organizations to adopt common standards due to inertia and lack of clear paths forward. Addressing these issues would improve prospects for interoperable metadata systems.
The document provides an overview of the speaker's background and research interests in digital ecosystems modelling. It discusses how socio-technical systems can be modeled using a systems approach rather than just mathematics. It also touches on ideas like knowledge ecosystems, complex systems emergence, and the importance of shared vocabularies. The goal is to engage the audience and potentially find opportunities for collaboration.
This document provides information about a proposed workshop on knowledge acquisition from distributed, autonomous, and semantically heterogeneous data sources to be held at the 2005 IEEE International Conference on Data Mining. The workshop aims to bring together researchers from areas like machine learning, data mining, knowledge representation, databases, and selected application domains to address challenges in performing knowledge discovery from multiple distributed data sources that may have semantic differences. Topics of interest include learning from distributed data, making data sources self-describing through ontologies, learning ontologies and mappings between schemas, and handling semantic heterogeneity. The workshop will include invited talks and presentations of contributed papers, and targets researchers, students, and practitioners interested in knowledge acquisition from distributed data.
The document describes Earthster Core Ontology (ECO), a domain ontology for Life Cycle Assessment (LCA). ECO aims to provide a vocabulary for core LCA concepts to publish LCA data on the web in a semantically interoperable way. It defines concepts like Process, Quantified Effect, and Elementary Flow. ECO is still under development with feedback from the LCA community. Its goals are to extend existing LCA data structures, link data sources, and allow for flexible extension over time as the field evolves.
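To show what "a vocabulary for core LCA concepts" looks like concretely, here is a hedged rdflib sketch declaring a few ECO-style classes and a property; the namespace URI and exact terms are placeholders, not the published ontology.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

ECO = Namespace("http://example.org/eco#")  # placeholder, not the real ECO URI
g = Graph()
g.bind("eco", ECO)

# ECO-style core concepts, declared as OWL classes.
for cls in ("Process", "QuantifiedEffect", "ElementaryFlow"):
    g.add((ECO[cls], RDF.type, OWL.Class))

# A process has quantified effects (e.g., emissions per functional unit).
g.add((ECO.hasEffect, RDF.type, OWL.ObjectProperty))
g.add((ECO.hasEffect, RDFS.domain, ECO.Process))
g.add((ECO.hasEffect, RDFS.range, ECO.QuantifiedEffect))

print(g.serialize(format="turtle"))
```

Publishing such declarations as linked data is what makes the interoperability and flexible-extension goals in the summary possible.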
Software Ecosystem Evolution. It's complex! (Tom Mens)
This document discusses software ecosystems and their complex evolution. It defines a software ecosystem as interdependent software projects that evolve together. Research analyzes ecosystems using ideas from biology and complex systems across disciplines. Ecosystems are huge networks of thousands of interdependent parts and contributors that are difficult to manage and grow superlinearly over time. They exhibit properties of complex networks like following power laws and being small worlds. Simple models can explain ecosystem growth patterns.
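The "simple models" point can be demonstrated in a few lines: a Barabási-Albert preferential-attachment graph reproduces the heavy-tailed degree distribution reported for package ecosystems. A networkx sketch (parameters chosen arbitrarily):

```python
from collections import Counter
import networkx as nx

# Preferential attachment: each new project links to m existing ones,
# preferring already well-connected projects -- a simple growth model.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degrees = Counter(dict(G.degree()).values())
for d in sorted(degrees)[:8]:
    print(f"degree {d}: {degrees[d]} nodes")
# Counts fall off roughly as a power law: many barely-connected
# projects and a few massively depended-upon hubs -- the small-world,
# heavy-tailed shape observed in real software ecosystems.
```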
This document discusses disruptive changes in libraries due to new technologies and user behaviors. It notes the shift to electronic resources like e-journals, e-books, and born-digital content, which require new library processes. Open science and open educational resources are also discussed. Surveys found that younger users value quick search results over assistance from librarians. The future of libraries is uncertain as their roles evolve in research and learning. Options for shared library services are presented to help libraries adapt to these changes in a sustainable way.
Trust and Accountability: experiences from the FAIRDOM Commons Initiative (Carole Goble)
Presented at Digital Life 2018, Bergen, March 2018. In the Trust and Accountability session.
In recent years we have seen a change in expectations for the management and availability of all the outcomes of research (models, data, SOPs, software, etc.) and for greater transparency and reproducibility in the method of research. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for stewardship [1] have proved to be an effective rallying cry for community groups and for policy makers.
The FAIRDOM Initiative (FAIR Data Models Operations, http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards and sensitivity to asset sharing and credit anxiety. Our aim is a FAIR Research Commons that blends together the doing of research with the communication of research. The Platform has been installed by over 30 labs/projects and our public, centrally hosted FAIRDOMHub [2] supports the outcomes of 90+ projects. We are proud to support projects in Norway’s Digital Life programme.
2018 is our 10th anniversary. Over the past decade we learned a lot about trust between researchers, between researchers and platform developers and curators and between both these groups and funders. We have experienced the Tragedy of the Commons but also seen shifts in attitudes.
In this talk we will use our experiences in FAIRDOM to explore the political, economic, social, and technical practicalities of Trust.
[1] Wilkinson et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3. doi:10.1038/sdata.2016.18
[2] Wolstencroft et al. (2016). FAIRDOMHub: a repository and collaboration environment for sharing systems biology research. Nucleic Acids Research 45(D1): D404-D407. doi:10.1093/nar/gkw1032
Presentation at EMTACL10, http://www.ntnu.no/ub/emtacl/
Guus van den Brekel
Central medical library, UMCG
Virtual Research Networks: towards Research 2.0
In the next few years, the further development of social, educational and research networks – with their extensive collaborative possibilities – will dictate how users search for, manage and exchange information. The network – evolved by technology – is changing users' behaviour, and that will affect the future of information services. Many envision a possible leading role for libraries in collaboration and community-building services.
Users are not only heavily using new tools, but are also creating and shaping their own preferred tools.
Today's students are incorporating Web 2.0 skills in daily life, in their social and learning environments.
Tomorrow's research staff will expect to be able to use their preferred tools and resources within their work environment.
Today's and tomorrow's libraries should support students and staff in the learning and research process by integrating library services and resources into their environments.
The Personalization Challenge: Context and Culture Metadata for Mobile Learning
In this keynote, we addressed m-learning adaptation based on a standardized context description. The context description contains cultural, organizational, and individual factors as a base for adaptable and adaptive systems. This is used in the OpenScout project, which concerns the adaptation of learning resources in an international context.
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero" of the Semantic Web, since there are few compelling real-world applications. Heterogeneity, the volume of data, and the lack of standards are problems that could be addressed through nature-inspired methods. The paper presents the most important aspects of the Semantic Web as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the way these techniques can be used to deal with Semantic Web problems.
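As one concrete reading of the genetic-algorithm idea applied to a Semantic Web problem, here is a toy evolutionary search that aligns two small vocabularies, scoring candidate mappings by character overlap. Everything here (vocabularies, fitness, operators) is illustrative, not from the paper.

```python
import random

SOURCE = ["author", "title", "publication_date"]
TARGET = ["creator", "name", "date_issued", "publisher"]

def fitness(mapping):
    """Score an alignment by character overlap of matched term pairs."""
    return sum(len(set(s) & set(t)) for s, t in zip(SOURCE, mapping))

def mutate(mapping):
    child = list(mapping)
    child[random.randrange(len(child))] = random.choice(TARGET)
    return child

random.seed(0)
# Evolutionary loop: selection of the fittest alignments plus mutation.
population = [[random.choice(TARGET) for _ in SOURCE] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(list(zip(SOURCE, best)))
# Converges toward high-overlap pairs like ("author", "creator").
# Real ontology alignment would use far richer fitness functions
# (structure, instances, semantics), but the search skeleton is the same.
```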
SPARC Repositories conference in Baltimore - Nov 2010 (Jisc)
1. The document discusses the reasons for and vision of creating a global network of repositories to openly share knowledge and data.
2. Key reasons for a global network include enabling open access to information, supporting science through linked data, and aligning with universities' responsibilities to the public.
3. The ideal vision is to build socio-technical infrastructure similar to what was created in the 1880s to support electricity, in order to manage and share linked, open, and trusted data globally through repository networks.
The document discusses personal information management (PIM) tools and strategies. It describes how PIM has been an issue since information became available and outlines some common PIM tools like email, calendars, computer desktop organization, and websites. It also discusses the implications of increased digital information storage, such as challenges around saving, organizing, and retrieving personal information across multiple tools and locations.
Joe Corneli is a PhD student at the Knowledge Media Institute studying semantic adaptivity and social networking in personal learning environments. His research focuses on developing a unified methodology to map activity patterns in social contexts to better support the learning process. He plans to implement his ideas using tools like Etherpad for analyzing live social interactions, RDF for managing relationship data, and WordNet for clustering and annotating content to help simplify information and connect resources for learners. By the end of his PhD, he hopes to build a "PLE IDE" tool to offer personalized support for learners and developers.
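The WordNet piece of that toolchain can be sketched briefly: NLTK's WordNet interface gives a similarity score between concepts that could seed the clustering and annotation of learner-generated terms. (Requires the wordnet corpus; the terms are illustrative.)

```python
import nltk
nltk.download("wordnet", quiet=True)  # one-time corpus fetch
from nltk.corpus import wordnet as wn

terms = ["teacher", "student", "classroom", "algorithm"]

def similarity(a, b):
    """Path similarity between the first noun senses of two terms."""
    sa, sb = wn.synsets(a, pos=wn.NOUN), wn.synsets(b, pos=wn.NOUN)
    if not sa or not sb:
        return 0.0
    return sa[0].path_similarity(sb[0]) or 0.0

for i, a in enumerate(terms):
    for b in terms[i + 1:]:
        print(f"{a:>10} ~ {b:<10} {similarity(a, b):.2f}")
# Relatively higher-scoring pairs (e.g., teacher ~ student) would land
# in the same cluster when annotating a learner's resources by topic.
```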
Semantic Web: Technologies and Applications for the Real World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for the Real World," Tutorial at the 2007 World Wide Web Conference, Banff, Canada.
The tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
This document provides an overview of ontologies and the semantic web. It defines ontologies as formal specifications of conceptualizations that are shared between people and computers. Ontologies provide a common vocabulary and conceptual structure to facilitate understanding between humans and machines. They allow different systems and communities to work together by providing shared definitions of concepts and relationships. The development of ontologies and the semantic web aims to make web resources more computer-readable and enable machines to better understand and process online information.
A Review on Evolution and Versioning of Ontology Based Information Systems (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
This document provides a review of existing approaches to ontology evolution and versioning. It begins by defining ontologies and discussing why evolution and versioning are needed as ontologies are used in information systems. It then outlines some existing solutions for ontology evolution and version management, noting different languages used to conceptualize ontologies. Challenges of ontology versioning and evolution are discussed. Increased usage of ontologies in different domains is reviewed. Finally, some available tools for ontology change management are mentioned.
Similar to "Ontologies Come of Age" (OCAS Germany, McGuinness) (20)
Keynote presentation for the Mobilizing Computable Biomedical Knowledge Conference 2021, looking in particular at emerging trends in cognitive assistants, personal health knowledge graphs, and meta-descriptions for knowledge resources. Examples are taken from the RPI-IBM project on Health Empowerment by Analysis, Learning, and Semantics and the NIEHS project with RPI, MSSM, and Columbia on the Human Health Exposure Analysis Repository Data Center.
Towards an Environmental Health Sciences Ontology: CHEAR to HHEAR and Beyond (Deborah McGuinness)
The National Institute of Environmental Health Sciences (NIEHS) supported a Children's Health Exposure Analysis Repository (CHEAR) program that needed to integrate data across exposure science and health. We led the data science effort of this program and designed the CHEAR ontology to support data integration and to leverage a wide range of existing ontologies and vocabularies. We are refactoring the ontology to support human health (instead of just child health) and broadening it to support a wide range of environmental health sciences applications.
The document discusses the use of ontologies and taxonomies to enhance findability, accessibility, interoperability, and reuse of data and resources. It provides definitions for taxonomy, ontology, knowledge engineering, and artificial intelligence. It describes how ontologies can specify terminology, concepts, and relationships in a domain to provide a rich description. The document also discusses ontology development processes and gives examples of how ontologies can enable semantic search, data integration, and interpretation across different studies and data sources.
Ontologies For the Modern Age - McGuinness' Keynote at ISWC 2017 (Deborah McGuinness)
Ontologies are seeing a resurgence of interest and usage as big data proliferates, machine learning advances, and integration of data becomes more paramount. The previous models of sometimes labor-intensive, centralized ontology construction and maintenance do not mesh well in today’s interdisciplinary world that is in the midst of a big data, information extraction, and machine learning explosion. In this talk, we will provide some historical perspective on ontologies and their usage, and discuss a model of building and maintaining large collaborative, interdisciplinary ontologies along with the data repositories and data services that they empower. We will give a few examples of heterogeneous semantic data resources made more interconnected and more powerful by ontology-supported infrastructures, discuss a vision for ontology-enabled future research and provide some examples in a large health empowerment joint effort between RPI and IBM Watson Health.
Automating Semantic Metadata Collection in the Field with Mobile Application (Deborah McGuinness)
Presentation at Mobile Deployment of Semantic Technologies Workshop at the International Semantic Web Conference. Abstract: In the past few decades, the field of ecology has grown from a collection of disparate researchers who collected data on their local phenomenon by hand, to large ecosystems-oriented projects partially fueled by automated sensor networks and a diversity of models and experiments. These modern projects rely on sharing and integrating data to answer questions of increasing scale and complexity. Interpreting and sharing the big data sets generated by these projects relies on information about how the data was collected and what the data is about, typically stored as metadata. Metadata ensures that the data can be interpreted and shared accurately and efficiently. Traditional paper-based metadata collection methods are slow, error-prone, and non-standardized, making data sharing difficult and inefficient. Semantic technologies offer opportunities for better data management in ecology, but also may pose a challenging learning curve to already busy researchers. This paper presents a mobile application for recording semantic metadata about sensor network deployments and experimental settings in real time, in the field, and without expecting prior knowledge of semantics from the users. This application enables more efficient and less error-prone in-situ metadata collection, and generates structured and shareable metadata.
This document discusses using linked open data and semantic technologies to support next generation science. It provides background on the increasing availability of open data and opportunities for citizen science contributions. Semantic technologies can help integrate and link diverse scientific data sources. Linked data principles allow disparate datasets to be connected through shared identifiers and relationships. Examples are provided of existing projects that use semantic approaches to enable scientific data discovery, analysis and collaboration across domains like population health, water quality monitoring and climate change. Overall, the document argues that semantic technologies are mature and can help scientists address large, distributed problems by facilitating data integration and knowledge sharing.
This talk introduces Linked Data and the Semantic Web using two examples - a population sciences grid and SemantAqua, a semantically enabled environmental monitoring system. It shows a few tools and the semantic methodology, and opens a discussion of LOD and team science.
The Semantic Travel Concierge - a vision of the potential of semantic technologies for the travel industry. Deborah L. McGuinness, Keynote at the OpenTravel Alliance Advisory Forum, Miami, Fla., April 11, 2012.
The document discusses the evolving landscape of semantic technologies and their applications to scientific domains like eScience. It introduces the Tetherless World Constellation, a research group applying semantic web techniques. Examples are given of projects applying semantics to areas like virtual observatories and provenance capture. The value of semantic technologies is discussed for integration, discovery, and validation of scientific data and models. Modular ontologies and semantically-enabled frameworks are presented as important directions for reuse and collaboration.
Ontologies for the Real World by Deborah L. McGuinness. Invited talk for the 2011 Future Worlds Microsoft Faculty Summit in the Semantic Knowledge for Commodity Computing.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
1. Ontologies Come of Age: The Next Generation. OCAS, October 24, 2011, Bonn, Germany. Deborah L. McGuinness, Tetherless World Senior Constellation Chair, Professor of Computer and Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
3. What is an Ontology? A spectrum of increasing expressiveness: Catalog/ID; Terms/glossary; Thesauri ("narrower term" relation); Informal is-a; Formal is-a; Formal instance; Frames (properties); Value Restrictions; General Logical constraints (Disjointness, Inverse, part-of…). From Ontologies Come of Age (McGuinness, 2001) and the AAAI '99 panel - McGuinness, Welty, Uschold, Gruninger, Lehmann; also the basis of Ontologies Come of Age (McGuinness, 2003).
11. Semantic Web Methodology. Originally developed for VSTO, now in SSIII, SESDI, SESF, OOI… McGuinness, Fox, West, Garcia, Cinquini, Benedict, Middleton. The Virtual Solar-Terrestrial Observatory: A Deployed Semantic Web Application Case Study for Scientific Research. Proc. 19th Conf. on Innovative Applications of Artificial Intelligence (IAAI-07). http://www.vsto.org
18. Ontologies for the Real World. Deborah L. McGuinness, Tetherless World Senior Constellation Chair, Professor of Computer and Cognitive Science, Rensselaer Polytechnic Institute
How to: In the "Regulation" box, check "CA Regulation" and click "Go". Results: We can see that there are more polluted water sites and polluting facilities under the CA Regulation during January 2000.
Many benefits:
- Reduced query formation from 8 to 3 steps and reduced choices at each stage
- Allowed scientists to get data from instruments they never knew of before (e.g., photometers in the example)
- Supported augmentation and validation of data
- Useful and related data provided without having to be an expert to ask for it
- Integration and use (e.g., plotting) based on inference
- Ask and answer questions not possible before
But needed: provenance (SPCDIS, PML), reusability and modularity (SESF).
Deborah McGuinness, Peter Fox, Luca Cinquini, Patrick West, Jose Garcia, James L. Benedict, and Don Middleton. The Virtual Solar-Terrestrial Observatory: A Deployed Semantic Web Application Case Study for Scientific Research. Proceedings of the Nineteenth Conference on Innovative Applications of Artificial Intelligence (IAAI-07), Vancouver, British Columbia, Canada, July 22-26, 2007.
Peter Fox, Deborah L. McGuinness, Luca Cinquini, Patrick West, Jose Garcia, James L. Benedict, and Don Middleton. Ontology-supported Scientific Data Frameworks: The Virtual Solar-Terrestrial Observatory Experience. Computers and Geosciences (Elsevier), Volume 35, Issue 4 (2009).