BEXIS 2 is a data life cycle management platform designed for collaborative research projects. It focuses on active data generated during a project's lifetime, primarily tabular data. BEXIS 2 aims to facilitate data integration and reuse through various features for end users and administrators. It also leverages semantic web techniques like ontologies to help address issues stemming from data heterogeneity and improve data discovery. The talk discusses BEXIS 2's key capabilities and how semantics can support better integration and linking of research data with related publications and analyses.
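As a first concrete illustration of that last point, here is a purely hypothetical Python/rdflib sketch; the namespaces, class names, and the measuresVariable property are invented for the example and are not BEXIS 2's actual metadata model. It shows how annotating dataset variables with ontology terms lets a search find temperature data even when no column is literally named "temperature":

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Everything below is hypothetical -- invented URIs, not BEXIS 2's model.
EX = Namespace("http://example.org/demo/")

g = Graph()
# Tiny ontology fragment: air temperature is a kind of temperature.
g.add((EX.AirTemperature, RDFS.subClassOf, EX.Temperature))

# Two datasets whose variables are annotated with ontology terms.
g.add((EX.plotData, RDF.type, EX.Dataset))
g.add((EX.plotData, EX.measuresVariable, EX.AirTemperature))
g.add((EX.climateData, RDF.type, EX.Dataset))
g.add((EX.climateData, EX.measuresVariable, EX.SoilMoisture))

# Discovery: every dataset measuring some kind of temperature, even
# though no dataset carries the literal string "temperature".
query = """
SELECT ?ds WHERE {
    ?ds a ex:Dataset ;
        ex:measuresVariable ?v .
    ?v rdfs:subClassOf* ex:Temperature .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.ds)  # -> http://example.org/demo/plotData
```

The rdfs:subClassOf* property path is what turns a flat keyword match into an ontology-aware search.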
3. Structure of the talk
• What is research data management about?
• What is BEXIS2?
• Semantic Web Techniques in BEXIS2
13. Intro
BEXIS 2 is:
• designed for collaborative projects
• focused on active data (i.e., data in use during the project's lifetime)
• focused on tabular data, but not limited to it
• focused on data integration and re-use
• generic, scalable, modular, free, and open source
22. Why semantics?
• Heterogeneity on the schema level
• Ambiguous or hard-to-interpret column names
• Heterogeneity on the instance level
• …
• Hampers:
– Discovery
– Integration (see the sketch below)
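To make the integration problem concrete, here is a minimal, purely illustrative Python/pandas sketch. The column names, the shared env:AirTemperature term, and the mapping table are invented for the example and are not BEXIS 2's actual mechanism; the point is only that mapping heterogeneous local column names onto a common ontology term makes the two tables mergeable:

```python
import pandas as pd

# Two project datasets describing the same quantity under different
# column names and units -- schema-level heterogeneity.
site_a = pd.DataFrame({"plot": ["a1", "a2"], "temp": [12.1, 13.4]})  # degrees C
site_b = pd.DataFrame({"plot": ["b1"], "Temperature_degF": [55.2]})  # degrees F

# A hypothetical mapping of local column names to a shared ontology
# term, plus a converter into a canonical unit (degrees C).
ONTOLOGY_MAP = {
    "temp":             ("env:AirTemperature", lambda c: c),
    "Temperature_degF": ("env:AirTemperature", lambda f: (f - 32) * 5 / 9),
}

def harmonize(df: pd.DataFrame) -> pd.DataFrame:
    """Rename mapped columns to their ontology term and convert units."""
    out = df.copy()
    for col, (term, convert) in ONTOLOGY_MAP.items():
        if col in out.columns:
            out[term] = convert(out.pop(col))
    return out

merged = pd.concat([harmonize(site_a), harmonize(site_b)], ignore_index=True)
print(merged)  # one 'env:AirTemperature' column, in degrees C, across both sites
```

Instance-level heterogeneity (here, °C vs. °F) is handled the same way, by attaching a unit conversion to the mapping.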
28. Conclusion
• Life-cycle support for research data management is essential for good science
• Semantic web techniques can support data integration and discovery
• Long-term goals:
– Seamless linking of data and publications (sketched below)
– Seamless integration of data management and analysis, including provenance management
– Automatic hypothesis generation
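As a rough sense of what seamless linking of data and publications could look like at the metadata level, here is a minimal sketch using standard Dublin Core terms in rdflib. The DOIs are placeholders and the modeling choice is an assumption for illustration, not BEXIS 2's published approach:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS

# Placeholder DOIs -- stand-ins for identifiers minted via e.g. DataCite.
dataset = URIRef("https://doi.org/10.0000/example-dataset")
paper = URIRef("https://doi.org/10.0000/example-paper")

g = Graph()
g.bind("dcterms", DCTERMS)
g.add((paper, DCTERMS.references, dataset))      # the paper cites the data
g.add((dataset, DCTERMS.isReferencedBy, paper))  # and the data points back

print(g.serialize(format="turtle"))
```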
29. Thanks
• to DFG for funding
• to the BEXIS2 users for providing feedback and requirements
• to everyone involved in the development of BEXIS and BEXIS2, including Payam Adineh, Masoud Allahyari, Arefeh Bahrami, Javad Chamanara, Florian Gaffron, Jitendra Gaikwad, Roman Gerlach, Thorsten Hindermann, Martin Hohmuth, Nafiseh Navabpour, Jens Nieschulze, Michael Owonibi, Andreas Ostrowski, Eleonora Petzold, David Schöne, Sirko Schindler, Markus Steinberg, Sven Thiel, Franziska Zander, and many others
• to the AquaDiva Infra1 team: Udo Hahn, Erik Fäßler, Bernd Kampe, Alsayed Algergawy, Hamdi Hamed, and Friederike Klan
• to Christian Wirth for the iDiv slides