Bob Stanley, CEO, IO Informatics, explains the utility of RDF as a standard way of defining and redefining data for managing life science information.
Allotrope Foundation, Vanderwall and Little, Bio-IT World 2016 (OSTHUS)
Allotrope Foundation is building a framework (a software toolkit) that embeds a set of federated, public, non-proprietary standards for analytical data in software used throughout the entire analytical chemistry data lifecycle, and that serves as a basis for controlled vocabularies and taxonomies across a variety of pharmaceutical and biotech R&D applications. The framework provides extended capabilities to build business rules and other analytics on top of the standardized vocabularies, giving companies better means to classify and manage their data. Because the foundational class structure is common and highly extensible to new and expanding domains, legacy systems can be maintained more easily, and new technologies, including cloud databases, Big Data analytics, and reasoning engines, can be employed to give researchers unprecedented access to important contextualized data. We will briefly describe some of the data integration and management challenges currently facing the industry, e.g., utilization of legacy data warehouses, creation of new data lakes, integration of existing semantic models, and cloud-scale applications, and show how the Allotrope Framework provides a semantic basis for improved metadata and master data management through modularized semantic models that capture the most pertinent entities, attributes and relationships in laboratory data. We will provide an update on the rapid progress of development and the release of Allotrope Framework 1.0, including the Allotrope Data Format (for data and semantically described metadata), the Allotrope Taxonomies, and the first release of the APIs (application programming interfaces), and describe how Allotrope member companies have begun to integrate these into their internal environments.
We will then discuss some potential extensions of this framework, which could in the future enable state-of-the-art data integration and analytics capabilities for various applications.
Allotrope Foundation & OSTHUS at SmartLab Exchange 2015: Update on the Allotr... (OSTHUS)
During SmartLab Exchange 2015, Allotrope Foundation and OSTHUS presented the latest update on the Allotrope Framework. To learn more, please view the slides below.
Presented by:
Dana Vanderwall, BMS Research IT & Automation; Patrick Chin, Merck Research Laboratories IT; Wolfgang Colsman, OSTHUS
Application of recently developed FAIR metrics to the ELIXIR Core Data Resources (Pistoia Alliance)
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
Should We Expect a Bang or a Whimper? Will Linked Data Revolutionize Scholar Authoring and Workflow Tools?
Jeff Baer, Senior Director of Product Management, Research Development Services, ProQuest
Case-based System for Innovation Management... (Nit Celesc)
Paper published at the 2nd Workshop on Applications of Knowledge-Based Technologies in Business (AKTB 2010), held in conjunction with the 13th International Conference on Business Information Systems (BIS 2010) in Berlin, Germany, 3-5 May 2010.
Knowledge graphs, Ilaria Maresi, The Hyve, 23 April 2020 (Pistoia Alliance)
Data for drug discovery and healthcare is often trapped in silos, which hampers effective interpretation and reuse. To remedy this, such data needs to be linked both internally and to external sources to create a FAIR data landscape that can power semantic models and knowledge graphs.
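The linking idea described here can be sketched with a toy in-memory triple store. All identifiers and the external link below are illustrative placeholders, not data from the webinar:

```python
# Minimal illustration of linking siloed records into one graph of
# subject-predicate-object triples. All identifiers are hypothetical.

internal_assays = {"CMPD-17": {"ic50_nM": 42}}
external_links = {"CMPD-17": "https://example.org/public-db/compound-123"}

triples = set()
for cid, props in internal_assays.items():
    for prop, value in props.items():
        triples.add((cid, prop, value))
for cid, uri in external_links.items():
    triples.add((cid, "sameAs", uri))  # link out to an external resource

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# One query now spans internal data and the external link:
print(match(s="CMPD-17"))
```

Once internal records and external references live in the same triple space, a single pattern query crosses the former silo boundary, which is the interoperability step that semantic models and knowledge graphs build on.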
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Kairntech combines technologies from natural language processing (NLP) and machine learning to support clients in analysing large amounts of text-based information.
You can find more information at https://kairntech.com/
2020.04.07 Automated Molecular Design and the Bradshaw Platform webinar (Pistoia Alliance)
This presentation described how data-driven chemoinformatics methods may automate much of what has historically been done by a medicinal chemist. It explored what is reasonable to expect “AI” approaches might achieve, and what is best left with a human expert. The implications of automation for the human-machine interface were explored and illustrated with examples from Bradshaw, GSK’s experimental automated design environment.
OntoChem IT Solutions GmbH ...
... was founded in 2015 as a purely IT-oriented offshoot of OntoChem GmbH. Even before then we had many years of experience, and it has always been our mission to provide added value to our customers by helping them navigate today's complex information world: developing cognitive computing solutions, indexing intranet and internet data, and applying semantic search solutions for pharmaceutical, materials science and technology-driven businesses.
We strive to support our customers with the most useful tools for knowledge discovery possible, encompassing up-to-date data sources, optimized ontologies and high-throughput semantic document processing and annotation techniques.
We create new knowledge from structured and unstructured data by extracting relationships, thereby exploiting the full potential of full-text documents and databases while also scanning social media and news flows and analyzing web pages.
We aim at an unprecedented machine understanding of text and subsequent knowledge extraction and inference. The application of our methods to chemical compounds and their properties supports our customers in generating intellectual property and in using these compounds as novel therapeutics, agrochemical products, nutraceuticals, cosmetics and novel materials.
It's our mission to provide added value to customers by:
developing and applying cognitive computing solutions
creating intranet and internet data indexing and semantic search solutions
providing Big Data analytics for technology-driven businesses
supporting product development and surveillance.
We deliver useful tools for knowledge discovery for:
creating background knowledge ontologies
high-throughput semantic document processing and annotation
knowledge mining by extracting relationships
exploiting the full potential of full-text documents & databases while also scanning social media and news flows and analyzing web pages.
From Allotrope to Reference Master Data Management (OSTHUS)
We will present the updated Allotrope Framework and cover .adf files and how they are used. We'll demonstrate semantic modeling in .adf (OWL models plus the SHACL constraint language). We'll show how the data description layer in .adf can be extended via a "semantic hub" that we call Reference Master Data Management (RMDM), which can be used across the enterprise. RMDM provides a means to integrate metadata about any data source within your enterprise, including structured, semi-structured and unstructured data. Customer examples from current project work will be given where possible. Finally, we'll show how this approach scales: data science techniques can be employed beyond just the metadata, an approach we refer to as Big Analysis.
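To illustrate the role the SHACL layer plays, here is a toy constraint check in plain Python. The dict-based "shape" is a stand-in for real SHACL shapes, and the property names are invented, not actual Allotrope or .adf terms:

```python
# Toy constraint check in the spirit of SHACL node shapes: each shape
# lists required properties and allowed datatypes. Illustrative only;
# real .adf validation uses OWL models plus SHACL, not this dict format.

shape = {
    "targetClass": "ChromatographyResult",
    "required": {"sampleId": str, "retentionTimeMin": float},
}

def validate(node, shape):
    """Return a list of violation messages (empty list = conforms)."""
    violations = []
    for prop, dtype in shape["required"].items():
        if prop not in node:
            violations.append(f"missing required property: {prop}")
        elif not isinstance(node[prop], dtype):
            violations.append(f"{prop}: expected {dtype.__name__}")
    return violations

good = {"sampleId": "S-001", "retentionTimeMin": 3.72}
bad = {"sampleId": "S-002"}
print(validate(good, shape))  # []
print(validate(bad, shape))   # ['missing required property: retentionTimeMin']
```

Validation of this kind is what lets a shared data description layer reject malformed instrument output before it enters the enterprise metadata hub.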
Challenges & Opportunities of Implementing FAIR in Life Sciences (OSTHUS)
Speak in common terms – identify Business Outcomes (value) as well as technology
Don’t say “semantics”, “FAIR”, “ontologies”, etc. – talk about outcomes and results
Drive projects through results – QUICK WINS
Identify the right data – build off of that (evolution not revolution)
Think about legacy systems, provenance, governance, stewardship, etc. – have answers to the nay-sayers.
Be honest about what this will do and what it won't
ROI – have this in mind (Business Value not Tech Value)
Cost savings (reduced hours, faster search, accurate reporting, better visibility, etc.)
Risk mitigation (improved regulatory compliance, corporate vs. individual knowledge, M&A, etc.)
Innovation (what is the value to being a thought leader?)
The Pistoia Alliance Information Ecosystem Workshop (Pistoia Alliance)
Michael Braxenthaler, president of the Pistoia Alliance, introduces the concept of the information ecosystem in life science research and discusses the role the Pistoia Alliance can play within this ecosystem. The workshop occurred in October 2011.
Notes taken to support breakout discussion of possible business models necessary to support the information ecosystem in life science R&D during the Pistoia Alliance Information Ecosystem Workshop in October 2011.
David Klatte (Pfizer) presented on this potential new working group during the "Dragons' Den" portion of the Pistoia Alliance Conference in Boston, MA, on April 24, 2012.
The Pistoia Alliance Biology Domain Strategy, April 2011 (Pistoia Alliance)
Michael Braxenthaler (Roche and external liaison officer for Pistoia) describes the Pistoia Alliance biology domain strategy at the first Pistoia Alliance Conference in April 2011.
The Pistoia Alliance: Strategy, Progress, Momentum (Pistoia Alliance)
Pistoia Alliance Board Member Ramesh Durvasula of BMS provides an overview of the Pistoia Alliance and project status at the BioITWorld Expo in Boston on April 13, 2011.
Semantic Web: Technologies and Applications for Real-World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for Real-World," tutorial at the 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
Ontology Based Approach for Semantic Information Retrieval System (IJTET Journal)
Abstract—Information retrieval plays an important role in current search engines, which perform searches based on keywords and return an enormous amount of data from which the user cannot easily identify the essential and most important information. This limitation may be overcome by a new web architecture known as the Semantic Web, which replaces keyword-based search with conceptual (semantic) search. Natural language processing techniques are commonly implemented in QA systems to accept users' questions, and several steps are followed to convert a question into query form for retrieving an exact answer. In conceptual search, the search engine interprets the meaning of the user's query and the relations among the concepts a document contains with respect to a particular domain, producing specific answers instead of lists of results. In this paper, we propose an ontology-based semantic information retrieval system built on the Jena semantic web framework: the user enters an input query, which is parsed by the Stanford Parser; a triplet-extraction algorithm is then applied; for each input query a SPARQL query is formed and run against the knowledge base (ontology), which finds the appropriate RDF triples and retrieves the relevant information using the Jena framework.
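A drastically simplified sketch of that pipeline: a question template stands in for the Stanford Parser, and a Python pattern match stands in for SPARQL over Jena. The mini knowledge base and question form are invented for illustration:

```python
# Toy version of the pipeline in the abstract: extract a (subject,
# predicate, object) pattern from a question, then match it against a
# small knowledge base. Real systems use a full parser and SPARQL/Jena.

knowledge_base = {
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "hasClass", "NSAID"),
    ("Ibuprofen", "treats", "Headache"),
}

def question_to_pattern(question):
    # Hypothetical "triplet extraction": handles one question template.
    if question.startswith("What treats "):
        obj = question[len("What treats "):].rstrip("?")
        return (None, "treats", obj)  # None = variable, like SPARQL's ?x
    raise ValueError("unsupported question form")

def answer(question):
    s, p, o = question_to_pattern(question)
    return sorted(subj for (subj, pred, obj) in knowledge_base
                  if pred == p and obj == o)

print(answer("What treats Headache?"))  # ['Aspirin', 'Ibuprofen']
```

The point of the semantic approach is visible even at this scale: the system returns the specific entities that satisfy the relation, rather than a ranked list of documents that merely mention the keywords.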
Semantics in Financial Services - David Newman (Peter Berger)
David Newman serves as a Senior Architect in the Enterprise Architecture group at Wells Fargo Bank. He has been following semantic technology for the last three years and has developed several business ontologies. He has been instrumental in thought leadership at Wells Fargo on the application of semantic technology, and he represents the Financial Services Technology Consortium (FSTC) on the W3C SPARQL Working Group.
AHM 2014: OceanLink, Smart Data versus Smart Applications (EarthCube)
Presentation given by Krzysztof Janowicz and Pascal Hitzler in the afternoon Architecture Forum session on Day 1, June 24, at the EarthCube All-Hands Meeting.
A Generic Scientific Data Model and Ontology for Representation of Chemical Data (Stuart Chalk)
The current movement toward openness and sharing of data is likely to have a profound effect on the speed of scientific research and the complexity of questions we can answer. However, a fundamental problem with currently available datasets (and their metadata) is heterogeneity in terms of implementation, organization, and representation.
To address this issue we have developed a generic scientific data model (SDM) to organize and annotate raw and processed data, and the associated metadata. This paper will present the current status of the SDM, the implementation of the SDM in JSON-LD, and the associated scientific data model ontology (SDMO). Example usage of the SDM to store data from a variety of sources will be discussed, along with future plans for the work.
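To show the flavour of a JSON-LD implementation, here is a minimal record whose @context maps local keys to ontology IRIs. The namespace, terms and values are placeholders, not the actual SDM/SDMO vocabulary:

```python
import json

# Sketch of annotating a measurement in JSON-LD: the @context maps local
# keys to (hypothetical) ontology IRIs, so the same record is both plain
# JSON and linked data. Names and IRIs are illustrative only.

record = {
    "@context": {
        "sdm": "https://example.org/sdmo#",   # placeholder namespace
        "value": "sdm:measuredValue",
        "unit": "sdm:unit",
    },
    "@id": "https://example.org/experiment/42/ph",
    "@type": "sdm:Measurement",
    "value": 7.4,
    "unit": "pH",
}

serialized = json.dumps(record, indent=2)
print(serialized)
```

Because the semantics live in the @context, ordinary JSON tooling keeps working, while linked-data tooling can expand each key to its full IRI, which is exactly the dual readability that addresses the heterogeneity problem described above.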
Linked Data: Opportunities for Entrepreneurs (3 Round Stones)
Multidisciplinary engineer and entrepreneur David Wood discusses the reasons, approaches and success stories for structured data on the World Wide Web. Linked Data is placed in context with the rest of the Web and that context is used to suggest some areas ripe for entrepreneurial innovation.
Richard Bolton (GSK, coordinator of Pistoia's ELN query services workstream) discusses the Alliance's chemistry strategy, which includes ELN query standards, hosted ELN, and chemistry externalization facilitation.
FAIR Data and Model Management for Systems Biology (and SOPs too!) (Carole Goble)
MultiScale Biology Network Springboard meeting, Nottingham, UK, 1 June 2015
FAIR Data and model management for Systems Biology
Over the past five years we have seen a change in expectations for the management of all the outcomes of research, that is, the "assets" of data, models, code, SOPs and so forth. Don't stop reading. Yes, data management isn't likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management, retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and code pre- and post-publication. And the multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Data and model management for the Systems Biology community is a multi-faceted challenge, including: the development and adoption of appropriate community standards (and navigation of the standards maze); the sustaining of international public archives capable of servicing quantitative biology; and the development of the necessary tools and know-how within researchers' own institutes so that they can steward their assets in a sustainable, coherent and credited manner while minimising burden and maximising personal benefit.
The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has grown out of several efforts in European programmes (the SysMO and ERASysAPP ERA-Nets and the ISBE ESFRI) and national initiatives (de.NBI, the German Virtual Liver Network, SystemsX, UK SynBio centres). It aims to support Systems Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges multi-scale biology presents.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Towards Ontology Development Based on Relational Database (ijbuiiir1)
Ontology is defined as the formal explicit specification of a shared conceptualization. It has been widely used in almost all fields, especially artificial intelligence, data mining, and the semantic web. It is constructed using various sets of resources, and improving the efficiency of ontology construction has become a very important task. To improve efficiency, an automated method of building an ontology from a database resource is needed. Since manual construction is found to be error-prone and below expectations, automatic construction of an ontology from a database has been developed. Construction rules for building an ontology from relational data sources are then put forward. Finally, an ontology for "automated building of ontologies from relational data sources" has been implemented.
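A minimal sketch of one common mapping (tables become classes, columns become datatype properties, foreign keys become object properties), using an invented SQLite schema; the paper's actual construction rules may differ:

```python
import sqlite3

# Sketch of table-to-ontology mapping: each table becomes a class, each
# column a datatype property, each foreign key an object property.
# Schema and naming conventions here are illustrative only.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE assay (
        id INTEGER PRIMARY KEY,
        ic50 REAL,
        compound_id INTEGER REFERENCES compound(id)
    );
""")

def tables_to_ontology(conn):
    ontology = {"classes": [], "datatype_properties": [], "object_properties": []}
    cur = conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
    for (table,) in cur.fetchall():
        ontology["classes"].append(table.capitalize())
        for col in conn.execute(f"PRAGMA table_info({table})"):
            ontology["datatype_properties"].append(f"{table}.{col[1]}")  # col[1] = column name
        for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
            ontology["object_properties"].append(f"{table}_has_{fk[2]}")  # fk[2] = referenced table
    return ontology

print(tables_to_ontology(conn))
```

Real systems refine this with rules for junction tables, inheritance and constraint mapping, but the core schema-walk is the same idea the abstract describes.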
Curation-Friendly Tools for the Scientific Researcher (bwestra)
Presentation for the Online Northwest Conference, Corvallis, Oregon, February 10, 2012.
Highlights electronic lab notebooks (ELN) and OMERO (Open Microscopy Environment) as two tools that enable researchers to better manage their research data.
FAIRification experience: clarifying the semantics of data matrices (Pistoia Alliance)
This webinar presents the Statistics Ontology (STATO), a semantic framework to support the creation of standardized analysis reports and to help with the review of results in the form of data matrices. STATO includes a hierarchy of classes and a vocabulary for annotating statistical methods used in life, natural and biomedical sciences investigations, text mining and statistical analyses.
Innovative applications of microphysiological systems (MPS) have been growing over the past decade, especially with respect to the use of complex human tissues for assessing the safety of drug candidates, but broad industry adoption of MPS methods has not yet become a reality.
This webinar addresses some recent advances in MPS development and begins to explore the barriers to increased incorporation of MPS to improve drug safety assessment and to provide safer, more effective drugs into the clinical pipeline.
Federated Learning (FL) is a learning paradigm that enables collaborative learning without centralizing datasets. In this webinar, NVIDIA presents the concept of FL and discusses how it can help overcome some of the barriers seen in the development of AI-based solutions for pharma, genomics and healthcare. Following the presentation, the panel debates other elements that could drive the adoption of digital approaches more widely and help answer currently intractable science and business questions.
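The core of one widely used FL algorithm, federated averaging, can be sketched in a few lines: each site takes a gradient step on its own data, and only model weights (never raw records) are shared with the coordinating server. The linear model and the two "hospital" datasets below are purely illustrative:

```python
# Minimal federated-averaging (FedAvg) sketch: each site computes a
# model update on its own data and only the weights leave the site.

def local_step(weights, data, lr=0.1):
    """One gradient step for the model y = w*x on one site's private data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# Two sites' private datasets (never shared with the server):
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

w_global = 0.0
for _ in range(50):
    # Each round: sites train locally, then the server averages the weights.
    w_a = local_step(w_global, site_a)
    w_b = local_step(w_global, site_b)
    w_global = (w_a + w_b) / 2

print(round(w_global, 2))  # converges to 2.0, the true slope
```

Production FL frameworks add weighted averaging by dataset size, secure aggregation and differential privacy on top of this loop, but the privacy-preserving structure is the same: data stays local, parameters travel.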
It seems that AI is becoming a buzzword, like design thinking. Everyone is talking about AI or wants to have AI and sees all the ideas and benefits; that's fine, but how do you get started? And what's different now? Three innovations have finally put AI on the fast track: Big Data, with the internet and sensors everywhere; massive computing power, especially through the cloud; and breakthrough algorithms, so computers can be trained with deep learning to accomplish more sophisticated tasks on their own. If you use new technology, you need to explore and know what's possible. Design thinking helps outline the steps and define the ways in which you're going to create the solution, starting with mapping the customer journey and defining who will be using the service enhanced with intelligent technology, or who will benefit and gain value from it. We discuss how these two worlds are coming together and how you can get started transforming your venture with artificial intelligence using design thinking.
Speaker: Claudio Mirti, Principal Solution Specialist – Data & AI, Microsoft
Themes and objectives:
To position FAIR as a key enabler to automate and accelerate R&D process workflows
FAIR Implementation within the context of a use case
Grounded in precise outcomes (e.g. faster and bigger science / more reuse of data to enhance value / increased ability to share data for collaboration and partnership)
To make data actionable through FAIR interoperability
Speakers:
Mathew Woodwark, Head of Data Infrastructure and Tools, Data Science & AI, AstraZeneca
Erik Schultes, International Science Coordinator, GO-FAIR
Georges Heiter, Founder & CEO, Databiology
This presentation reviewed the challenges in identifying, acquiring and utilizing research data in relation to an evolving data market. Strategic solutions were examined in which the FAIR principles play a key role in the future of data management.
Dr. Dennis Wang discusses possible ways to enable ML methods to be more powerful for discovery and to reduce ambiguity within translational medicine, allowing data-informed decision-making to deliver the next generation of diagnostics and therapeutics to patients quicker, at lowered costs, and at scale.
The talk by Dr. Dennis Wang was followed by a panel discussion with Mr. Albert Wang, M. Eng., Head, IT Business Partner, Translational Research & Technologies, Bristol-Myers Squibb.
With the explosion of interest in both enhanced knowledge management and open science, the past few years have seen considerable discussion about making scientific data "FAIR": findable, accessible, interoperable, and reusable. The problem is that most scientific datasets are not FAIR. When left to their own devices, scientists do an absolutely terrible job creating the metadata that describe the experimental datasets that make their way into online repositories. The lack of standardization makes it extremely difficult for other investigators to locate relevant datasets, to re-analyse them, and to integrate them with other data. The Center for Expanded Data Annotation and Retrieval (CEDAR) has the goal of enhancing the authoring of experimental metadata to make online datasets more useful to the scientific community. The CEDAR workbench for metadata management will be presented in this webinar. CEDAR illustrates the importance of semantic technology in driving open science. It also demonstrates a means of simplifying access to scientific datasets and enhancing the reuse of the data to drive new discoveries.
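The benefit of template-driven metadata authoring can be illustrated with a toy checker that pins fields to controlled-vocabulary terms; the fields and term IDs below are illustrative, not CEDAR's actual template model:

```python
# Toy illustration of template-driven metadata authoring: a template
# restricts each field to a controlled vocabulary, so records are filled
# in consistently instead of as free text. Terms are illustrative.

template = {
    "organism": {"NCBITaxon:9606", "NCBITaxon:10090"},   # human, mouse
    "assay_type": {"OBI:0000070", "OBI:0000916"},        # example OBI terms
}

def check_metadata(record, template):
    """Return violation messages; an empty list means the record conforms."""
    errors = []
    for field, allowed in template.items():
        if field not in record:
            errors.append(f"{field}: missing")
        elif record[field] not in allowed:
            errors.append(f"{field}: '{record[field]}' not in vocabulary")
    return errors

good = {"organism": "NCBITaxon:9606", "assay_type": "OBI:0000070"}
bad = {"organism": "human"}  # free text defeats findability
print(check_metadata(good, template))  # []
print(check_metadata(bad, template))
```

Standardized terms are what make datasets findable across repositories: a search for NCBITaxon:9606 retrieves every conforming human dataset, whereas free-text values ("human", "H. sapiens", "Homo sapiens") fragment the results.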
Open interoperability standards, tools and services at EMBL-EBI (Pistoia Alliance)
In this webinar Dr Henriette Harmse from EMBL-EBI presents how they are using their ontology services at EMBL-EBI to scale up the annotation of data and deliver added value through ontologies and semantics to their users.
FAIR webinar, Ted Slater: Progress towards commercial FAIR data products and ... (Pistoia Alliance)
Elsevier is a global information analytics business that helps institutions and professionals advance healthcare and open science to improve performance for the benefit of humanity.
In this webinar, we discuss how Elsevier is increasingly leveraging the FAIR Guiding Principles to improve its products and services to better serve the scientific community.
Implementing Blockchain applications in healthcare (Pistoia Alliance)
Blockchain technology can revolutionise the way information is exchanged between parties by bringing an unprecedented level of security and trust to these transactions. The technology is finding its way into multiple use cases but we are yet to see full adoption and real-world business implementation in the Healthcare industry.
In this webinar we will explore the main challenges and considerations for the implementation of Blockchain technology in Healthcare use cases. This is the third webinar in our Blockchain Education series.
Building trust and accountability - the role User Experience design can play ... (Pistoia Alliance)
In this webinar our panel of UX specialists give a brief introduction to User Experience before presenting the design opportunities UX can bring to AI. We all know that AI has great potential, but it has some significant hurdles to overcome, not least the human aspects of trust and the ethical considerations involved in designing for the life sciences.
In the late Fall and Winter of 2018, the Pistoia Alliance, in cooperation with Elsevier and the charitable organizations Cures within Reach and Mission: Cure, ran a datathon aiming to find drugs suitable for treatment of childhood chronic pancreatitis, a rare disease that causes extreme suffering. The datathon resulted in identification of four candidate compounds in a short time frame of just under three months. In this webinar our speakers discuss the technologies that made this leap possible.
PA webinar on benefits & costs of FAIR implementation in life sciences (Pistoia Alliance)
The slides from the Pistoia Alliance Debates Webinar, in which a panel of experts from technology support providers and the biopharma industry were invited to share their views on the "Benefits and costs of FAIR implementation for the life science industry".
Creating novel drugs is an extraordinarily hard and complex problem.
One of the many challenges in drug design is the sheer size of the search space for novel chemical compounds. Scientists need to find molecules that are active toward a biological target or pathway and at the same time have acceptable ADMET properties.
There is now considerable research going on using various AI and ML approaches to tackle these challenges.
Our distinguished speakers, Drs. Alex Tropsha and Ola Engkvist, will discuss their recent work in Drug Design involving Deep Reinforcement Learning and Neural Networks, and will answer questions from the audience on the current state of the research in the field.
Speakers:
Prof Alex Tropsha, Professor at University of North Carolina at Chapel Hill, USA
Dr. Ola Engkvist, Associate Director at AstraZeneca R&D, Gothenburg, Sweden
The slides from the continuing part of the Pistoia Alliance's drive to improve education and communication around new technologies for life science professionals. This webinar explored how blockchain/DLT and IoT could come together to add even more trust to the GxP domain. If you want to know more about how these new technologies could help enhance GxP compliance, this webinar will give you much food for thought.
This talk presents an overview of the philosophy and ongoing work of the PhUSE project “Clinical Trials Results as Resource Description Framework.” The team is converting data from the CDISC Study Data Tabulation Model (SDTM) to graph data using an ontology-based approach. The wider implications of this work will be discussed, along with deployment strategies within and beyond the industry.
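The conversion the PhUSE team describes can be sketched in miniature: each mapped cell of a tabular, SDTM-style record becomes one RDF triple whose predicate comes from the model. The sketch below is a toy, not the project's actual pipeline; the `ex:` URIs and the column-to-predicate mapping are invented for illustration, and a real conversion would map through a formal ontology.

```python
# A toy SDTM-like tabulation row (column names are illustrative only).
row = {"USUBJID": "subj-001", "DOMAIN": "VS", "VSTESTCD": "SYSBP", "VSORRES": "120"}

# Map each column to a predicate URI from a (hypothetical) study ontology.
PREDICATES = {
    "DOMAIN": "ex:domain",
    "VSTESTCD": "ex:testCode",
    "VSORRES": "ex:resultValue",
}

def row_to_triples(row):
    """One record becomes one subject node plus one triple per mapped column."""
    subject = "ex:" + row["USUBJID"]
    return {(subject, PREDICATES[col], val)
            for col, val in row.items() if col in PREDICATES}

triples = row_to_triples(row)
# Unmapped columns (here USUBJID itself) are simply not emitted as triples.
```

The appeal of the graph form is that rows from different SDTM domains end up in one merged triple set, queryable together without a relational join schema.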
RDF is very simple. It provides a framework for describing things according to their relationships. A key value proposition for semantic technologies is to make it easy to describe and connect related data sources, in order to search them as deeply as we care to. Although there is much useful effort in the formal terminology space, the broad focus is on practical outcomes – specifically, to provide a standard set of methods and framework for describing and linking data. This is NOT a standard for what the minimum or perfect data definition should be! – it is a standard way of defining and redefining data…
Lowering barriers to interoperability using standards does NOT mean using standardized, approved terms in an ontology or API. It means using a common framework for resource description, in which terms can easily be surfaced, visualized, tested, merged – due to the common approach.
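The point above can be illustrated without any special tooling: an RDF dataset is conceptually just a set of (subject, predicate, object) statements, and two independently produced sources become connected the moment they use a shared identifier. A minimal sketch in plain Python (all URIs and terms are invented for illustration):

```python
# Two datasets produced independently; they "link" by sharing identifiers.
source_a = {
    ("ex:patient42", "ex:hasSample", "ex:sample7"),
    ("ex:sample7", "ex:measuredProtein", "uniprot:P04637"),
}
source_b = {
    ("uniprot:P04637", "ex:encodedBy", "ex:geneTP53"),
    ("ex:geneTP53", "ex:species", "ex:HomoSapiens"),
}

# Merging is set union -- no schema migration, no join tables.
graph = source_a | source_b

def objects(graph, subject, predicate):
    """All objects of triples matching a (subject, predicate) pair."""
    return {o for s, p, o in graph if s == subject and p == predicate}

# Follow the chain sample -> protein -> gene across the two sources.
protein = objects(graph, "ex:sample7", "ex:measuredProtein").pop()
gene = objects(graph, protein, "ex:encodedBy").pop()
```

Nothing about `source_a` had to be agreed in advance with the authors of `source_b` beyond the shared protein identifier, which is the "lowered barrier" being claimed.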
SPARQL is an open standard for querying RDF: a query language and protocol that can be exposed as a query service API. It is supported by a variety of visual query interfaces and takes advantage of growing linked open data resources. SPARQL APIs can be accessed via SOAP and REST.
Agile resource description is a key benefit of RDF and SPARQL. Although they can support highly formal resource description, RDF and SPARQL lower the barrier to interoperability caused by excessive concern about perfect specification of reference content. RDF opens up this conversation by providing a standard framework and practices for mixing, mapping and updating resource descriptions by data model providers AND consumers. RDF can also manage provenance and links to original data, full experiment records, etc.

SPARQL endpoints provide visible data models that can be visually managed by data providers. They can also be rapidly adapted by local users mapping to more or less well-formed APIs. Depending on user need, these can be narrowly practical application ontologies or formal OWL models. Application ontologies can be rapidly mapped to OWL ontologies and vice versa.
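The agility claim can be made concrete: because the store is just a set of triples, a consumer can assert a brand-new predicate and query for it immediately, with no database refactoring. The toy below mimics a SPARQL basic graph pattern in plain Python, with `?`-prefixed strings as variables; it is SPARQL-like in spirit only, and every term is invented.

```python
def match(graph, patterns):
    """Solve a list of triple patterns; strings starting with '?' are variables.
    Returns a list of variable-binding dicts, like a SPARQL SELECT result."""
    solutions = [{}]
    for pattern in patterns:
        next_solutions = []
        for binding in solutions:
            # Substitute already-bound variables into the pattern.
            concrete = tuple(binding.get(t, t) for t in pattern)
            for triple in graph:
                new, ok = dict(binding), True
                for want, have in zip(concrete, triple):
                    if want.startswith("?"):
                        new[want] = have
                    elif want != have:
                        ok = False
                        break
                if ok:
                    next_solutions.append(new)
        solutions = next_solutions
    return solutions

graph = {
    ("ex:assay1", "ex:target", "ex:proteinX"),
    ("ex:proteinX", "ex:encodedBy", "ex:geneY"),
}
# An end user extends the model on the fly: a new predicate,
# no ALTER TABLE, no schema release.
graph.add(("ex:geneY", "ex:curatorNote", "ex:note1"))

rows = match(graph, [
    ("ex:assay1", "ex:target", "?p"),
    ("?p", "ex:encodedBy", "?g"),
    ("?g", "ex:curatorNote", "?n"),
])
```

The query spanning the old and new predicates succeeds the moment the new triple is added, which is what "end users can extend the API" amounts to in practice.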
Broadly, semantic technologies provide a standard framework and methods for describing and querying data resources and the connections between data elements. The technology space has been slowly growing and maturing over the past decade. People were writing papers and doing work in the area in the late ’90s. Tim Berners-Lee wrote an article, published in Scientific American in 2001, that began popularizing the concepts. These methods make it possible to visualize and connect data in a way that makes sense both to humans and computers. Using RDF and SPARQL, data descriptions can be modified by end users without refactoring the database or databases.
[Figure: graph of linked entities, clockwise from 6 o'clock: Species, (protein description), Protein, Gene, Gene Location.] Terminology around new methods can cause confusion. For ease, I find it useful at times to refer to “assertions” instead of “triples”, and to “data structure” instead of “ontologies”.
These methods have been developed by the W3C, MIT, Stanford, Dublin Core, DERI and elsewhere for just about a decade. Vendor and consumer adoption and technical maturity have grown over the past 8 years or so, with exponential growth over the past 4 years. This compares very favorably to “proprietary” methods and standards invented by single vendors.
An agile method: human-readable ontologies, no more over-specification, easy to manage and update, and end users can extend the API.
First we linked all of the data of interest to help the customer understand what happens biologically in organ transplant rejection scenarios. Next we applied SPARQL to run searches across clinical, protein and gene expression databases to screen for patients at risk.
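The screening step amounts to a join across the linked clinical and expression graphs. A hedged sketch of the idea in plain Python (patient identifiers, predicate names and the "high" threshold are all invented; the real project queried live databases via SPARQL):

```python
# Triples from two previously separate sources, linked by patient URI.
clinical = {
    ("ex:patient1", "ex:transplant", "ex:kidney"),
    ("ex:patient2", "ex:transplant", "ex:kidney"),
}
expression = {
    ("ex:patient1", "ex:markerLevel", "high"),
    ("ex:patient2", "ex:markerLevel", "normal"),
}
graph = clinical | expression

# Screen: transplant recipients whose rejection marker is elevated.
at_risk = sorted(
    s for s, p, o in graph
    if p == "ex:transplant"
    and (s, "ex:markerLevel", "high") in graph
)
```

Because both sources describe patients with the same URIs, the cross-database screen is a simple graph traversal rather than a bespoke integration project.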
Not a new data standard, but a standard way of publishing and linking data.
Extensible, so you can start and get to a ‘good enough’ state quickly.
Created with federation in mind.
Out-of-the-box benefits for project completion, maintenance and extension to new data sources.