NeXML is a proposed data exchange standard for phylogenetics that addresses issues with the current NEXUS format. It defines an XML schema for representing phylogenetic data such as trees, networks, and character data. The schema is designed to be extensible, to reuse prior standards, and to take advantage of existing XML tools. Implementations include XML parsers and writers in multiple programming languages, as well as experiments with semantic annotation and web services.
NeXML is an exchange standard for representing phyloinformatic data — inspired by the commonly used NEXUS format, but more robust and easier to process.
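As a rough illustration of why an XML-based format is "easier to process" than NEXUS, a NeXML-style document can be read with off-the-shelf XML tooling. The snippet below is a hand-written sketch loosely following the published schema (element and attribute names as commonly shown in NeXML examples; it is not a validated NeXML document):

```python
import xml.etree.ElementTree as ET

# A minimal NeXML-style document (hand-written sketch; the real schema at
# nexml.org is richer and more strictly namespace-qualified).
DOC = """<nexml xmlns="http://www.nexml.org/2009" version="0.9">
  <otus id="taxa1">
    <otu id="t1" label="Homo sapiens"/>
    <otu id="t2" label="Pan troglodytes"/>
  </otus>
  <trees id="trees1" otus="taxa1">
    <tree id="tree1">
      <node id="n1" root="true"/>
      <node id="n2" otu="t1"/>
      <node id="n3" otu="t2"/>
      <edge id="e1" source="n1" target="n2" length="6.5"/>
      <edge id="e2" source="n1" target="n3" length="6.5"/>
    </tree>
  </trees>
</nexml>"""

NS = {"nex": "http://www.nexml.org/2009"}

def taxon_labels(xml_text):
    """Return the labels of all OTUs (taxa) in the document."""
    root = ET.fromstring(xml_text)
    return [otu.get("label") for otu in root.findall(".//nex:otu", NS)]

print(taxon_labels(DOC))  # ['Homo sapiens', 'Pan troglodytes']
```

No hand-rolled tokenizer is needed, which is exactly the advantage over NEXUS that the standard aims for: any XML parser gets you a navigable tree of elements.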
Application of Ontology in Semantic Information Retrieval by Prof Shahrul Azman (Khirulnizam Abd Rahman)
Application of Ontology in Semantic Information Retrieval
by Prof Shahrul Azman from FSTM, UKM
Presentation for MyREN Seminar 2014
Berjaya Hotel, Kuala Lumpur
27 November 2014
Gellish: A Standard Data and Knowledge Representation Language and Ontology (Andries van Renssen)
Database structures should allow for the expression of any fact that can be expressed in natural languages. This means that they should allow for any expression of facts in a universal formal language. Constraints should be specified in a separate layer. This article describes the basic concepts of such a universal semantic database structure and associated formal subset of a natural language.
RuleML 2015: Ontology Reasoning using Rules in an eHealth Context (RuleML)
Traditionally, nurse call systems in hospitals are rather simple: patients have a button next to their bed to call a nurse. Which specific nurse is called cannot be controlled, as there is no extra information available. This is different for solutions based on semantic knowledge: if the state of caregivers (busy or free), their current position, and, for example, their skills are known, a system can always choose the most suitable nurse for a call. In this paper we describe such a semantic nurse call system implemented using the EYE reasoner and Notation3 rules. The system is able to perform OWL-RL reasoning. Additionally, we use rules to implement complex decision trees. We compare our solution to an implementation using OWL-DL, the Pellet reasoner, and SPARQL queries. We show that our purely rule-based approach gives promising results. Further improvements will lead to a mature product which will significantly change the organization of modern hospitals.
Build an application upon Semantic Web models. Brief overview of Apache Jena and OWL-API.
Semantic Web course
e-Lite group (https://elite.polito.it)
Politecnico di Torino, 2017
A summary of various COMBINE standardization activities (Mike Hucka)
Invited presentation given at the Whole-Cell Modeling Summer School, held in Rostock, Germany, March 2015.
https://sites.google.com/site/vwwholecellsummerschool/important-dates/programm
A little more semantics goes a lot further! Getting more out of Linked Data ... (Michel Dumontier)
This tutorial will provide detailed instruction to create and make use of formalized ontologies from linked open data for advanced knowledge discovery including consistency checking and answering sophisticated questions.
Automated reasoning in OWL offers the tantalizing possibility of advanced knowledge discovery, including verifying the consistency of conceptual schemata in information systems, verifying data integrity, and answering expressive queries over the conceptual schema and the data. Given that a large amount of structured knowledge is now available as linked data, the challenge is to formalize this knowledge so that the intended semantics become explicit and the reasoning is efficient and scalable. While using the full expressiveness of OWL 2 yields ontologies that can be used for consistency verification, classification, and query answering, the use of less expressive OWL profiles enables efficient reasoning and supports different application scenarios. In this tutorial,
- we describe how to generate OWL ontologies from linked data
- check consistency of knowledge
- automatically transform ontologies into OWL profiles
- use this knowledge in applications to integrate data and answer sophisticated questions across domains.
- expressive ontologies enable data integration, verifying consistency of knowledge and answering questions
- formalization of linked data will create new opportunities for knowledge discovery
- OWL 2 profiles support more efficient reasoning and query answering procedures
- recent technology facilitates the automatic conversion of OWL 2 ontologies into profiles
- OWL ontologies can dramatically extend the functionality of semantically-enabled web sites
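To make the reasoning step concrete, here is a toy, pure-Python sketch (not a real OWL reasoner; class and individual names are invented) of the kind of subclass materialization and disjointness checking that an OWL 2 RL reasoner performs at scale over linked data:

```python
# Toy illustration: triples as Python tuples, plus a tiny RDFS-style rule
# engine that materializes inferences to a fixpoint.
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"
DISJOINT = "owl:disjointWith"

def materialize(triples):
    """Apply two rules to a fixpoint:
    (1) subClassOf is transitive;
    (2) an instance of a class is an instance of its superclasses."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in triples:
            if p == SUBCLASS:
                for s2, p2, o2 in triples:
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
                    if p2 == TYPE and o2 == s:
                        new.add((s2, TYPE, o))
        if not new <= triples:
            triples |= new
            changed = True
    return triples

def inconsistent(triples):
    """Flag individuals typed by two classes declared disjoint."""
    disjoint = {(a, b) for a, p, b in triples if p == DISJOINT}
    types = {}
    for s, p, o in triples:
        if p == TYPE:
            types.setdefault(s, set()).add(o)
    return any((a, b) in disjoint or (b, a) in disjoint
               for cs in types.values() for a in cs for b in cs)

kb = {
    ("Enzyme", SUBCLASS, "Protein"),
    ("Protein", SUBCLASS, "Molecule"),
    ("trypsin", TYPE, "Enzyme"),
    ("Protein", DISJOINT, "NucleicAcid"),
}
closed = materialize(kb)
print(("trypsin", TYPE, "Molecule") in closed)  # True
print(inconsistent(closed))                     # False
```

This is the sense in which "expressive ontologies enrich incomplete data with background knowledge": the fact that trypsin is a Molecule was never asserted, only inferred.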
Data integration is a perennial challenge facing large-scale data scientists. Bio-ontologies are useful in this endeavour as sources of synonyms and also for rules-based fuzzy integration pipelines.
247th ACS Meeting: Experiment Markup Language (ExptML) (Stuart Chalk)
To integrate science into the semantic web it is important to capture the context of research as it is done. ExptML is designed to store information and workflows from the scientific process.
Often information is spread among several data sources, such as hospital databases, lab databases, spreadsheets, etc. Moreover, the complexity of each of these data sources might make it difficult for end-users to access them, and even more so to query all of them at the same time. A solution that has been proposed to this problem is ontology-based data access (OBDA). OBDA is a popular paradigm, developed since the mid-2000s, for querying various types of data sources using a common vocabulary familiar to the end-users. In a nutshell, OBDA separates the user from the data sources (relational databases, CSV files, etc.) by means of an ontology: a common terminology that provides the user with a convenient query vocabulary, hides the structure of the data sources, and can enrich incomplete data with background knowledge. About a dozen OBDA systems have been implemented in both academia and industry.
In this tutorial we will give an overview of OBDA and our system, -ontop-, which is currently being used in the context of the European project Optique. We will discuss how to use -ontop- for data integration, in particular concentrating on:
– How to create an ontology (common vocabulary) for a life science domain.
– How to map available data sources to this ontology.
– How to query the database using the terms in the ontology.
– How to check consistency of the data sources w.r.t. the ontology.
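The core OBDA move, hiding the structure of the data sources behind a query vocabulary, can be sketched in a few lines. This is a toy illustration with invented table and predicate names, not the -ontop- API (real systems use R2RML-style mappings and SPARQL):

```python
import sqlite3

# Users ask questions in ontology vocabulary ("Patient", "hasDiagnosis");
# mappings unfold each ontology term into SQL over the actual tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE admissions (pid INTEGER, name TEXT);
    CREATE TABLE lab_results (pid INTEGER, icd_code TEXT);
    INSERT INTO admissions VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO lab_results VALUES (1, 'E11'), (2, 'I10');
""")

# Each ontology predicate maps to a SQL query producing its instances.
MAPPINGS = {
    "Patient":      "SELECT pid, name FROM admissions",
    "hasDiagnosis": "SELECT pid, icd_code FROM lab_results",
}

def instances(predicate):
    """Answer an ontology-level predicate by unfolding its mapping into SQL."""
    return conn.execute(MAPPINGS[predicate]).fetchall()

# "Which patients have diagnosis E11?", asked purely in ontology terms:
diagnosed = {pid for pid, code in instances("hasDiagnosis") if code == "E11"}
print([name for pid, name in instances("Patient") if pid in diagnosed])  # ['Alice']
```

The end-user never sees the `admissions` or `lab_results` schema; swapping the underlying tables only requires updating the mappings, not the queries.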
Ontology-based data access: why it is so cool! (Josef Hardi)
A brief introduction to ontology-based data access (OBDA for short) and its core implementation. I also presented a recent simple benchmark between -ontop- and Semantika, two of the most widely available OBDA frameworks, in terms of query performance (details in the appendix section). The slides were presented at the Friday Research Meeting at the Stanford Center for Biomedical Informatics Research (BMIR).
License: Creative Commons by Attribution 3.0
SBML (the Systems Biology Markup Language) (Mike Hucka)
Morning tutorial given at the COMBINE/ERASysApp day of tutorials on "Modelling and Simulation of Biological Models" on Sunday, September 14, ahead of ICSB 2014 in Melbourne, Australia.
Ontology is the study of, or concern with, what kinds of things exist: what entities there are in the universe. The word derives from the Greek onto (being) and logia (written or spoken discourse). It is a branch of metaphysics, the study of first principles or the root of things.
Short summary of recent SBML developments given at the COMBINE (COmputational Modeling in BIology NEtwork) 2014 meeting held at the University of Southern California in August, 2014. The meeting page is available at http://co.mbine.org/events/COMBINE_2014
Current conceptual models and methodologies for Web applications concentrate on content, navigation, and service modeling. Although some of them are meant to address semantic web applications too, they do not fully exploit the potential deriving from interaction with ontological data sources and from semantic annotations. This paper proposes an extension to Web application conceptual models toward the Semantic Web. We devise an extension of the WebML modeling framework that fulfills most of the design requirements emerging in the new area of the Semantic Web. We generalize the development process to cover the Semantic Web and devise a set of new primitives for ontology importing and querying. Finally, an implementation prototype of the proposed concepts is presented within the commercial tool WebRatio.
ACS 248th Paper 136: JSmol/JSpecView Eureka Integration (Stuart Chalk)
Integration of the combined JSmol/JSpecView molecular viewer/spectral viewer software in the Eureka Research Workbench. Can display molecular structures, spectra and the linked version where clicking on a peak shows molecular movement (IR).
The slides discuss the research agenda for search of the semantic web and current available search tools. The slides were prepared for an audience of information
Building nTier Applications with Entity Framework Services (Part 1) (David McCarter)
Learn how to build real-world nTier applications with the new Entity Framework and related services. With this new technology built into .NET, you can easily wrap an object model around your database and have all the data access automatically generated, or use your own stored procedures and views. The session will demonstrate how to create and consume these new technologies from the ground up and focus on database modeling including views and stored procedures, along with coding against the model via LINQ. Dynamic data websites will also be demonstrated. Lots of code! Make sure to attend Part 2.
A tutorial on the history, use, and caveats of Java generics. Using the simple example of an interface for sort algorithms, the tutorial presents the history of generics and describes the problems being solved by generics. It also provides definitions, and examples in Java and C++, and discusses Duck Typing. It then describes two scenarios: (1) Scenario 1: you want to enforce type safety for containers and remove the need for typecasts when using these containers and (2) Scenario 2: you want to build generic algorithms that work on several types of (possibly unrelated) things. It also summarises caveats with generics, in particular type erasure.
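The tutorial's own examples are in Java and C++; as a rough Python analogue (hypothetical, not taken from the tutorial), Scenario 2's "generic algorithms over possibly unrelated types" might look like this, with duck typing doing the real work at runtime:

```python
from typing import TypeVar, Generic, List

T = TypeVar("T")

# A generic sort-algorithm interface, mirroring the tutorial's running example.
class Sorter(Generic[T]):
    def sort(self, items: List[T]) -> List[T]:
        raise NotImplementedError

class MergeSorter(Sorter[T]):
    def sort(self, items: List[T]) -> List[T]:
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left, right = self.sort(items[:mid]), self.sort(items[mid:])
        merged = []
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

# The same generic algorithm works on unrelated element types. As with Java's
# type erasure, the annotations impose no runtime checks: duck typing only
# requires that elements support `<=`.
ints = MergeSorter[int]().sort([3, 1, 2])
words = MergeSorter[str]().sort(["pear", "apple"])
print(ints, words)  # [1, 2, 3] ['apple', 'pear']
```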
Facilitating Business Interoperability from the Semantic Web (Roberto García)
Most approaches to B2B interoperability are based on language syntax standardisation, usually by XML Schemas. However, due to XML expressivity limitations, they are difficult to put into practice because language semantics are not available to computerised means. Therefore, there are many attempts to use formal semantics for B2B based on ontologies. However, this is a difficult jump, as there is already a huge XML-based B2B framework and ontology-based approaches lack momentum. Our approach to resolving this impasse is based on a direct and transparent transfer of existing XML Schemas and XML data to the semantic world. This process is based on an XML Schema to web ontology mapping, combined with a mapping from XML data to semantic web data. Once in the semantic space, it is easier to integrate different business standards using ontology alignment tools, and to develop business information systems thanks to semantics-aware tools.
Natural history research as a replicable data science (Rutger Vos)
Keynote presentation to the 2017 GARR conference, 17 November 2017, Venice, Italy. Introduction to natural history data types and analysis examples. Discussion of current practices in promoting reproducibility.
Species delimitation: species limits and character evolution (Rutger Vos)
Lecture slides for the program orientation Evolutionary Biology at the Institute of Biology Leiden, the Netherlands. Thursday, September 7th, 2017.
Lecture notes are here: https://docs.google.com/document/d/e/2PACX-1vRIv5mKK1fjBby--u97emC7hrqXUbxFQZe63P1FpguuhHLG6xykbwXKeKXCUE5W-LSpakXYCI621xCK/pub
Bioinformatics research at Naturalis. Raad voor Cultuur 2017. (Rutger Vos)
Presentation for members of the Raad voor Cultuur (Council for Culture), 27 June 2017, Naturalis. Gives an overview of the research activities on collection material with a bioinformatics component.
Presentation about image recognition applied to digitized specimen of the Van Groenendael Krijger collection of Javanese Papilionid butterflies. Occasion: BrainFood, 12 April 2017, Naturalis, Leiden, the Netherlands.
Taxonomic classification of digitized specimens using machine learning (Rutger Vos)
Progress in the development of neural networks that classify images of slipper orchids and Javanese butterflies. Talk to LEBEN at Leiden University's biology department, IBL, 20 September 2016.
Self-Updating Platform for the Estimation of Rates of Speciation, Migration A... (Rutger Vos)
Slides for my lightning talk on the SUPERSMART platform to the SSB/SSE/ASN annual meeting, Austin, TX, USA. SSB Spotlight Session: "Next generation phylogenetic inference 2". Monday, June 20th 2016, 3:20PM, Ballroom A.
How do you teach a robot to recognize species? (Rutger Vos)
Guest lecture slides for the bioinformatics student union (Exon) at the university of applied sciences, Leiden, the Netherlands. In this lecture I present the results of a research project at Naturalis Biodiversity Center to identify slipper orchids using image recognition techniques.
Modeling the biosphere: the natural historian's perspective (Rutger Vos)
Natural history collections of specimens are a rich source of data for discovering the patterns of biodiversity in space and time and for furthering our understanding of the underlying processes that generate these patterns. Modeling the biosphere in this manner can help address global challenges in relation to climate change, food security, emerging disease and conservation. (Talk to the 3rd annual eScience symposium, 8 October 2015).
Can we taste a 400-year-old tomato? (Rutger Vos)
Slides for my lecture at the Museum Jeugd Universiteit (http://museumjeugduniversiteit.nl) in Museum Boerhaave (http://www.museumboerhaave.nl), 19 October 2014.
PhyloTastic: names-based phyloinformatic data integration (Rutger Vos)
Lightning talk to the 2013 TDWG conference symposium on phyloinformatics, brief report on PhyloTastic with special attention to the taxonomic name reconciliation service TaxoSaurus.
Full title: "The tree of life as central unifying artefact for the integration of phylogenetic knowledge." This is a brief intro presentation for the 2011 BioHackathon in Kyoto, Japan. I describe a simple workflow built around semantic web services that add metadata to a backbone of the Tree of Life. The take home message is that such a structure can be a useful anchor to which knowledge can be attached, but that there are still issues with standards definition and adoption.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
NeXML
1. NeXML: A future data exchange standard for phylogenetics
Rutger Vos, University of British Columbia
2. Introduction (1/7): The problem
Increased automation in evolutionary informatics is hampered by poorly defined "standards".
[Slide navigation, shown on every slide: Introduction (The problem, EvoInfo interests, This subproject); Nexus issues (Parsing, Extensibility, XML goodies); Design (Principles, Re-use, Patterns, Inheritance, References); Implementation (Approach, ERD, Inheritance, Anatomy, Characters, Trees); Current status (Schema blocks, Parsers & writers, Experiments, To do); Resources]
3. Introduction (2/7): EvoInfo interests
Addressing interoperability problems by coding our way out of it:
Syntax: NeXML
Semantics: CDAO
Transport: PhyloWS
4.–11. (image-only slides; no recoverable text)
12. Design (4/5): Inheritance
Base (optional base/lang/href attributes)
Annotated (optional dict elements) extends Base
Labelled (optional label attribute) extends Annotated
IDTagged (required id attribute) extends Labelled
AbstractElement (in root schema) extends IDTagged
ConcreteElement (in instance document) restricts AbstractElement
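The layered design above can be sketched in ordinary code. The following is an illustrative Python sketch (not part of the NeXML implementation): each class in the chain adds exactly one attribute group, mirroring how the schema types extend one another.

```python
# Illustrative sketch of the schema's inheritance chain; class and
# attribute names follow the slide, the Python rendering is assumed.

class Base:
    """Optional base/lang/href attributes."""
    def __init__(self, base=None, lang=None, href=None):
        self.base, self.lang, self.href = base, lang, href

class Annotated(Base):
    """Adds optional <dict> annotation elements."""
    def __init__(self, dicts=None, **kw):
        super().__init__(**kw)
        self.dicts = dicts or []

class Labelled(Annotated):
    """Adds an optional label attribute."""
    def __init__(self, label=None, **kw):
        super().__init__(**kw)
        self.label = label

class IDTagged(Labelled):
    """Adds a required id attribute."""
    def __init__(self, id, **kw):
        super().__init__(**kw)
        self.id = id

elem = IDTagged(id="c1", label="my block")
```

Because each layer only adds attributes, any concrete element automatically carries the id, label, annotation, and base/lang/href slots of its ancestors.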
13.–14. (image-only slides; no recoverable text)
15. Implementation (2/6): Entity relationships (ER diagram; no recoverable text)
16. Implementation (3/6): Inheritance tree for elements (diagram; no recoverable text)
17. Implementation (4/6): Anatomy of a "block"
<characters id="c1" xsi:type="nex:DnaSeqs" otus="t1">
  <dict>
    <key>desc</key>
    <string>description … </string>
  </dict>
  Contents…
</characters>
18. Implementation (5/6): Character classes
Data type     Sequence granularity   Cells granularity
DNA           DnaSeqs                DnaCells
RNA           RnaSeqs                RnaCells
Protein       ProteinSeqs            ProteinCells
Standard      StandardSeqs           StandardCells
Continuous    ContinuousSeqs         ContinuousCells
Restriction   RestrictionSeqs        RestrictionCells
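The table is a straight cross product: six data types times two granularities (whole sequences vs. individual cells) yield twelve concrete character classes. A small Python sketch of that naming scheme (the function is illustrative, not part of the NeXML codebase):

```python
# Sketch: derive the 12 concrete character class names from the slide's
# 6 data types x 2 granularities.

DATATYPES = ("Dna", "Rna", "Protein", "Standard", "Continuous", "Restriction")
GRANULARITIES = ("Seqs", "Cells")

def character_class(datatype: str, granularity: str) -> str:
    """Return the concrete class name, e.g. ('Dna', 'Seqs') -> 'DnaSeqs'."""
    if datatype not in DATATYPES or granularity not in GRANULARITIES:
        raise ValueError(f"unknown combination: {datatype}/{granularity}")
    return datatype + granularity

classes = {character_class(d, g) for d in DATATYPES for g in GRANULARITIES}
```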
19. Implementation (6/6): Tree classes
Topology   Int edge lengths   Float edge lengths
Tree       IntTree            FloatTree
Network    IntNetwork         FloatNetwork
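The tree classes follow the same cross-product pattern: topology (Tree or Network, the latter presumably allowing reticulate edges) crossed with the edge length type (integer or floating point). An illustrative sketch of that naming scheme, analogous to the character classes:

```python
# Sketch: derive the four concrete tree class names on the slide from
# topology x edge-length type. The function itself is assumed, not
# taken from the NeXML implementation.

def tree_class(topology: str, length_type: type) -> str:
    """('Tree', float) -> 'FloatTree', ('Network', int) -> 'IntNetwork'."""
    if topology not in ("Tree", "Network"):
        raise ValueError(f"unknown topology: {topology}")
    prefix = {int: "Int", float: "Float"}[length_type]
    return prefix + topology
```

Declaring the length type in the class (rather than per edge) lets a validator reject, say, a fractional branch length in a tree that advertises integer edges.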