SBML (the Systems Biology Markup Language) - Mike Hucka
Morning tutorial given at the COMBINE/ERASysApp day of tutorials on "Modelling and Simulation of Biological Models" on Sunday, September 14, ahead of ICSB 2014 in Melbourne, Australia.
Recent developments in the world of SBML (the Systems Biology Markup Language) - Mike Hucka
The document discusses recent developments in SBML (Systems Biology Markup Language), including new features added to software tools like the SBML Test Suite, the addition of packages to SBML Level 3 to support new model types, and ongoing work to evolve SBML standards to better enable model sharing and reuse. SBML has become widely adopted for representing computational models in systems biology, with many software tools and thousands of publicly available models using the standard.
Short summary of recent SBML developments given at the COMBINE (COmputational Modeling in BIology NEtwork) 2014 meeting held at the University of Southern California in August, 2014. The meeting page is available at http://co.mbine.org/events/COMBINE_2014
A summary of various COMBINE standardization activities - Mike Hucka
Invited presentation given at the Whole-Cell Modeling Summer School, held in Rostock, Germany, March 2015.
https://sites.google.com/site/vwwholecellsummerschool/important-dates/programm
SBML, SBML Packages, SED-ML, COMBINE Archive, and more - Mike Hucka
SBML, SBML Packages, SED-ML, COMBINE Archive, and more is a presentation about standards for representing computational models in systems biology. It introduces SBML (Systems Biology Markup Language) as a format for exchanging models of biological processes between software tools. It describes extensions to SBML called packages that add new modeling constructs. It also briefly mentions related standards like SED-ML and COMBINE Archive.
Recent software and services to support the SBML community - Mike Hucka
MOCCASIN analyzed the MATLAB model and generated an equivalent SBML model with:
- 2 species named 'x1' and 'x2'
- 4 parameters named 'a', 'b', 'c', and 'd'
- An initial assignment block setting the initial concentrations of 'x1' and 'x2'
- A single reaction that uses the parameters and species to define the rate of change of each species according to the equations in the MATLAB code
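The model structure listed above can be sketched as a minimal SBML-like skeleton. The snippet below is an illustrative sketch only, not MOCCASIN's actual output: the model id `lv_model` and the `default` compartment are invented placeholders, and only the species and parameter lists are shown (using the SBML Level 3 Version 1 core namespace).

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of the generated model's skeleton (not MOCCASIN output).
SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"

sbml = ET.Element("sbml", {"xmlns": SBML_NS, "level": "3", "version": "1"})
model = ET.SubElement(sbml, "model", {"id": "lv_model"})  # hypothetical id

# Two species, 'x1' and 'x2', in a placeholder compartment.
species_list = ET.SubElement(model, "listOfSpecies")
for sid in ("x1", "x2"):
    ET.SubElement(species_list, "species",
                  {"id": sid, "compartment": "default",
                   "hasOnlySubstanceUnits": "false",
                   "boundaryCondition": "false", "constant": "false"})

# Four parameters 'a'..'d' referenced by the rate equations.
param_list = ET.SubElement(model, "listOfParameters")
for pid in ("a", "b", "c", "d"):
    ET.SubElement(param_list, "parameter", {"id": pid, "constant": "true"})

print(ET.tostring(sbml, encoding="unicode"))
```

In practice one would build such files with libSBML rather than raw XML; the point here is only the shape of the species and parameter lists that the summary describes.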
Standards and software: practical aids for reproducibility of computational r... - Mike Hucka
My presentation during the session titled "Reproducibility of computational research: methods to avoid madness" on Wednesday, 17 September 2014, during ICSB 2014, held in Melbourne, Australia.
SBML (Systems Biology Markup Language) is a format for representing computational models of biological processes. It defines data structures and serialization to XML for representing models in a neutral, machine-readable way. Development of SBML started in 2000 with the goal of facilitating exchange of models between software tools and databases. SBML provides syntax but limited semantics, so standard annotation schemes have been developed to link models to external data resources and provide additional meaning. The scope of SBML encompasses many types of biological models and is expanding through new packages to support additional model types.
This presentation discusses three declarative meta-programming techniques - meta-modelling, meta-logic programming, and explanation-based constraint programming - that are used in the author's research on describing, applying, and detecting design patterns and design defects. Each technique is introduced with examples and discussions of advantages and drawbacks. Finally, the presentation describes how the techniques are combined in the Ptidej tool to load programs, dynamic information, pattern descriptions, and detect patterns.
Use of artificial neural network in pattern recognition - kamalsrit
This document summarizes research on using artificial neural networks for pattern recognition. It discusses how pattern recognition involves tasks like classification and clustering. It describes the main stages of a pattern recognition system as data acquisition, representation, and decision making. It then focuses on artificial neural networks, describing their ability to learn complex relationships and adapt to data. Finally, it briefly mentions one previous work from 2009 that used neural networks for voice pattern recognition in interactive voice response systems.
Generating Natural-Language Text with Neural Networks - Jonathan Mugan
Automatic text generation enables computers to summarize text, to have conversations in customer-service and other settings, and to customize content based on the characteristics and goals of the human interlocutor. Using neural networks to automatically generate text is appealing because they can be trained through examples with no need to manually specify what should be said when. In this talk, we will provide an overview of the existing algorithms used in neural text generation, such as sequence2sequence models, reinforcement learning, variational methods, and generative adversarial networks. We will also discuss existing work that specifies how the content of generated text can be determined by manipulating a latent code. The talk will conclude with a discussion of current challenges and shortcomings of neural text generation.
The document summarizes activities of POPIX and DDMoRe related to population modelling and the clinical trial simulator. Key points include:
1) POPIX develops new population modelling methods in fields like pharmacology while partnering with DDMoRe on PK/PD modelling.
2) DDMoRe aims to establish modelling standards and shares disease models through its modelling library and framework.
3) POPIX works on flexible statistical models, Bayesian estimation, errors in design/covariates, hidden Markov models, and stochastic differential equation models.
4) The clinical trial simulator can simulate trials using various PKPD models and recruitment/compliance models, and integrate workflows for simulation and analysis.
Finding common ground between modelers and simulation software in systems bio... - Mike Hucka
The document discusses Systems Biology Markup Language (SBML), a format for representing computational models of biological processes. SBML allows models to be exchanged between different software applications and defines concepts like species, compartments, reactions, and parameters. It aims to serve as a common language for software in systems biology. The document outlines some basic SBML concepts and notes that the scope of SBML is not limited to metabolic models, but can also represent signaling pathways, neural models, pharmacokinetic models, and other types of models. It discusses how SBML continues to evolve through new Levels and Packages to support additional model constructs and capabilities.
A Profile of Today's SBML-Compatible Software - Mike Hucka
Slides from presentation given at the Workshop on Interoperability in Scientific Computing during the 7th IEEE International Conference on e-Science, Stockholm, Sweden, December 5, 2011.
SBML and related resources and standardization efforts - Mike Hucka
Slides from presentation given on November 21, 2011, at the 4th Global COE International Symposium on Physiome and Systems Biology for Integrated Life Sciences and Predictive Medicine, in Osaka, Japan.
A status update on COMBINE standardization activities, with a focus on SBML - Mike Hucka
The document discusses the Systems Biology Markup Language (SBML), which is a format for representing computational models of biological processes that has been under development since 2000; it describes the core concepts of SBML including reactions, species, compartments, and parameters as well as SBML levels and packages that extend its capabilities; and it provides information on where to find SBML specifications, software, and libraries.
This document discusses SBML (Systems Biology Markup Language), SBGN (Systems Biology Graphical Notation), and BioModels.net. SBML is a standard format for representing computational models of biochemical networks that allows models to be exchanged between software tools and researchers. Over 100 software tools now support SBML. SBGN is a project to develop a standard notation for diagrams of cellular networks. BioModels.net is a database of models encoded in SBML that has been made possible by the adoption of SBML as a common exchange format.
Kaggle Days Paris - ML Interpretability - Alberto Danese
This document discusses the importance of machine learning interpretability for enterprise adoption of ML. It begins with a brief history of AI and ML, noting that while adoption has increased, most companies are still in the exploration phase and have not deployed ML models into production. Regulators and humans require explanations for model predictions. The document then outlines different levels and approaches for ML interpretability, including enforcing constraints before building models, producing global explanations after model building using techniques like partial dependence plots, and generating local explanations for individual predictions using methods like LIME and Shapley additive explanations. It emphasizes that interpretability allows for more robust, fair and transparent models.
Creating a new language to support open innovation - Mike Hucka
Presentation given on 19 August 2013 at a BioBriefings meeting of the BioMelbourne Network (http://www.biomelbourne.org/events/view/289) in Melbourne, Australia.
A new language for a new biology: How SBML and other tools are transforming m... - Mike Hucka
Presentation given at the Victorian Systems Biology Symposium (http://www.emblaustralia.org/About_us/news/mike-hucka.aspx) at the Walter and Eliza Hall Institute in Melbourne, Australia, on 20 August 2013.
Systems biology aims to understand biological systems as a whole rather than individual parts. Early criticisms saw molecular biology as too reductionist. Systems modeling using mathematical approaches also emerged. Standards like SBML and community-building efforts were important to allow sharing and integration of computational models between different research groups and software tools. This helped a systems biology community flourish by providing interoperability between various modeling approaches and data types.
This document provides an overview of different modeling concepts and principles. It begins with an introduction to modeling in different domains like the human body, military operations, and economics. It then discusses general modeling approaches like physical, logical, and mathematical modeling. Different categories of events for simulation are outlined. Common modeling architectures, principles, and cycles from fields like military simulation, surgery, and physics are summarized. The document concludes with advice for early career modelers around designing their own models, generalizing principles, reading widely, questioning others, and drawing from multiple disciplines.
Practical Constraint Solving for Generating System Test Data - Lionel Briand
The document presents a technique for generating system test data by formulating it as a constraint solving problem. It discusses using constraint solving to find solutions that satisfy constraints from system specifications. Existing constraint solvers are not adequate for complex system constraints like those in the Object Constraint Language (OCL). The technique targets OCL and aims to generate valid test data more efficiently through a custom constraint solver. It provides a tax system as a sample data model to demonstrate generating test instances that satisfy constraints.
The document discusses a company that finances, develops, and operates renewable energy and efficiency installations. They collect large amounts of time series data from these installations, including temperature readings and flow rates taken at regular intervals. The author is considering using MongoDB to build a flexible data pipeline to store, search, and analyze this time series data. Key requirements are that the system needs to scale to potentially large amounts of data from many installations, and that it is designed with analytics and flexibility in mind to support a variety of use cases and evolving business needs.
Network Metrics and Measurements in the Era of the Digital Economies - Pavel Loskot
Rapidly evolving socio-technical systems require radically new approaches to measuring and monitoring their performance. System complexity and the desire for autonomy require moving from simple metrics to whole metrics frameworks. The metrics and measurement of complex systems are emerging as a new discipline.
A Mixed Discrete-Continuous Attribute List Representation for Large Scale Cla... - jaumebp
This work assesses the performance of the BioHEL data mining method on large-scale datasets and proposes a representation that deals efficiently with domains with mixed discrete-continuous attributes.
Introduction to modeling_and_simulation - Aysun Duran
1. This document provides an introduction to modeling and simulation. It discusses what modeling and simulation are, and the types of problems they can address.
2. Simulation involves operating a model of a system to study the system's properties without actually changing the real system. It allows experimenting with different configurations to evaluate and optimize system performance.
3. Developing a simulation model involves identifying the system components and relationships between them. Validating the model and performing simulation experiments then allows making recommendations to improve the real system.
This document provides an overview of modeling and simulation. It discusses what modeling and simulation are, the steps in developing a simulation model, designing a simulation experiment, and analyzing simulation output. It also provides an example of simulating a machine shop with multiple work stations to determine resource utilization under different arrival patterns. The key steps in a simulation study are problem formulation, model development, experiment design, output analysis and recommendations.
Handling Missing Attributes using Matrix Factorization - CS, NcState
This document summarizes a study on using matrix factorization to handle missing attributes in software defect prediction models. The study conducted two experiments: 1) evaluating the performance of a naive Bayes classifier as features were gradually removed, and 2) comparing the performance of naive Bayes with imputation versus matrix factorization on datasets with missing attributes. The results showed that naive Bayes performance decreased with fewer features, while matrix factorization performed better than naive Bayes when attributes were missing. The study concludes that matrix factorization is a promising approach for the missing data problem in defect prediction.
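As a toy illustration of the idea behind that study (not its actual code, and using invented data): factorizing a matrix from its observed entries lets one estimate the missing ones. The sketch below fits rank-1 factors to a tiny rank-1 matrix by stochastic gradient descent and imputes the single missing cell, which the rank-1 pattern implies should be 6.0.

```python
import random

# Hedged sketch: rank-1 matrix factorization by SGD to impute a missing
# entry. None marks the missing value; the matrix is the outer product
# of [1, 2, 3] with itself, so the missing cell should come out near 6.0.
M = [[1.0, 2.0, 3.0],
     [2.0, 4.0, None],
     [3.0, 6.0, 9.0]]

random.seed(0)
u = [random.random() for _ in range(3)]   # row factors
v = [random.random() for _ in range(3)]   # column factors
lr = 0.05                                 # learning rate

for _ in range(2000):
    for i in range(3):
        for j in range(3):
            if M[i][j] is None:           # skip missing entries
                continue
            err = M[i][j] - u[i] * v[j]   # residual on an observed entry
            u[i] += lr * err * v[j]       # gradient steps on both factors
            v[j] += lr * err * u[i]

imputed = u[1] * v[2]   # estimate for the missing cell
print(imputed)
```

The observed entries in the missing cell's row and column constrain its factors, which is why the factorization can recover it; real defect-prediction datasets would of course need higher ranks and regularization.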
This document discusses varieties of self-awareness and their uses in natural and artificial systems. It proposes a conceptual framework for metacognition and natural cognition. The document contains slides for presentations on this topic, including:
- Discussing how to analyze requirements by examining natural and artificial systems to understand design discontinuities.
- Explaining how environments can have agent-relative structure that produces varied information processing demands.
- Outlining a conceptual framework that includes reactive and deliberative architectures in natural systems, with different layers providing varieties of self-awareness.
Similar to SBML (the Systems Biology Markup Language) (20)
This presentation gives an overview of Caltech DIBS, a system for controlled digital lending (CDL) implemented by the California Institute of Technology Library in early 2021 to support course reserves and other academic library needs at Caltech.
This document reports on a survey of 69 software developers and non-developers about how they search for and evaluate ready-to-run software. The survey found that the most important characteristics when searching for software included the size of the software, similarity to other software used, software architecture, quality of support, and other people's opinions. Respondents also rated characteristics like programming languages used, reputation of developers, ease of use, licensing, and performance as usually or above-average in importance. The document concludes by recommending that software developers make key information like features, standards, pricing, requirements, and licensing prominent in documentation to improve discoverability and reuse of their software.
The document discusses COMBINE, which stands for Computational Modeling in Biology Network. COMBINE coordinates standards development, meetings, and infrastructure to support modeling in biology. It brings together people from different fields to develop standards through multiple phases of creation, evolution, and support. COMBINE also coordinates annual meetings and hackathons to facilitate software development and interoperability. It provides resources like common URIs to support adopted specifications.
Afternoon tutorial given at the COMBINE/ERASysApp day of tutorials on "Modelling and Simulation of Biological Models" on Sunday, September 14, ahead of ICSB 2014 in Melbourne, Australia.
Reproducibility of computational research: methods to avoid madness (Session ... - Mike Hucka
Introduction to the session "Reproducibility of computational research: methods to avoid madness" held Wednesday, September 17, during ICSB 2014 in Melbourne, Australia.
Update on SBML for Tuesday Sep. 17 (COMBINE 2013) - Mike Hucka
Michael Hucka provided an update on the status of SBML (Systems Biology Markup Language) development at the COMBINE 2013 conference in Paris. Key points included:
- The SBML editors are working to finalize changes for Version 2 of SBML Level 3 and Version 5 of Level 2, focusing on backward compatibility.
- Detailed status pages track the progress of package specifications.
- A status tracking spreadsheet monitors the progress of SBML Level 3 packages including hierarchical models, constraints, qualitative models, and more.
- Discussions are ongoing to develop specifications for multistate, multicomponent and multicompartment species to support structured entities and pattern rules.
- A draft specification has been
Computational Approaches to Systems Biology - Mike Hucka
Presentation given at the Sydney Computational Biologists meetup on 21 August 2013 (http://australianbioinformatics.net/past-events/2013/8/21/computational-approaches-to-systems-biology.html).
Common ground between modelers and simulation software: the Systems Biology M... - Mike Hucka
The document discusses several standards for modeling biological processes including SBML, SBO, MIRIAM, and SED-ML. SBML is a format for representing biological models that over 230 software systems support. SBO provides controlled vocabularies to annotate SBML models to make their meaning more precise. MIRIAM defines guidelines for including minimum provenance information in models. SED-ML is a format for recording simulation experiments to reproduce modeling results across different software. These standards aim to facilitate sharing and reuse of biological models.
General updates about SBML and SBML Team activitiesMike Hucka
The summary provides updates about SBML and the SBML Team's activities:
1. The SBML.org website was updated with a new version of the SBML software survey and statistics on the 81 SBML software tools reported between May and July.
2. The SBML development process is progressing, with work on specifications for the Hierarchical Model Composition package and new LaTeX templates for SBML package specifications.
3. A vote for a new SBML editor will take place, with nominations open to the sbml-discuss mailing list. SBML Team software such as libSBML continues to see strong download rates, with over 3,400 downloads of libSBML 5.0.0 since April.
1. SBML (the Systems Biology Markup Language)
Michael Hucka, Ph.D.
(On behalf of many people)
California Institute of Technology
Pasadena, California, USA
Tuesday, July 26, 2011 1
2. Roadmap
What is SBML?
What is the SBML community like today, and how did it get there?
Acknowledgments
4. Subject matter: computational modeling
[Diagram: Data, Experiments, Models]
Focus on mechanistic, computational models
• Preferably not statistical or curve-fitting models, but dynamical models expressing hypothesized physical & chemical mechanisms
- Equations refer to identifiable processes
- Parameters have physical interpretations
7. To achieve that, need effective means for sharing models
Not enough simply to publish lists of equations!
Need a software-independent format
• No single package answers all needs
• New techniques (and thus new tools) are developed continuously
• Different packages have different niche strengths
- Strengths are often complementary
Need to capture both
• Mathematical content of a model
• Semantic content of a model
8. SBML = Systems Biology Markup Language
Format for representing quantitative models
• Defines object model + rules for its use
- Serialized to XML
Neutral with respect to modeling framework
• ODE vs. stochastic vs. ...
A lingua franca for software
But: not a procedural description
9. Basic SBML concepts are simple
The reaction is central: a process occurring at a given rate
• Participants are pools of entities (species)
na A + nb B  --f([A],[B],[P],...)-->  np P
nc C  --f(...)-->  nd D + ne E + nf F
...
Models can further include:
• Other constants & variables • Unit definitions
• Compartments • Annotations
• Explicit math
• Discontinuous events
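A minimal sketch of how the concepts above serialize to XML, using only Python's standard library rather than libSBML. The namespace is SBML Level 3 Version 1 Core's published URI; the model content (species A, B, P and reaction r1) is an invented example.

```python
# A hand-written SBML Level 3 fragment illustrating the core concepts:
# species pools and a reaction with stoichiometries.  A sketch only --
# real tooling should read and validate SBML with libSBML.
import xml.etree.ElementTree as ET

SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"

doc = f"""<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="{SBML_NS}" level="3" version="1">
  <model id="example">
    <listOfCompartments>
      <compartment id="cell" constant="true"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="A" compartment="cell" constant="false"/>
      <species id="B" compartment="cell" constant="false"/>
      <species id="P" compartment="cell" constant="false"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="r1" reversible="false">
        <listOfReactants>
          <speciesReference species="A" stoichiometry="1" constant="true"/>
          <speciesReference species="B" stoichiometry="1" constant="true"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="P" stoichiometry="1" constant="true"/>
        </listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

root = ET.fromstring(doc)
ns = {"sbml": SBML_NS}
# The species pools and reaction participants are plain XML elements.
species = [s.get("id") for s in root.findall(".//sbml:species", ns)]
reactants = [r.get("species")
             for r in root.findall(".//sbml:listOfReactants/sbml:speciesReference", ns)]
print(species)    # ['A', 'B', 'P']
print(reactants)  # ['A', 'B']
```

Note that, per slide 8, this is a declarative description of the model, not a procedure for simulating it.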
10. Basic SBML concepts are simple
The reaction is central: a process occurring at a given rate
• Participants are pools of entities (species), which can be anything conceptually compatible

na A + nb B  --f([A],[B],[P],...)-->  np P
nc C  --f(...)-->  nd D + ne E + nf F
...

Models can further include:
• Other constants & variables • Unit definitions
• Compartments • Annotations
• Explicit math
• Discontinuous events
11. Examples of model types
• Signaling pathway models: BioModels Database model #BIOMD0000000153
• Conductance-based models ("rate rules" for temporal evolution of quantitative parameters): BioModels Database model #BIOMD0000000020
• Neural models ("events" for discontinuous changes in quantitative parameters): BioModels Database model #BIOMD0000000127
• Pharmacokinetic/pharmacodynamic models (a "species" is not required to be a biochemical entity): BioModels Database model #BIOMD0000000234
• Infectious disease models: BioModels Database model #MODEL1008060001
Scope of SBML is not limited to one kind of model
12. 300+ curated & annotated models in BioModels Database
15. Number of software systems supporting SBML
[Chart: growth in the number of SBML-supporting software systems, 2001-2011, counted in the middle of each year; 229 systems as of July 14, 2011]
http://sbml.org/SBML_Software_Guide
16. libSBML
Reads, writes, validates SBML
• Hundreds of rules for helping to ensure correct SBML
Unit checking & conversion
Well-tested
Core is written in portable C++
Runs on Linux, Mac, Windows
APIs for C, C++, C#, Java, Octave, Perl, Python, Ruby, MATLAB (some via SWIG)
Can use Expat, libxml2, or Xerces
Open-source under LGPL
Latest stable version: 5.0.0
http://sbml.org/Software/libSBML
Developed by Sarah Keating, Frank Bergmann, Ben Bornstein, Akiya Jouraku, & Mike Hucka, with substantial contributions from many other people
17. Current state of SBML specifications
Specification document available from
http://sbml.org/Documents
Newest: Level 3 Version 1 Core
• Oct. 2010
About SBML “Levels”:
• Levels help manage significant restructuring of SBML architecture
• Levels coexist
- E.g., Level 2 models will remain valid and exist for a long time
• A Level is not solely a vertical change (i.e., more features)—there is
horizontal change too (i.e., changes to existing elements)
18. What is the SBML community like today, and how did it get there?
19. What happened in the beginning?
Circa 2000: Hamid Bolouri contacted groups having relevant software tools
• Organized workshops & set goal: develop interoperability
• Funding from Japanese agency JST (via Hiroaki Kitano & John Doyle)
• 3 core developers worked on software infrastructure at Caltech
Early years: focus on software infrastructure (SBW)
• SBML was a component, but not the sole (or even primary) focus
Eventually: SBML turned out to be more popular
• 2 core developers remained (Finney & Hucka), focused on SBML
• More groups/software supported SBML
• Original dev. process was ad hoc, but involved constant feedback
- Hosted biannual workshops where intense discussions were held
20. What happened when SBML gained users?
Implemented editorial board
• Bootstrapped with heavily-involved people (Hucka, Finney, Le Novère)
- After that, turned to community-based elections
• Editors are volunteers, serve for limited terms
Implemented electronic polling for major decisions & voting
Continued biannual meetings
• Split into forum meetings and hackathons
Developed a somewhat more formal process
• http://sbml.org/Documents/SBML_Development_Process
21. SBML’s scope is widening to support more types of models
[Diagram: Package X, Package Y, and Package Z layered on top of SBML Level 3 Core]
SBML Level 3 is designed around the concept of modular additions
• A package adds constructs & capabilities
Models declare which packages they use
• Applications tell users which packages they support
Package development can be decoupled
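The "models declare which packages they use" mechanism works through namespace declarations and a `required` attribute on the root element. A minimal sketch follows; the `comp` URI is the hierarchical-composition package's published namespace, but treat this as an illustrative fragment rather than a complete package detector.

```python
# Sketch: discovering which Level 3 packages a model declares, by
# inspecting namespaced "required" attributes on the <sbml> element.
import xml.etree.ElementTree as ET

header = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      xmlns:comp="http://www.sbml.org/sbml/level3/version1/comp/version1"
      level="3" version="1" comp:required="true">
  <model id="m"/>
</sbml>"""

root = ET.fromstring(header)
# ElementTree expands prefixes, so package declarations appear as
# "{package-namespace-URI}required" attributes on the root element.
required = {k[1:k.index("}")]: v
            for k, v in root.attrib.items()
            if k.startswith("{") and k.endswith("}required")}
print(required)
# {'http://www.sbml.org/sbml/level3/version1/comp/version1': 'true'}
```

An application can compare the discovered namespaces against the packages it supports and tell the user which ones it cannot interpret.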
22. What’s happening now?
SBML Level 3 Package specification & software development is ongoing
Creation of COMBINE: Computational Modeling in Biology Network
• Goal: coordinate development of interoperable, non-overlapping
standards covering all aspects of modeling in biology
• http://co.mbine.org/
23. [Diagram: a grid covering Model, Procedures, and Results, each row needing a representation format (SBRML shown for results, with some cells still "?"), minimal information requirements, and semantic, mathematical, and other annotations]
Standards emerging for related but out-of-scope areas
24. Some lessons about what we think we got right
Start with actual stakeholders
• Address real needs, not perceived ones
Don’t include the kitchen sink
• Smaller & simpler means easier to understand, describe, implement
Provide transparent & inclusive process
• Critical to legitimacy—people must see their ideas being considered
Engage people, constantly, in many ways
• Not just electronic forums, email, etc., but face-to-face
• Not getting responses? Find a new approach!
Have independent leaders/organizers/shepherds
• Avoid the appearance of bias or agenda
25. Some lessons about what we definitely got wrong
Inadequate testing before freezing/releasing
Not managing complexity creep
• Feature changes between SBML versions make support harder
Not formalizing the process sufficiently
• Need “Requests for Comments” procedures, voting procedures, etc.
• Only put most of this in place in recent years
Underestimating how much time it takes to do everything
• Also: democratic, open processes move slowly
26. Roadmap
What is SBML?
What is the SBML community like today, and how did it get there?
Acknowledgments
27. People on SBML Team & BioModels.net Team
SBML Team: Michael Hucka, Sarah Keating, Frank Bergmann, Lucian Smith, Nicolas Rodriguez, Linda Taddeo, Akiya Jouraku, Akira Funahashi, Kimberley Begley, Bruce Shapiro, Andrew Finney, Ben Bornstein, Ben Kovitz, Hamid Bolouri, Herbert Sauro, Jo Matthews, Maria Schilstra
BioModels.net Team: Nicolas Le Novère, Camille Laibe, Nicolas Rodriguez, Nick Juty, Vijayalakshmi Chelliah, Michael Schubert, Lukas Endler, Chen Li, Harish Dharuri, Lu Li, Enuo He, Mélanie Courtot, Alexander Broicher, Arnaud Henry, Marco Donizelli
Visionaries: Hiroaki Kitano, John Doyle
28. National Institute of General Medical Sciences (USA)
European Molecular Biology Laboratory (EMBL)
ELIXIR (UK)
Beckman Institute, Caltech (USA)
Keio University (Japan)
JST ERATO Kitano Symbiotic Systems Project (Japan) (to 2003)
National Science Foundation (USA)
International Joint Research Program of NEDO (Japan)
JST ERATO-SORST Program (Japan)
Japanese Ministry of Agriculture
Japanese Ministry of Educ., Culture, Sports, Science and Tech.
BBSRC (UK)
DARPA IPTO Bio-SPICE Bio-Computation Program (USA)
Air Force Office of Scientific Research (USA)
STRI, University of Hertfordshire (UK)
Molecular Sciences Institute (USA)
Agencies to thank for supporting SBML & BioModels.net
29. Where to find out more
SBML http://sbml.org
libSBML & JSBML http://sbml.org/Software
BioModels Database http://biomodels.net/biomodels
MIRIAM http://biomodels.net/miriam
SED-ML http://biomodels.net/sed-ml
SBO http://biomodels.net/sbo
KiSAO http://www.ebi.ac.uk/compneur-srv/kisao/
TEDDY http://www.ebi.ac.uk/compneur-srv/teddy/
Thank you for your time!
31. Evolution of features took time & practical experience

Feature               | Level 1                  | Level 2                    | Level 3
Math functions        | predefined only          | user-defined functions     | user-defined functions
Math notation         | text-string              | MathML subset              | MathML subset
Annotation namespaces | reserved                 | not reserved               | not reserved
Annotation scheme     | none controlled          | RDF-based, controlled      | RDF-based, controlled
Discrete events       | no                       | yes                        | yes
Default values        | defined                  | defined                    | none
Architecture          | monolithic               | monolithic                 | modular
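The text-string vs. MathML contrast in the table can be made concrete. The MathML content-markup form (`apply`/`times`/`ci`) is what SBML Levels 2 and 3 use inside kinetic laws; the rate expression k1*A*B itself is an invented example.

```python
# The same rate expression in Level 1's text-string notation and in the
# MathML subset used from Level 2 onward, parsed with the stdlib.
import xml.etree.ElementTree as ET

level1_formula = "k1 * A * B"          # Level 1: a plain text string

mathml = """<math xmlns="http://www.w3.org/1998/Math/MathML">
  <apply><times/><ci> k1 </ci><ci> A </ci><ci> B </ci></apply>
</math>"""                             # Levels 2 & 3: structured MathML

root = ET.fromstring(mathml)
M = "{http://www.w3.org/1998/Math/MathML}"
# The structured form makes the referenced symbols trivially extractable,
# with no expression parser needed.
symbols = [ci.text.strip() for ci in root.iter(M + "ci")]
print(symbols)  # ['k1', 'A', 'B']
```

This is one reason the change was made: MathML is unambiguous to machine-process, whereas a text string needs its own grammar and parser in every tool.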
32. SBML Level 3 packages (✓ = active; libSBML 5 implementation status noted where known)
• Graph layout ✓
• Groups ✓
• Spatial ✓
• Flux balance constraints ✓
• Hierarchical composition ✓ (libSBML implementation in progress)
• Multicomponent species ✓
• Annotations ✓
• Graph rendering ✓
• Distributions & ranges ✓
• Qualitative models ✓
• Dynamic structures
• Arrays & sets
33. MIRIAM cross-references are simple triples
A model element references an entity, optionally with a relationship qualifier:
{ Data type identifier, Data item identifier, Annotation qualifier }
• Data type identifier (required): a URI chosen from an agreed-upon list
• Data item identifier (required): syntax & value space depend on the data type
• Annotation qualifier (optional): a controlled vocabulary term