Model-Driven Development in the context of Software Product Lines - Markus Voelter
Domain specific languages, together with code generation or interpreters (a.k.a. model-driven development), are becoming more and more important. Since there is a certain overhead involved in building languages and processors, this approach is especially useful in environments where a specific set of languages and generators can be reused many times. Product lines are such an environment. Consequently, the use of domain specific languages (DSLs) for Software Product Line Engineering (SPLE) is becoming more relevant. However, exploiting DSLs in the context of product lines involves more than just defining and using languages. This tutorial explains the differences as well as commonalities between model-driven development (MDD) and SPLE and shows how the two approaches can be combined. In this tutorial we will first recap/introduce feature modeling and model-driven development. We then build a simple textual DSL and a code generator based on Eclipse openArchitectureWare (oAW). Based on this language we’ll discuss the kinds of variability expressible via DSLs versus those expressible via feature modeling, leading to a discussion about ways to combine the two. In the next demo slot we’ll do just that: we’ll annotate a model with feature dependencies. When generating code, the elements whose features are not selected will be removed, and hence no code will be generated for them. Finally we’ll discuss and demo the integration of feature dependencies into code generators to configure the kind of code generated from the model.
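The demo's core idea, removing model elements whose features are not selected before generation, can be sketched in a few lines. This is a minimal illustration with invented names (ModelElement, prune_model, and the feature names are all hypothetical), not the oAW mechanism itself:

```python
# Hypothetical sketch: pruning DSL model elements by feature selection
# before code generation. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelElement:
    name: str
    # Feature this element depends on; None means always included.
    required_feature: Optional[str] = None

def prune_model(elements, selected_features):
    """Keep only elements whose feature dependency is satisfied."""
    return [e for e in elements
            if e.required_feature is None
            or e.required_feature in selected_features]

model = [
    ModelElement("Engine"),
    ModelElement("AirConditioning", required_feature="comfort"),
    ModelElement("CruiseControl", required_feature="comfort"),
    ModelElement("SportExhaust", required_feature="sport"),
]

# Configure the product by selecting features; pruned elements
# simply never reach the code generator.
configured = prune_model(model, selected_features={"sport"})
print([e.name for e in configured])  # ['Engine', 'SportExhaust']
```

The generator then runs unchanged over the pruned model, which is why no code is emitted for deselected elements.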
Close Encounters in MDD: when models meet code - lbergmans
Model-Driven Development (MDD) promises a number of advantages, which include the ability to work at higher abstraction levels, static reasoning about models, and generation of platform-specific code. To achieve this, a transformation-based approach is generally adopted, which generates code from models. In this presentation we discuss, in addition to the potential advantages, a number of possible misunderstandings and risks of MDD.
In particular, we address the risks of transformation-based software development, such as:
• It is rarely possible to generate the full functionality of a (sub-)system from models; as a result, it is necessary either to do additional ‘manual coding’, which is a challenge to integrate with the generated code, or to annotate the model with smaller or larger fragments of executable code. The latter has several restrictions and practical consequences: for instance, it mingles abstraction levels and reduces the maintainability of code and models.
• MDD is particularly effective when various different models can be used, each optimized for a specific domain. However, when using transformation techniques, the combination of multiple models in an integrated application is far from trivial.
In this talk we propose, as a low-threshold approach, ‘bottom-up’ model-driven development. This means that the focus on domain-specific abstractions remains, as does the separation of platform-specific and platform-independent software. This approach, which is related to Domain-Driven Design and domain-specific languages (DSLs), aims to exploit the advantages of modeling in terms of abstractions while reducing the gap between models and code. This can be achieved by specifying the models in code, while separating platform-specific code from the model code. An important issue is the capability to combine several different models without getting into technical difficulties: we discuss existing approaches as well as a novel one, entitled Co-op, which aims to address this problem.
Finally, we discuss how the presented approach fits with the ‘scalable design’ approach for developing software that is scalable with respect to evolving requirements.
This document describes a tutorial on advanced ATL techniques including model refactoring. The tutorial agenda includes an introduction to model transformations using ATL, exercises on model visualization, refactoring, and compiling a DSL, as well as discussions of ATL architecture and industrialization. The document provides examples of using ATL to transform a class diagram metamodel to a relational database metamodel.
Detecting Occurrences of Refactoring with Heuristic Search - Shinpei Hayashi
This document describes a technique for detecting refactorings between two versions of a program using heuristic search. Refactorings are detected by generating intermediate program states through applying refactorings, and finding a path from the original to modified program that minimizes differences. Structural differences are used to identify likely refactorings. Candidate refactorings are evaluated and applied to generate new states, with the search terminating when the state matches the modified program. A supporting tool was developed and a case study found the technique could correctly detect an actual series of refactorings between program versions.
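The search described above can be sketched abstractly: treat programs as states, apply candidate refactorings, and follow the path that minimizes structural difference to the modified version. The representation below (programs as sets of entity names, a rename refactoring) is invented purely for illustration and is far simpler than the actual technique:

```python
# Illustrative sketch with an invented program representation:
# detect a refactoring sequence by searching from the original
# program state toward the modified one, greedily minimizing
# structural difference at each step.

def diff(state_a, state_b):
    """Structural difference: size of the symmetric difference."""
    return len(state_a ^ state_b)

def rename(state, old, new):
    """A toy 'rename' refactoring over entity names."""
    return frozenset(e.replace(old, new) for e in state)

def detect_refactorings(original, modified, candidates, max_steps=10):
    state, path = original, []
    for _ in range(max_steps):
        if state == modified:
            return path
        # Evaluate each candidate refactoring; keep the one whose
        # resulting state is closest to the modified program.
        best = min(((r, args, r(state, *args)) for r, args in candidates),
                   key=lambda t: diff(t[2], modified))
        r, args, nxt = best
        if diff(nxt, modified) >= diff(state, modified):
            break  # no candidate improves; give up
        path.append((r.__name__, args))
        state = nxt
    return path if state == modified else None

original = frozenset({"Foo.run", "Foo.stop"})
modified = frozenset({"Bar.run", "Bar.stop"})
candidates = [(rename, ("Foo", "Bar")), (rename, ("run", "go"))]
print(detect_refactorings(original, modified, candidates))
# [('rename', ('Foo', 'Bar'))]
```

The search terminates when the intermediate state matches the modified program, mirroring the termination condition in the summary.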
This document presents an approach for reusing model transformations through the extraction of typing requirements models (TRMs) from transformations. TRMs characterize the minimal requirements that source and target meta-models must satisfy for a transformation to be well-typed. The approach extracts TRMs from ATL transformations through an attribute grammar. Meta-models can then be checked for conformance to the TRMs to determine if a transformation can be reused. An evaluation of the approach on four transformations showed that it achieved high precision and recall in checking over 2,000 mutated meta-models for conformance.
Sentence-to-Code Traceability Recovery with Domain Ontologies - Shinpei Hayashi
The document describes a technique for recovering traceability between natural language sentences and source code using domain ontologies. An automated tool was implemented and evaluated on a case study using the JDraw software. Results showed the technique worked well, recovering traceability between 7 sentences and code with higher accuracy than without using the ontology. The ontology helped improve recall and detect traceability in cases where word similarity alone did not work well. Future work is needed to evaluate on larger cases and domains.
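The role the ontology plays, recovering links where word similarity alone fails, can be shown with a toy word-overlap score. The ontology contents and scoring function below are invented for illustration, not the paper's actual technique:

```python
# Sketch with invented data: overlap between a sentence's words and a
# code entity's identifier words, optionally expanded with
# domain-ontology synonyms.

ontology = {  # hypothetical domain ontology: term -> related terms
    "drawing": {"canvas", "figure"},
    "save": {"store", "write"},
}

def expand(words):
    """Add ontology-related terms to a word set."""
    expanded = set(words)
    for w in words:
        expanded |= ontology.get(w, set())
    return expanded

def score(sentence_words, code_words, use_ontology=True):
    query = expand(sentence_words) if use_ontology else set(sentence_words)
    return len(query & set(code_words)) / len(code_words)

sentence = {"save", "the", "drawing"}
code = {"store", "canvas", "file"}  # identifier words from source code

print(score(sentence, code, use_ontology=False))  # 0.0 - no direct overlap
print(score(sentence, code, use_ontology=True))   # 2/3 - ontology recovers the link
```

Without the ontology the sentence and code share no words, so the link is missed; the ontology bridges the vocabulary gap, which is how it improves recall.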
The document discusses model-to-model (M2M) transformation using the ATLAS Transformation Language (ATL). It provides an overview of ATL, including its history, community resources, and programming style. The document describes how ATL transformations work, including the use of declarative rules to match source patterns and create target patterns, and imperative rules that can contain declarative patterns and action blocks. Examples of ATL rules and transformations are also presented.
Live model transformations driven by incremental pattern matching - Istvan Rath
Live model transformations can be driven incrementally by detecting changes to the matching set of patterns over the model. The VIATRA implementation uses RETE networks to efficiently maintain and update the matching sets when models change. This enables live transformations to respond instantly to modifications by mapping only the changes to the target model. Future work aims to improve performance further and enhance the language for debugging and static analysis of live transformations.
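The incremental core of this approach can be sketched in a few lines of Python. This is a deliberately simplified, invented API (a single two-way join for one fixed pattern), not VIATRA's RETE implementation:

```python
# Minimal sketch of the incremental idea behind RETE-based matching:
# the match set of a pattern is cached and updated from change
# notifications instead of being recomputed from scratch.

class IncrementalMatcher:
    """Caches matches of the toy pattern edge(x, y) where y is tagged."""

    def __init__(self):
        self.tagged = set()      # memory for the "tagged node" pattern
        self.edges = set()       # memory for the "edge" pattern
        self.matches = set()     # join node: cached match set

    def add_tag(self, node):
        self.tagged.add(node)
        # Propagate only the delta: just edges touching `node`.
        self.matches |= {(s, t) for (s, t) in self.edges if t == node}

    def add_edge(self, s, t):
        self.edges.add((s, t))
        if t in self.tagged:
            self.matches.add((s, t))

    def remove_edge(self, s, t):
        self.edges.discard((s, t))
        self.matches.discard((s, t))

m = IncrementalMatcher()
m.add_edge("a", "b")
m.add_tag("b")            # delta update, no full re-match
m.add_edge("c", "b")
print(sorted(m.matches))  # [('a', 'b'), ('c', 'b')]
m.remove_edge("a", "b")
print(sorted(m.matches))  # [('c', 'b')]
```

Because the match set is always up to date, a live transformation can react to each model change immediately by mapping only the delta to the target model.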
“High performance model queries and their novel applications” discusses model query performance for large models. Benchmark results show that model size affects query response times polynomially, with incremental engines achieving lower exponents. Query complexity also significantly impacts performance, with RETE-based tools like EMF-IncQuery performing well regardless of complexity. EMF-IncQuery is presented as an optimized model query engine for incremental queries, enabling on-the-fly validation over large models.
Event-driven Model Transformations in Domain-specific Modeling Languages - Istvan Rath
This PhD thesis by István Ráth focuses on event-driven model transformations in domain-specific modeling languages. The thesis contains 3 parts: 1) developing concepts for event-driven graph transformations based on incremental pattern matching, 2) applying these concepts to provide advanced language engineering features like simulation, and 3) integrating modeling tools using change-driven transformations. The research aims to address challenges in scalability, usability and tool integration for model-driven software engineering.
DLF Fall 2012: Institutional OA Policy Implementation: The Joys and Challenges - Lisa Schiff
This document discusses the implementation of an open access policy at the University of California, San Francisco (UCSF). It provides details on the terms of the UCSF open access policy, including that faculty grant a nonexclusive license to the university to distribute their scholarly articles and are required to provide an electronic copy of the final published article. It also outlines challenges in implementing the policy at UCSF and across the UC system, such as engaging with publishers, tracking waivers and embargoes, developing a harvesting solution, and ensuring an easy deposit process for faculty. Lastly, it discusses next steps for the implementation, including establishing workflows for manual deposit, evaluating harvesting options, and providing a streamlined waiver and embargo request system.
Council minutes (actas de cabildo) of the municipality of Elota: administrative procedures, programs, support schemes, audits, accounts, legislation, public tenders, the municipal plan, government priorities, etc.
Presentation of use cases for using ORCID with eScholarship and other services/applications from the California Digital Library at the University of California.
A benchmark evaluation for incremental pattern matching in graph transformation - Istvan Rath
This document summarizes a presentation given at the Budapest University of Technology and Economics about benchmarking incremental pattern matching in graph transformation. It describes two case studies used as benchmarks: model simulation using petri nets and object-relational mapping for synchronization. Performance results are presented showing that the Rete algorithm provides predictable linear scaling for incremental pattern matching in practical problems.
Open Access and the Humanities at the California Digital Library and Beyond, ... - Lisa Schiff
This presentation covers three topics:
1. The risks and opportunities of open access for the humanities, including how we can address some of the significant challenges that open access is bringing to the forefront of scholarly communications.
2. The projects and services the CDL has engaged in to support humanities scholarship within an open access environment.
3. An overview of the UC Faculty’s Open Access policy and the underlying infrastructure that the library system, at the CDL and on the campuses, is providing to support faculty in enacting the policy.
This document presents the minutes of the fifth council session of the Ayuntamiento de Elota, Sinaloa, Mexico. Several items related to municipal culture were discussed and approved, including the formation of a Consejo Ciudadano de Cultura (citizens' culture council), the authorization to sign an agreement with the Consejo Nacional para la Cultura y las Artes to participate in a state trust fund for cultural projects, and the investment of 100,000 Mexican pesos in those projects.
Model transformations in the VIATRA2 framework - Istvan Rath
This document discusses model transformations and their use in tool integration. It presents the VIATRA2 transformation framework, which allows modeling languages and tools to be connected through model transformations. Model transformations help address the major challenges of tool integration, including different modeling languages and continuous tool evolution. They enable moving models between tools while refining and synchronizing the models. This allows performing analyses like mathematical analysis on refined models and generating code or deployment descriptions to implement systems.
Incremental pattern matching in the VIATRA2 model transformation framework - Istvan Rath
This document discusses incremental pattern matching in the VIATRA2 model transformation framework. It introduces incremental pattern matching using the RETE algorithm as implemented in VIATRA2. The RETE algorithm caches pattern matches and incrementally updates them as the model changes. This allows pattern matching to be performed incrementally for efficient model transformations on evolving models. The document outlines how RETE networks are constructed from patterns and how they are updated based on model changes notified through the VIATRA framework. Initial performance analysis is discussed to compare incremental versus local search approaches.
This document provides an overview of a tutorial on EMF-IncQuery, an incremental query engine for EMF models. It discusses the motivation for model queries and issues with existing solutions. The tutorial will cover the EMF-IncQuery technology, including hands-on examples of basic and advanced model queries. It will conclude with performance benchmarks and a question and answer section. Attendees will learn how to define and execute queries over EMF models incrementally for improved performance with complex queries over large models.
EMF-IncQuery 0.7 Presentation for Itemis - Istvan Rath
The document introduces EMF-INCQUERY, a model query engine for Eclipse Modeling Framework (EMF) models. It provides an expressive graph pattern query language and incremental query evaluation based on the Rete algorithm. This enables efficient complex queries over large models. EMF-INCQUERY addresses performance issues of model queries in modeling tools and simplifies writing complex queries through reusable query libraries and pattern composition. It integrates with EMF-based applications and provides features like on-the-fly validation and view maintenance.
Efficient Validation of Large Models using the Mogwaï Tool - Gwendal Daniel
Scalable model persistence frameworks have been proposed to handle the large (potentially generated) models involved in current industrial processes. They usually rely on databases to store and access the underlying models, and provide a lazy-loading strategy that aims to reduce the memory footprint of model navigation and manipulation. Dedicated query and transformation solutions have been proposed to further improve performance by generating native database queries that leverage the backend’s advanced capabilities. However, existing solutions are not designed to specifically target the validation of a set of constraints over large models. They usually rely on low-level modeling APIs to retrieve the model elements to validate, limiting the benefits of computing native database queries. In this paper we present an extension of the Mogwaï query engine that aims to handle large model validation efficiently. We show how model constraints are pre-processed and translated into database queries, and how the validation of the model can benefit from the underlying database optimizations. Our approach is released as a set of open source Eclipse plugins and is fully available online.
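The contrast the paper draws can be illustrated with a toy sketch. Everything below (the store layout, the constraint, the two validation styles) is invented for illustration and is not the Mogwaï API; it only shows why pushing a constraint down as one query beats fetching elements one by one:

```python
# Conceptual sketch: element-by-element validation through a modeling
# API versus pushing the whole constraint down as a single query the
# store can optimize. The data and constraint are invented.

# A toy model store: class name -> list of attribute dicts.
store = {
    "Employee": [
        {"name": "Ada", "age": 36},
        {"name": "Bob", "age": -2},   # violates the constraint
    ],
}

# Constraint (OCL flavour): context Employee inv: self.age >= 0

def validate_api_style(store):
    # Low-level style: fetch every element, check in application code.
    return [e for e in store["Employee"] if not (e["age"] >= 0)]

def validate_query_style(store):
    # Query style: the negated constraint becomes one filter that a
    # real backend could evaluate natively (simulated here in Python).
    violates = lambda e: e["age"] < 0
    return list(filter(violates, store["Employee"]))

print(validate_api_style(store) == validate_query_style(store))  # True
```

Both styles find the same violations; the difference is where the work happens, and a database backend can index, parallelize, and optimize the query form.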
Tensors Are All You Need: Faster Inference with Hummingbird - Databricks
The ever-increasing interest around deep learning and neural networks has led to a vast increase in processing frameworks like TensorFlow and PyTorch. These libraries are built around the idea of a computational graph that models the dataflow of individual units. Because tensors are their basic computational unit, these frameworks can run efficiently on hardware accelerators (e.g. GPUs). Traditional machine learning (ML) models such as linear regression and decision trees in scikit-learn cannot currently be run on GPUs, missing out on the potential accelerations that deep learning and neural networks enjoy.
In this talk, we’ll show how you can use Hummingbird to achieve a 1000x speedup in inference on GPUs by converting your traditional ML models to tensor-based models (PyTorch and TVM). https://github.com/microsoft/hummingbird
This talk is for intermediate audiences that use traditional machine learning and want to speedup the time it takes to perform inference with these models. After watching the talk, the audience should be able to use ~5 lines of code to convert their traditional models to tensor-based models to be able to try them out on GPUs.
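The core trick, turning a decision tree into matrix operations, can be illustrated in plain Python. The tree, matrices, and encoding below are invented for illustration (a simplified variant of Hummingbird's "GEMM" compilation idea); the real library emits PyTorch/TVM tensors and handles arbitrary models:

```python
# Pure-Python sketch of compiling a decision tree into matrix math:
# prediction becomes matrix products plus comparisons, which is
# exactly the kind of workload accelerators are good at.

def matvec(x, M):
    """Row vector x times matrix M (lists of lists)."""
    return [sum(x[i] * M[i][j] for i in range(len(x)))
            for j in range(len(M[0]))]

# Toy tree: if x0 < 2: (leaf0 if x1 < 1 else leaf1)
#           else:      (leaf2 if x1 < 3 else leaf3)
A = [[1, 0, 0],          # which feature each internal node tests
     [0, 1, 1]]
T = [2, 1, 3]            # node thresholds
# Path matrix rows: n0/n1/n2 must be True, then n0/n1/n2 must be False;
# column j encodes the decisions required to reach leaf j.
C = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0],
     [0, 0, 1, 1], [0, 1, 0, 0], [0, 0, 0, 1]]
PATH_LEN = 2             # every root-to-leaf path has 2 decisions
LEAF_VALUES = [10, 20, 30, 40]

def predict(x):
    d = [int(p < t) for p, t in zip(matvec(x, A), T)]  # node decisions
    scores = matvec(d + [1 - v for v in d], C)         # per-leaf score
    leaf = scores.index(PATH_LEN)  # leaf whose path fully matches
    return LEAF_VALUES[leaf]

print(predict([1.0, 2.5]))  # 20  (x0 < 2, x1 >= 1 -> second leaf)
```

Once the tree is expressed this way, batches of inputs become one big matrix multiply, which is where the GPU speedups come from.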
Outline:
Introduction of what ML inference is (and why it’s different than training)
Motivation: Tensor-based DNN frameworks allow inference on GPU, but “traditional” ML frameworks do not
Why “traditional” ML methods are important
Introduction of what Hummingbird does and its main benefits
Deep dive on how traditional ML models are built
Brief intro on how the Hummingbird converter works
Example of how Hummingbird can convert a tree model into a tensor-based model
Other models
Demo
Status
Q&A
NIMBLE is a modeling language and algorithm programming framework for Bayesian and likelihood-based statistical analysis. It allows users to write custom algorithms that operate on statistical models specified in the BUGS language. The framework processes BUGS models to extract relationships, builds a graphical model object, generates C++ code, and provides interfaces to compiled algorithm functions. This allows for flexible and distributed development of advanced Bayesian computational methods.
Live model transformations driven by incremental pattern matchingIstvan Rath
Live model transformations can be driven incrementally by detecting changes to the matching set of patterns over the model. The VIATRA implementation uses RETE networks to efficiently maintain and update the matching sets when models change. This enables live transformations to respond instantly to modifications by mapping only the changes to the target model. Future work aims to improve performance further and enhance the language for debugging and static analysis of live transformations.
High performance model queries and their novel applications discusses model query performance for large models. Benchmark results show that model size affects query response times polynomially, with incremental engines achieving lower exponents. Query complexity also significantly impacts performance, with RETE-based tools like EMF-IncQuery performing well regardless of complexity. EMF-IncQuery is presented as an optimized model query engine for incremental queries, enabling on-the-fly validation over large models.
Event-driven Model Transformations in Domain-specific Modeling LanguagesIstvan Rath
This PhD thesis by István Ráth focuses on event-driven model transformations in domain-specific modeling languages. The thesis contains 3 parts: 1) developing concepts for event-driven graph transformations based on incremental pattern matching, 2) applying these concepts to provide advanced language engineering features like simulation, and 3) integrating modeling tools using change-driven transformations. The research aims to address challenges in scalability, usability and tool integration for model-driven software engineering.
DLF Fall 2012: Institutional OA Policy Implementation: The Joys and ChallengesLisa Schiff
This document discusses the implementation of an open access policy at the University of California, San Francisco (UCSF). It provides details on the terms of the UCSF open access policy, including that faculty grant a nonexclusive license to the university to distribute their scholarly articles and are required to provide an electronic copy of the final published article. It also outlines challenges in implementing the policy at UCSF and across the UC system, such as engaging with publishers, tracking waivers and embargoes, developing a harvesting solution, and ensuring an easy deposit process for faculty. Lastly, it discusses next steps for the implementation, including establishing workflows for manual deposit, evaluating harvesting options, and providing a streamlined waiver and embargo request system.
Actas de cabildo del municipio de Elota, gestiones, programas, apoyos, auditorias cuentas, legislaciones, licitaciones, plan municipal, eje de gobierno etc.
Actas de cabildo del municipio de Elota, gestiones, programas, apoyos, auditorias cuentas, legislaciones, licitaciones, plan municipal, eje de gobierno etc.
Presentation of use cases for using ORCID with eScholarship and other services/applications from the California Digital Library at the University of California.
A benchmark evaluation for incremental pattern matching in graph transformationIstvan Rath
This document summarizes a presentation given at the Budapest University of Technology and Economics about benchmarking incremental pattern matching in graph transformation. It describes two case studies used as benchmarks: model simulation using petri nets and object-relational mapping for synchronization. Performance results are presented showing that the Rete algorithm provides predictable linear scaling for incremental pattern matching in practical problems.
Open Access and the Humanities at the California Digital Library and Beyond, ...Lisa Schiff
This presentation covers three topics:
1. The risks and opportunities of open access for the humanities, including how we can attack some of the significant challenges open access is bringing to the forefront of scholarly communications.
2. The projects and services the CDL has engaged in in support of humanities scholarship within an open access
environment
3. An overview of the UC Faculty’s Open Access policy and the underlying infrastructure the library system at the CDL and the campuses are providing them to support their enactment of their policy.
Este documento presenta el acta de la quinta sesión de cabildo del Ayuntamiento de Elota, Sinaloa, México. Se discutieron y aprobaron varios puntos relacionados con la cultura municipal, incluyendo la integración de un Consejo Ciudadano de Cultura y la autorización para firmar un convenio con el Consejo Nacional para la Cultura y las Artes para participar en un fideicomiso estatal para proyectos culturales, así como la inversión de $100,000 pesos mexicanos en dichos proyectos.
Model transformations in the VIATRA2 frameworkIstvan Rath
This document discusses model transformations and their use in tool integration. It presents the VIATRA2 transformation framework, which allows modeling languages and tools to be connected through model transformations. Model transformations help address the major challenges of tool integration, including different modeling languages and continuous tool evolution. They enable moving models between tools while refining and synchronizing the models. This allows performing analyses like mathematical analysis on refined models and generating code or deployment descriptions to implement systems.
Incremental pattern matching in the VIATRA2 model transformation frameworkIstvan Rath
This document discusses incremental pattern matching in the VIATRA2 model transformation framework. It introduces incremental pattern matching using the RETE algorithm as implemented in VIATRA2. The RETE algorithm caches pattern matches and incrementally updates them as the model changes. This allows pattern matching to be performed incrementally for efficient model transformations on evolving models. The document outlines how RETE networks are constructed from patterns and how they are updated based on model changes notified through the VIATRA framework. Initial performance analysis is discussed to compare incremental versus local search approaches.
This document provides an overview of a tutorial on EMF-IncQuery, an incremental query engine for EMF models. It discusses the motivation for model queries and issues with existing solutions. The tutorial will cover the EMF-IncQuery technology, including hands-on examples of basic and advanced model queries. It will conclude with performance benchmarks and a question and answer section. Attendees will learn how to define and execute queries over EMF models incrementally for improved performance with complex queries over large models.
Actas de cabildo del municipio de Elota, gestiones, programas, apoyos, auditorias cuentas, legislaciones, licitaciones, plan municipal, eje de gobierno etc.
EMF-IncQuery 0.7 Presentation for ItemisIstvan Rath
The document introduces EMF-INCQUERY, a model query engine for Eclipse Modeling Framework (EMF) models. It provides an expressive graph pattern query language and incremental query evaluation based on the Rete algorithm. This enables efficient complex queries over large models. EMF-INCQUERY addresses performance issues of model queries in modeling tools and simplifies writing complex queries through reusable query libraries and pattern composition. It integrates with EMF-based applications and provides features like on-the-fly validation and view maintenance.
Efficient Validation of Large Models using the Mogwaï Tool (Gwendal Daniel)
Scalable model persistence frameworks have been proposed to handle the large (potentially generated) models involved in current industrial processes. They usually rely on databases to store and access the underlying models, and provide a lazy-loading strategy that aims to reduce the memory footprint of model navigation and manipulation. Dedicated query and transformation solutions have been proposed to further improve performance by generating native database queries that leverage the backend's advanced capabilities. However, existing solutions are not designed to specifically target the validation of a set of constraints over large models: they usually rely on low-level modeling APIs to retrieve the model elements to validate, limiting the benefits of computing native database queries. In this paper we present an extension of the Mogwaï query engine that aims to handle large model validation efficiently. We show how model constraints are pre-processed and translated into database queries, and how the validation of the model can benefit from the underlying database optimizations. Our approach is released as a set of open source Eclipse plugins and is fully available online.
Tensors Are All You Need: Faster Inference with Hummingbird (Databricks)
The ever-increasing interest in deep learning and neural networks has led to a proliferation of processing frameworks like TensorFlow and PyTorch. These libraries are built around the idea of a computational graph that models the dataflow of individual units. Because tensors are their basic computational unit, these frameworks can run efficiently on hardware accelerators (e.g. GPUs). Traditional machine learning (ML) models such as linear regressions and decision trees in scikit-learn cannot currently run on GPUs, missing out on the accelerations that deep learning and neural networks enjoy.
In this talk, we'll show how you can use Hummingbird to achieve a 1000x speedup in inference on GPUs by converting your traditional ML models to tensor-based models (PyTorch and TVM). https://github.com/microsoft/hummingbird
This talk is for intermediate audiences that use traditional machine learning and want to speedup the time it takes to perform inference with these models. After watching the talk, the audience should be able to use ~5 lines of code to convert their traditional models to tensor-based models to be able to try them out on GPUs.
Outline:
Introduction of what ML inference is (and why it’s different than training)
Motivation: Tensor-based DNN frameworks allow inference on GPU, but “traditional” ML frameworks do not
Why “traditional” ML methods are important
Introduction to what Hummingbird does and its main benefits
Deep dive on how traditional ML models are built
Brief intro on how the Hummingbird converter works
Example of how Hummingbird can convert a tree model into a tensor-based model
Other models
Demo
Status
Q&A
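The tree-to-tensor conversion step in the outline above can be sketched in plain Python. This is only an illustration of the idea, not the real Hummingbird compiler (which is far more general); the matrix layout and names here are assumptions.

```python
# Illustrative sketch: a decision tree evaluated as dense matrix operations,
# so the same computation could run on tensor hardware.

def matmul(A, x):
    # plain-Python stand-in for a GEMM kernel
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Toy tree over 2 features:
#   if f0 < 5:  predict 0
#   else: if f1 < 3: predict 1 else predict 2
# Internal nodes become a feature-selection matrix and a threshold vector:
A = [[1, 0],   # node 0 reads feature 0
     [0, 1]]   # node 1 reads feature 1
thresholds = [5, 3]

def predict(x):
    # Step 1: evaluate every internal-node test at once (a GEMM on a GPU).
    gathered = matmul(A, x)
    decisions = [1 if g < t else 0 for g, t in zip(gathered, thresholds)]
    # Step 2: map the decision bit-vector to a leaf. In the GEMM strategy
    # this is another matrix product; a lookup keeps the sketch short.
    paths = {(1, 1): 0, (1, 0): 0, (0, 1): 1, (0, 0): 2}
    return paths[tuple(decisions)]
```

The point is that control flow (branching down the tree) has been replaced by data-parallel arithmetic, which is exactly what accelerators are good at.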
NIMBLE is a modeling language and algorithm programming framework for Bayesian and likelihood-based statistical analysis. It allows users to write custom algorithms that operate on statistical models specified in the BUGS language. The framework processes BUGS models to extract relationships, builds a graphical model object, generates C++ code, and provides interfaces to compiled algorithm functions. This allows for flexible and distributed development of advanced Bayesian computational methods.
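The model-processing step described above can be sketched as a toy dependency extractor for BUGS-style declarations. The parsing is deliberately naive and the function names are hypothetical, not the NIMBLE API.

```python
# Extract the stochastic dependency graph from `node ~ dist(arg, arg)` lines.
import re

def build_graph(model_lines, known_nodes):
    parents = {}
    for line in model_lines:
        target, dist_call = [s.strip() for s in line.split("~")]
        # drop the distribution name, keep the arguments
        args = re.findall(r"\w+", dist_call)[1:]
        # an argument that names another node is a parent edge
        parents[target] = [a for a in args if a in known_nodes]
    return parents

model = [
    "mu ~ dnorm(0, 0.001)",
    "tau ~ dgamma(0.1, 0.1)",
    "y ~ dnorm(mu, tau)",
]
graph = build_graph(model, known_nodes={"mu", "tau", "y"})
```

A graphical model object built from `graph` is what algorithm code would then traverse; NIMBLE additionally generates C++ from such structures.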
The method identifies likely refactored code by comparing call trees generated from execution traces of two program revisions. It labels pairs of nodes as likely refactored if their contexts are equal and contents similar. A difference call graph is generated by extracting and merging subtrees of the call trees. The method was applied to an open source program, identifying differences within hundreds of lines across five source files at a high level.
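A minimal sketch of the labeling rule described above, assuming a call-tree node is reduced to (context, content), where context is the chain of callers and content is the sequence of calls the node makes. The threshold and similarity measure are illustrative choices, not those of the paper.

```python
# Label a pair of nodes from two revisions as "likely refactored" when their
# contexts are equal and their contents are similar but not identical.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def likely_refactored(old_node, new_node, threshold=0.6):
    old_ctx, old_content = old_node
    new_ctx, new_content = new_node
    return (old_ctx == new_ctx
            and old_content != new_content      # identical means unchanged
            and similarity(old_content, new_content) >= threshold)

old = (("main", "run"), ["parse", "validate", "emit"])
new = (("main", "run"), ["parse", "validate", "emit_fast"])
```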
Close encounters in MDD: when Models meet Code (lbergmans)
Model-Driven Development (MDD) promises a number of advantages, which include the ability to work at higher abstraction levels, static reasoning about models, and generation of platform-specific code. To achieve this, generally a transformation-based approach is adopted, which generates code from models. In this presentation we discuss –in addition to the potential advantages– a number of possible misunderstandings and risks of MDD.
In particular, we address the risks of transformation-based software development, such as:
• It is rarely possible to generate the full functionality of a (sub-)system from models. As a result, it is necessary either to do additional ‘manual coding’ –a challenge to integrate with the generated code– or to annotate the model with smaller or larger fragments of executable code, which has several restrictions and practical consequences: for instance, it mingles abstraction levels and reduces the maintainability of code and models.
• MDD is particularly effective when various different models can be used, each optimized for a specific domain. However, when using transformation techniques, the combination of multiple models in an integrated application is far from trivial.
In this talk we propose –as a low-threshold approach– ‘bottom-up’ model-driven development. This retains the focus on domain-specific abstractions, as well as the separation of platform-specific and platform-independent software. The approach, which is related to Domain-Driven Design and domain-specific languages (DSLs), aims to exploit the advantages of modeling in terms of abstractions while reducing the gap between models and code. This can be achieved by specifying the models in code, while separating platform-specific code from the model code. An important issue is the capability to combine several different models without running into technical difficulties: we discuss existing approaches as well as a novel one, entitled Co-op, that aim to address this problem.
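The "models in code" idea can be sketched as follows: the model is plain, platform-independent code, and each platform-specific mapping lives in its own generator. All names are illustrative.

```python
# Platform-independent model element, expressed directly in code.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    fields: dict            # field name -> type name

def to_sql(entity):         # platform-specific mapping #1
    cols = ", ".join(f"{n} {t.upper()}" for n, t in entity.fields.items())
    return f"CREATE TABLE {entity.name} ({cols});"

def to_java(entity):        # platform-specific mapping #2
    decls = "".join(f"  {t} {n};\n" for n, t in entity.fields.items())
    return f"class {entity.name} {{\n{decls}}}"

customer = Entity("Customer", {"id": "int", "name": "text"})
```

Because the model lives in the host language, combining several models is ordinary composition of code rather than a tool-chain integration problem.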
Clipper is a low-latency online prediction serving system that aims to unify prediction serving approaches. It addresses the challenges of supporting diverse machine learning models and frameworks in a production environment. Clipper decouples models from applications through a common interface; models run in isolated Docker containers. Clipper optimizes latency and throughput using cross-framework caching, batching, and adaptive batch sizing. It also supports model selection and composition for improved accuracy. Clipper makes it easier to deploy and serve models from frameworks like TensorFlow, Caffe, and Spark for low-latency prediction serving.
Over time, machine learning inference workloads have become increasingly demanding in terms of latency and throughput. Moreover, many inference workloads compute predictions using a limited number of models deployed in the system. This scenario leaves substantial room for runtime and memory optimizations, which current systems fail to exploit because they treat ML models and tasks as black boxes.
Pretzel, by contrast, adopts a white-box description of ML models, which allows the framework to perform optimizations across deployed models and running tasks, saving memory and increasing overall system performance. In particular, Pretzel can properly schedule ML jobs on NUMA machines, whose memory architecture can significantly affect latency and efficiency.
In this talk we will show the motivations behind Pretzel, its current design and possible future developments.
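The white-box advantage can be illustrated with a toy sketch (not Pretzel's actual design): when the serving system can see that two deployed pipelines share a stage, it computes that stage once instead of once per model.

```python
# Two pipelines share an expensive featurization stage; a cache across
# pipelines makes it run once per input. Names are illustrative.
from functools import lru_cache

CALLS = {"featurize": 0}

@lru_cache(maxsize=None)
def featurize(text):
    CALLS["featurize"] += 1          # counted only on a cache miss
    return tuple(sorted(text.split()))

def model_a(text):
    return len(featurize(text))      # pipeline A: shared stage + head A

def model_b(text):
    return featurize(text)[0]        # pipeline B: shared stage + head B

model_a("the quick fox")
model_b("the quick fox")             # shared stage does not run again
```

A black-box server, seeing only two opaque models, would have executed the featurizer twice.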
Why is DevOps for machine learning so different? (Ryan Dawson)
DevOps instincts tend to be shaped by what has worked well before. Instincts derived from mainstream software development projects get challenged when we turn to enabling machine learning projects. The key reasons are that the development/delivery workflow is different and the kind of software artefacts involved are different. We will explore the differences and look at emerging open source projects in order to appreciate why the DevOps for machine learning space is growing and the needs that it addresses.
Quick fix for domain-specific modeling languages using the VIATRA framework (http://viatra.inf.mit.bme.hu/) presented at the VL/HCC 2011 conference (http://www.cs.cmu.edu/~vlhcc2011/)
The document discusses the challenges of migrating a production pipeline from a legacy Big Data platform to Spark. It presents an approach using CyFlow, a framework built on Spark that allows component reuse and defines dependencies through a directed acyclic graph (DAG). Key challenges addressed include maintaining semantics during code conversion, meeting real-time constraints, and reducing costs. Metrics for validation include Jaccard similarity and precision/recall. Performance is tuned by aggregating state, modifying partitions, caching data, and unpersisting unneeded dataframes.
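The validation metrics mentioned above (Jaccard similarity and precision/recall) can be sketched as a comparison between the legacy pipeline's output records and the migrated Spark pipeline's:

```python
# Compare two output record sets to check that semantics were preserved
# during migration. Record IDs here are illustrative.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def precision_recall(reference, candidate):
    reference, candidate = set(reference), set(candidate)
    tp = len(reference & candidate)
    precision = tp / len(candidate) if candidate else 1.0
    recall = tp / len(reference) if reference else 1.0
    return precision, recall

legacy = {"r1", "r2", "r3", "r4"}   # records from the legacy pipeline
spark = {"r1", "r2", "r3", "r5"}    # records from the Spark port
```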
The Use of Development History in Software Refactoring Using a Multi-Objectiv... (Ali Ouni)
The document presents a multi-objective approach to automate software refactoring using evolutionary algorithms. It formulates refactoring as a multi-objective optimization problem to improve code quality, preserve semantics, and maximize reuse of past development history. An evaluation on two open source projects shows the approach corrects most defects while maintaining high refactoring precision compared to existing techniques. Future work includes leveraging refactoring histories from multiple systems and improving context-based similarity measures.
Computational Approaches to Systems Biology (Mike Hucka)
Presentation given at the Sydney Computational Biologists meetup on 21 August 2013 (http://australianbioinformatics.net/past-events/2013/8/21/computational-approaches-to-systems-biology.html).
This document discusses the Open Modelling Interface and Environment (OpenMI), which provides a standard for linking hydrological models and exchanging data between them. OpenMI allows models running on different time steps or scales to be integrated. It uses a request-reply mechanism to pass data between models at runtime. Existing models can be migrated to the OpenMI standard by wrapping their engine code. The key benefits of OpenMI are that it facilitates integrated modelling and leverages existing models, while its main drawback is the work required to migrate models to the standard.
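The request-reply mechanism can be sketched with two toy models on different time steps; the provider interpolates between its own steps to answer the consumer's request. Class and method names are hypothetical, not the OpenMI API.

```python
# A rainfall model on a 60-minute step answers requests from a runoff
# model stepping every 15 minutes.

class RainfallModel:                       # provider
    def __init__(self):
        self.series = {0: 2.0, 60: 4.0, 120: 3.0}   # minute -> mm/h

    def get_values(self, t):
        if t in self.series:
            return self.series[t]
        t0 = max(k for k in self.series if k < t)
        t1 = min(k for k in self.series if k > t)
        w = (t - t0) / (t1 - t0)           # linear interpolation in time
        return (1 - w) * self.series[t0] + w * self.series[t1]

class RunoffModel:                         # consumer
    def __init__(self, provider):
        self.provider = provider

    def step(self, t):
        rain = self.provider.get_values(t) # request-reply at runtime
        return 0.5 * rain                  # toy runoff coefficient

runoff = RunoffModel(RainfallModel())
```

Migrating an existing model to such a standard means wrapping its engine so it can answer `get_values`-style requests, which is the effort the document identifies as the main drawback.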
Top 40 C Programming Interview Questions (Simplilearn)
This video by Simplilearn covers the top 40 C programming interview questions, spanning beginner-level, intermediate-level, and advanced-level questions. It includes the basic questions every candidate is asked to test their programming skills, questions that have become essential to crack in interviews across the current IT industry.
Beginner-level
✅00:00-What are the features of the C programming language?
✅02:03-Mention the dynamic memory allocation functions
✅03:20-What is the use of pointer variables in C, and what is a dangling pointer?
✅03:59-What is the use of break control statements?
✅04:30-What is a predefined function in C?
✅04:56-What is the use of header files in C?
✅05:47-What is a memory leak?
Intermediate-level
✅06:04-Differentiate between call by value and call by reference.
✅06:40-What is the difference between a compiler and an interpreter?
✅07:16-What is typecasting?
✅07:40-What is the use of the sizeof operator in C?
✅08:25-Write a C program to print the following pattern
✅10:34-Write C code to swap two numbers without using a third variable
Advanced-level
✅12:51-What is a union?
✅13:37-What is recursion?
✅13:47-What are macros in C?
✅14:30-Write the difference between macros and functions.
✅15:00-Sort an array using the quicksort algorithm
✅19:26-Write C code to find the Fibonacci series.
✅23:02-Implement a program to find the height of a binary tree.
✅26:14-Implement a C program to display a string in reverse order.
✅30:35-Implement a program to add a node at the beginning, end, and a specified position in a linked list.
🔥 Learn Advanced C++ Course Online And Get a Completion Certificate: https://www.simplilearn.com/advanced-...
🔥Explore Our Free Courses With a Completion Certificate by SkillUp: https://www.simplilearn.com/skillup-f...
✅Subscribe to our Channel to learn more about the top Technologies: https://bit.ly/2VT4WtH
⏩ Check out C++ Training videos: https://youtube.com/playlist?list=PLE...
Thesis Defense (Gwendal Daniel) - Nov 2017
This document summarizes Gwendal Daniel's PhD thesis on efficient persistence, querying, and transformation of large models. It presents four main contributions:
1. NeoEMF, a scalable model persistence framework that allows storing models across multiple databases for improved performance and memory usage.
2. PrefetchML, a model prefetching and caching component that uses declarative rules to efficiently load related model elements from the database.
3. Mogwaï, an approach to generate efficient graph database queries from OCL expressions to compute model queries without overhead from modeling frameworks.
4. Gremlin-ATL, an extension of Mogwaï to generate Gremlin traversals from ATL transformations.
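The query-generation idea behind Mogwaï and Gremlin-ATL can be sketched as a translation from a tiny OCL-like expression to a Gremlin-style traversal string. The grammar handled here is deliberately minimal and hypothetical; the real engines cover full OCL/ATL.

```python
# Translate `self.<ref>->select(v | v.<attr> > <n>)` into a Gremlin-style
# traversal so the query runs inside the database, not in the modeling
# framework.
import re

def ocl_to_gremlin(expr):
    m = re.fullmatch(
        r"self\.(\w+)->select\(\w+ \| \w+\.(\w+) > (\d+)\)", expr)
    ref, attr, value = m.groups()
    return f"g.V(self).out('{ref}').has('{attr}', gt({value}))"

query = ocl_to_gremlin("self.ownedElements->select(e | e.size > 10)")
```

Pushing the filter into the traversal lets the graph database use its indexes instead of materializing every `ownedElements` target in memory first.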
This document provides an overview of optimizing stored procedure performance in SQL Server 2000. It discusses the initial processing of stored procedures, including resolution, compilation, optimization, and execution. It covers issues that can cause recompilation of stored procedures and different options for handling recompilation. The document also provides best practices for naming conventions, writing solid code to avoid excessive recompilations, and detecting recompilations, and it recommends testing recompilation behavior and using modular code and statement-level recompilation where appropriate.
The document discusses using Model Driven Architecture (MDA) to reengineer legacy software systems in a more automated way compared to traditional reengineering approaches. MDA provides platform independent and specific models that can be used to generate code for different platforms, formalizing the mapping of services between source and target platforms. Several papers are referenced that propose techniques for static and dynamic analysis of code to generate UML models as part of the reengineering process using MDA.
Clipper: A Low-Latency Online Prediction Serving System (Databricks)
Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy serving loads. However, most machine learning frameworks and systems only address model training and not deployment.
Clipper is a general-purpose model-serving system that addresses these challenges. Interposing between applications that consume predictions and the machine-learning models that produce predictions, Clipper simplifies the model deployment process by isolating models in their own containers and communicating with them over a lightweight RPC system. This architecture allows models to be deployed for serving in the same runtime environment as that used during training. Further, it provides simple mechanisms for scaling out models to meet increased throughput demands and performing fine-grained physical resource allocation for each model.
In this talk, I will provide an overview of the Clipper serving system and then discuss how to get started using Clipper to serve Spark and TensorFlow models in a production serving environment.
Searching for Quality: Genetic Algorithms and Metamorphic Testing for Softwar... (Annibale Panichella)
More machine learning (ML) models are being introduced to the field of Software Engineering (SE) and have reached a level of maturity that makes them candidates for real-world use. But the real world is complex, and testing these models often lacks explainability, feasibility, and computational capacity. Existing research introduced metamorphic testing to gain additional insight and certainty about a model by applying semantic-preserving changes to input data while observing the model's output. As this is currently done at random places, it can lead to potentially unrealistic data points and high computational costs. With this work, we introduce genetic search as an aid for metamorphic testing in SE ML. Using the delta in output as a fitness function, the evolutionary search optimizes the transformations to produce larger deltas with fewer changes. We perform a case study minimizing F1 and MRR for Code2Vec on a representative sample from java-small, using both genetic and random search. Our results show that within the same amount of time, genetic search achieved a 10% decrease in F1 while random search produced only a 3% drop.
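The search loop described in the abstract can be sketched with a toy stand-in model. The fitness rewards a large output delta achieved with few semantic-preserving edits; every component here (model, mutation, weights) is illustrative, not the paper's setup.

```python
# Genetic search for metamorphic testing: evolve semantic-preserving edits
# (here, rename-like token changes) that maximize the change in model output.
import random

def model_score(tokens):                 # hypothetical model under test
    return sum(len(t) for t in tokens if t.startswith("v"))

def mutate(tokens, rng):
    tokens = list(tokens)
    i = rng.randrange(len(tokens))
    tokens[i] = tokens[i] + "x"          # rename-like, semantics-preserving
    return tokens

def fitness(original, variant):
    delta = abs(model_score(original) - model_score(variant))
    edits = sum(a != b for a, b in zip(original, variant))
    return delta - 0.1 * edits           # prefer high delta, few edits

def genetic_search(original, generations=30, pop_size=8, seed=1):
    rng = random.Random(seed)
    population = [mutate(original, rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda v: fitness(original, v), reverse=True)
        parents = population[: pop_size // 2]       # selection
        population = parents + [mutate(rng.choice(parents), rng)
                                for _ in range(pop_size - len(parents))]
    return max(population, key=lambda v: fitness(original, v))

best = genetic_search(["var1", "n", "var2"])
```

Random search would sample edits uniformly; the selection step is what concentrates effort on transformations that actually move the model's output.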
This document provides an overview of JVM JIT compilers, specifically focusing on the HotSpot JVM compiler. It discusses the differences between static and dynamic compilation, how just-in-time compilation works in the JVM, profiling and optimizations performed by JIT compilers like inlining and devirtualization, and how to monitor the JIT compiler through options like -XX:+PrintCompilation and -XX:+PrintInlining.
Similar to Change-driven model transformations (20)
The document discusses cloud-based modeling solutions from IncQuery Labs that enable tool integration. It describes challenges with large-scale collaboration and automation across multiple teams and tools. The IncQuery Model Checking Tool Suite uses a custom query language to perform validation checks and transformations across models stored in a repository. Case studies demonstrate tool integration workflows at companies like Airbus. Live demos of the solutions are also provided.
IncQuery Labs provides cloud-based modeling solutions to enable tool integration in model-based systems engineering (MBSE). Their IncQuery tool suite includes a desktop query authoring tool and backend server that allows running complex queries on large models. IncQuery was used to develop an interoperability platform for Airbus that automates workflows involving transformations between modeling tools and generates reports through a web interface.
MBSE meets Industrial IoT: Introducing the New MagicDraw Plug-in for RTI Co... (Istvan Rath)
Slides of the talk at the MBSE Cyber Experience Symposium 2019 (https://mbsecyberexperience2019.com/speakers/abstracts/item/mbse-meets-industrial-iot-introducing-the-new-magicdraw-connext-dds-plug-in)
IncQuery Server for Teamwork Cloud - Talk at IW2019 (Istvan Rath)
IncQuery Server provides scalable query evaluation over collaborative model repositories. It uses a hybrid database technology that is 10-100x faster than conventional databases and supports large models and complex queries. IncQuery Server integrates with MagicDraw and Teamwork Cloud to enable version control, access control, and customizable queries for model validation and impact analysis.
Easier smart home development with simulators and rule engines (Istvan Rath)
The document discusses using simulators and rule engines like Drools Fusion to make smart home development easier. It presents a smart home demonstrator that uses a HomeIO MQTT adapter, an extended event bus, and Drools rules to integrate a simulator with OpenHAB. Rules provide a simple yet flexible way to program smart home logic. The demonstrator source code is open source and available on GitHub to help developers prototype and test smart home applications.
- The VIATRA framework provides a model query and transformation engine for design tools, with applications in systems engineering.
- It features a declarative query language called VQL, Java and Xtend APIs, and a reactive engine for live queries and transformations.
- VIATRA helps validate design rules on large models, allowing designers to be immediately notified of violations during architecture design. It can efficiently query models with millions of elements.
Smarter internet of things with stream and event processing virtual io_t_meet... (Istvan Rath)
This document summarizes a presentation on using stream and event processing for smarter IoT applications. It introduces concepts like IoT, stream processing, complex event processing (CEP), and discusses how IncQuery Labs' smart home CEP demonstrator uses Drools Fusion for CEP integrated with Eclipse SmartHome and OpenHAB. The demonstrator features a home simulator, extended event bus, and sample rules. It aims to make smart home development easier by bringing CEP capabilities to the edge for low latency offline operation.
Modes3: Model-based Demonstrator for Smart and Safe Systems (Istvan Rath)
A talk on Modes3, presented at the IoT Budapest Meetup (April 2017). https://www.meetup.com/IoT-Budapest/events/238267893/
More information:
http://inf.mit.bme.hu/en/research/projects/modes3
https://github.com/FTSRG/BME-MODES3
http://modes3.tumblr.com
Eclipse DemoCamp Budapest 2016 November: Best of EclipseCon Europe 2016 (Istvan Rath)
In this DemoCamp talk, I summarize the most important topics and technologies from the EclipseCon Europe 2016 and SiriusCon 2016 conferences, complemented with some subjective opinions and hunches about technology trends.
Exploring the Future of Eclipse Modeling: Web and Semantic Collaboration (Istvan Rath)
This document discusses a new framework for semantic collaboration on Eclipse modeling projects. It aims to provide fine-grained access control for modeling assets while retaining compatibility with traditional version control systems. The framework uses model queries and transformations to filter models on the server-side according to access rules. This allows for rule-based, context-aware access policies without modifying modeling tools or infrastructure. A demonstration of the framework showed how standard version control features like locking, history and merging still work while providing improved security and flexibility over file-based access control. The framework was presented at MODELS 2016 and the authors are looking for contributors to help bring it to Eclipse.
IoT Supercharged: Complex event processing for MQTT with Eclipse technologies (Istvan Rath)
Slides for our talk at EclipseCon Europe 2015. More details at https://www.eclipsecon.org/europe2015/session/iot-supercharged-complex-event-processing-mqtt-eclipse-technologies
Xcore meets IncQuery: How the New Generation of DSLs are Made (Istvan Rath)
Slides for the presentation at EclipseCon Europe 2013.
For more details, see
http://www.eclipsecon.org/europe2013/xcore-meets-incquery-how-new-generation-dsls-are-made
http://incquery.net/blog/2013/10/xcore-meets-incquery-how-new-generation-dsls-are-made-talk-eclipsecon-europe-2013
The SENSORIA Development Environment is a CASE tool for service-oriented architecture (SOA) development from the SENSORIA EU FP6 project. It has 19 partners from 7 countries over 4 years with 4 million Euro funding. The tool provides an integrated platform for SOA development tools, allowing tools to be discovered, installed, composed, and orchestrated as services. The environment is based on Eclipse and OSGi services. It addresses challenges in SOA such as service specification, composition correctness, and continuous operation in changing environments.
Efficient model transformations by combining pattern matching strategies (Istvan Rath)
This document discusses combining different pattern matching strategies to improve efficiency in model transformations. It presents research from the Budapest University of Technology and Economics on using both local search-based and incremental pattern matching techniques. The researchers developed a hybrid approach that selects the most efficient strategy for each pattern to optimize performance and memory usage during model transformations.
Incremental pattern matching in the VIATRA2 model transformation system (Istvan Rath)
Incremental pattern matching allows model transformations to update target models incrementally based on changes to the source model. The VIATRA model transformation system implements incremental pattern matching using a RETE network to efficiently retrieve matching sets as models change. Benchmark results show near-linear performance for sparse models and constant execution time for certain patterns. Future work includes improving construction algorithms and enabling event-driven live transformations.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U delivers a detailed, well-structured overview of license inventory and usage through a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment: you retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring a fixed Total Cost of Ownership (TCO) and exceptional service through the SAP Fiori interface.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname, deneb_alpha, comes from).
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI into UiPath's test automation solution, using OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which include databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires first gathering information about the business processes that need to be analysed. These processes must be translated into so-called star schemas, that is, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
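The star schema idea described above can be sketched concretely. The following is a minimal illustration using Python's built-in sqlite3 module; the table and column names are invented for the example and are not taken from the webinar:

```python
# Minimal star schema sketch (illustrative names, not from the webinar).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables: descriptive attributes, one row per business entity.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT)")

# Fact table: one row per measurement at a fixed granularity
# (here: one row per product per day), referencing the dimensions.
cur.execute("""CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    units_sold INTEGER,
    revenue    REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(10, "2024-06-01", "2024-06"), (11, "2024-06-02", "2024-06")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 10, 5, 50.0), (1, 11, 3, 30.0), (2, 10, 2, 40.0)])

# Typical analytical query: aggregate the facts, sliced by a dimension.
rows = cur.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY p.category""").fetchall()
print(rows)  # one aggregated row per category
```

Note how the fact table stays narrow and denormalised queries need only one join per dimension, which is exactly what makes the star shape convenient for analysis.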
1. Change-Driven Model Transformations
Derivation and Processing of Change Histories
István Ráth, Gergely Varró, Dániel Varró
rath@mit.bme.hu
Budapest University of Technology and Economics
Model Driven Engineering Languages and Systems 2009, Denver, Colorado, USA
6. Model synchronization and time
On-demand: batch transformations
o The “traditional” way
Instantly: live transformations
o React instantly to context (model) changes
• “event-driven” transformations
o Hearnden, Lawley, Raymond. Incremental Model Transformation for the Evolution of Model-driven Systems. MODELS 2006.
o Ráth, Ökrös, Bergmann, Varró. Live model transformations driven by incremental pattern matching. ICMT 2008.
• Transaction-oriented approach
• Reactions possible to arbitrarily complex changes
7. Live model synchronization
[Diagram: a change takes the source model MA to MA'; the synchronization (1) watches for changes, (2) reacts to the changes, and (3) merges them, taking the target model MB to MB'.]
8. Model synchronization and time
(Recap of slide 6, annotated with the common assumptions shared by both approaches:)
1. All models are available in memory
2. Changes are propagated “synchronously”
9. Asynchronous synchronization
What if…
o Some models cannot (or should not) be materialized in memory?
• Models are too large
• Models have to be manipulated “inside” their native environment (tool)
o Changes are to be applied/reproduced “later”?
• Changes have to be recorded, e.g. for traceability
Asynchronous (off-line) synchronization
10. Motivating scenario
[Diagram: a change takes the source model MA (a high-level, domain-specific process model) to MA'; the change is recorded into a trace; the open question is how to propagate it through an interface (IF) to the target model MB (a deployed process template in jPDL), yielding MB'.]
11. Case study and challenges
Tool integration in a heterogeneous environment
o Developed for the SENSORIA and MOGENTES EU research projects
High-level process models describe (complex) development process segments
o E.g. automated test generation, deployment configuration generation
Processes are executed in
o A distributed environment (workstations, tool servers)
o Orchestrated by the jBPM process execution engine.
13. Conceptual overview
[Diagram: (1) record changes of the source model MA into traceability models, the change history models (CHMs); (2) map source changes to target changes (CHMs to CHMs) instead of source models to target models; (3) apply the target changes to external models through an interface (IF), taking MB to MB'.]
14. Change history models
Traceability models CHMA, CHMB
o Operational difference models
o Record historical operation sequences
• WHEN (timestamps in a linked list structure)
• WHAT (CUDM)
• Context (referenced model elements)
o “weak” references
• IDs or FQNs
• Allow referencing external (non- or partially materialized) models
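The change history model described on this slide can be sketched as a small data structure. The following Python rendering is illustrative; the class and field names are assumptions, not the paper's metamodel, but it shows the timestamped linked list of operations with weak FQN references:

```python
# Sketch of a change history model (CHM): a timestamped linked list of
# operations with "weak" (FQN-based) references to model elements.
# Class and field names are illustrative, not the paper's metamodel.
import time
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    timestamp: float           # WHEN
    operation: str             # WHAT: create / update / delete, ...
    target_fqn: str            # context: weak reference by fully qualified name
    attributes: dict = field(default_factory=dict)
    next: "ChangeRecord | None" = None   # linked-list structure

class ChangeHistory:
    def __init__(self):
        self.head = None
        self.tail = None

    def record(self, operation, target_fqn, **attributes):
        rec = ChangeRecord(time.time(), operation, target_fqn, attributes)
        if self.tail is None:
            self.head = rec
        else:
            self.tail.next = rec
        self.tail = rec
        return rec

chm = ChangeHistory()
chm.record("create", "workflow.invocation1", type="Invocation")
chm.record("create", "workflow.invocation1.parameters", type="DataInput")

# Replay in order: because the references are plain FQNs, the changes can
# be applied later, against a model never materialized in this process.
ops = []
rec = chm.head
while rec is not None:
    ops.append((rec.operation, rec.target_fqn))
    rec = rec.next
print(ops)
```

The weak references are the key design point: they decouple the history from any in-memory model, which is what makes the asynchronous, off-line synchronization of the earlier slides possible.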
16. Generation of CHMs
Live transformations
o Editor-independent!
o Generate trace model snippets (CHMA) as the user is editing the source model MA
o Timestamps
o Contextual references
17. Generation of CHMs: generic example
[Diagram: a sample execution sequence. Creating an entity E of some Type under a Parent yields a CreateEntity record (CE) with a timestamp, the entity's name, and weak FQN references to the parent, target, and type. Creating a relation R between Src and Trg yields a CreateRelation record (CR) with a timestamp, a name, and FQN references to the new source, new target, and type. Entity and Relation are basic VIATRA concepts for graph node and edge.]
18. Generating domain-specific CHMs
[Diagram: (1) use a compound pattern as precondition, corresponding to a (complex) model structure: a Workflow containing an Invocation with DataInput and DataOutput; (2a) create a compound CHM sequence as postcondition: a CreateJPDLNode record with a timestamp, followed by CreateJPDLAttribute records for the node's “.parameters” and “.returns” attributes carrying the DataInput/DataOutput values; (2b) use a “compressed” CHM element corresponding to a complex domain-specific operation.]
19. Change-driven transformations
Input:
o Changes of the source model (CHMA)
Output:
o Corresponding changes of the target model (CHMB)
May be formulated as:
o Live transformation
o Batch transformation
Granularity?
o “one-to-one”
o “n-to-m”
20. Mapping CHMs
[Diagram: sample execution continued. For each newly created Invocation (a CreateEntity record whose typeFQN is meta.Invocation), create a corresponding jPDL node, i.e. a CreateJPDLNode record with targetID name(Parent)+”.”+name(E) and parentID name(Parent), together with its “function” attribute, a CreateJPDLAttribute record whose targetTextValue carries the invocation's functionName (= domain-specific mapping logic).]
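The mapping rule on this slide can be sketched in code. The following Python rendering is illustrative: the record format and the rule machinery are assumptions; only the operation names (CreateEntity, CreateJPDLNode, CreateJPDLAttribute) and the ID scheme come from the slide:

```python
# Sketch of change-driven mapping: map source change records to target
# change records (CHM to CHM), instead of source models to target models.
# Operation names mirror the slide; the dict-based format is an assumption.
def map_change(src):
    """Map one source change record to a list of target change records."""
    out = []
    if src["op"] == "CreateEntity" and src["type"] == "Invocation":
        node_id = src["parent"] + "." + src["name"]
        # For each newly created Invocation, create a jPDL node...
        out.append({"op": "CreateJPDLNode",
                    "targetID": node_id, "parentID": src["parent"]})
        # ...together with its "function" attribute.
        out.append({"op": "CreateJPDLAttribute",
                    "targetID": node_id + ".function", "parentID": node_id,
                    "targetTextValue": src.get("functionName", "")})
    return out

src_change = {"op": "CreateEntity", "type": "Invocation",
              "parent": "workflow", "name": "invoke1",
              "functionName": "generateTests"}
tgt_changes = map_change(src_change)
print([c["op"] for c in tgt_changes])
```

Note the “n-to-m” granularity from the previous slide: one source change produces two target changes here, and a compressed rule could just as well consume several source records at once.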
21. Applying CHMs to external models
Applying CHMs = model “interpretation”
External models are manipulated through a (service) interface
o VIATRA: “native functions”
[Diagram: the target changes CHMB are applied through the interface (IF), taking MB to MB'.]
23. Summary
Change-driven transformations =
o An innovative synthesis of known techniques:
• Trace models
• Live transformations
• Non-materialized model manipulation
o A solution for an engineering problem
o Lots of open questions and new ideas…
24. Future work
A beginning, rather than an end…
Lots of open questions
o How to write CDTs?
o How to generate CDTs from “traditional” transformations?
Are they useful?
o Efficient, intuitive model synchronization
o Change representation, processing ((re)verification, change impact analysis)
o Model merging (~operational merging)
Thank you for your attention.