Incremental pattern matching allows model transformations to update target models incrementally based on changes to the source model. The VIATRA model transformation system implements incremental pattern matching using a RETE network to efficiently retrieve matching sets as models change. Benchmark results show near-linear performance for sparse models and constant execution time for certain patterns. Future work includes improving construction algorithms and enabling event-driven live transformations.
Liszt: Los Alamos National Laboratory, Aug 2011 (Ed Dodds)
Liszt is a domain specific language for building portable mesh-based partial differential equation (PDE) solvers. It provides domain specific language features like mesh elements, topology functions, fields, and parallel for comprehensions to solve problems related to parallelism, data locality, and synchronization that arise when programming complex PDE solvers for parallel computers. The Liszt compiler analyzes code written in the Liszt language to extract data dependencies and generate optimized code for different hardware platforms like clusters, shared memory machines, and GPUs.
The document discusses Intel Threading Building Blocks (TBB), a C++ template library for parallel programming. TBB provides features like parallel_for to simplify parallelizing loops across CPU cores without managing threads directly. It uses generic programming principles and provides common parallel algorithms, concurrent data structures, and synchronization primitives to make parallel programming more accessible. TBB aims to improve both correctness through avoiding race conditions and performance through efficient hardware utilization.
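Compiling a real tbb::parallel_for call requires the TBB library itself, but the range-splitting idea behind it can be sketched with standard C++ threads. This is a deliberately simplified stand-in, not TBB's implementation: real TBB splits ranges recursively and balances load with a work-stealing scheduler.

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Apply `body` to every index in [first, last), handing one contiguous
// chunk of the range to each hardware thread. A simplified stand-in for
// tbb::parallel_for: no work stealing, no recursive range splitting.
void naive_parallel_for(long first, long last,
                        const std::function<void(long)>& body) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    long n = last - first;
    long chunk = (n + workers - 1) / workers;  // ceiling division
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        long lo = first + (long)w * chunk;
        long hi = std::min(last, lo + chunk);
        if (lo >= hi) break;  // more workers than chunks
        pool.emplace_back([lo, hi, &body] {
            for (long i = lo; i < hi; ++i) body(i);
        });
    }
    for (auto& t : pool) t.join();  // join before returning, like TBB
}
```

As with the real library, the caller supplies only the range and the loop body, e.g. `naive_parallel_for(0, n, [&](long i) { out[i] = f(i); });` — thread creation, chunking, and joining are hidden behind the call.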
This document provides an introduction to OpenMP, a standard for parallel programming using shared memory. OpenMP uses compiler directives like #pragma omp parallel to create threads that can access shared data. It uses a fork-join model where the master thread creates worker threads to execute blocks of code in parallel. OpenMP supports work sharing constructs like parallel for loops and sections to distribute work among threads, and synchronization constructs like barriers to coordinate thread execution. Variables can be declared as private to each thread or shared among all threads.
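A reduction over an array illustrates how little code the directive approach needs. This is an illustrative sketch (`parallel_sum` is a made-up name); compilers built without OpenMP support simply ignore the pragma and run the loop serially.

```cpp
#include <vector>

// Sum a vector in parallel. At the pragma, OpenMP forks worker threads,
// splits the loop iterations among them, and joins at the end of the
// loop (the fork-join model). reduction(+:sum) gives each thread a
// private partial sum and combines them afterwards, avoiding a data
// race on the shared accumulator.
double parallel_sum(const std::vector<double>& v) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)v.size(); ++i)
        sum += v[i];
    return sum;
}
```

The result is the same with or without OpenMP enabled; only the wall-clock time changes.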
Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next is executed.
C++ and OpenMP can be used together to create fast and maintainable parallel programs. However, there are some challenges to parallelizing C++ code using OpenMP due to inconsistencies between the C++ and OpenMP specifications. Objects used in OpenMP clauses like shared, private, and firstprivate require special handling of constructors, destructors, and assignment operators. Parallelizing C++ loops can also be problematic if the loop index is not an integer type or if the loop uses STL iterators. STL containers introduce additional issues for parallelization related to initialization and data distribution across processors.
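A minimal sketch of why the special member functions matter (the `Counter` type is hypothetical): each data-sharing clause silently invokes a different special member, so all of them must be accessible and well-behaved.

```cpp
// A type whose special member functions matter to OpenMP data clauses:
// private default-constructs one instance per thread, firstprivate
// copy-constructs each thread's instance from the original, and
// lastprivate copy-assigns the final iteration's value back to it.
struct Counter {
    int value;
    Counter() : value(0) {}                                // used by private
    Counter(const Counter& other) : value(other.value) {}  // used by firstprivate
    Counter& operator=(const Counter& other) {             // used by lastprivate
        value = other.value;
        return *this;
    }
};

// Each thread works on its own copy of `c`, initialized via the copy
// constructor; with OpenMP enabled, the thread-local copies are
// destroyed at the end of the region and the original is untouched.
int firstprivate_demo() {
    Counter c;
    c.value = 10;
    #pragma omp parallel firstprivate(c)
    {
        c.value += 1;  // modifies the thread-local copy only
    }
    return c.value;    // 10 when compiled with OpenMP support
}
```

If `Counter` had a deleted copy constructor, the `firstprivate(c)` clause would fail to compile — exactly the kind of C++/OpenMP interaction the specification inconsistencies make easy to overlook.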
Directive-based approach to Heterogeneous Computing (Ruymán Reyes)
The document discusses a directive-based approach to heterogeneous computing. It describes how applications used in HPC centers commonly use MPI and OpenMP programming models. It also discusses how complexity arises from mixing different Fortran dialects and the need for faster ways to migrate code to new architectures like accelerators without rewriting the code. The document proposes using directives to enhance legacy code for heterogeneous systems in a portable way.
OpenMP and MPI are two common APIs for parallel programming. OpenMP uses a shared memory model where threads have access to shared memory and can synchronize access. It is best for multi-core processors. MPI uses a message passing model where separate processes communicate by exchanging messages. It provides portability and is useful for distributed memory systems. Both have advantages like performance and portability but also disadvantages like difficulty of debugging for MPI. Future work may include improvements to threading support and fault tolerance in MPI.
The document discusses parallel programming approaches for multicore processors, advocating for using Haskell and embracing diverse approaches like task parallelism with explicit threads, semi-implicit parallelism by evaluating pure functions in parallel, and data parallelism. It argues that functional programming is well-suited for parallel programming due to its avoidance of side effects and mutable state, but that different problems require different solutions and no single approach is a silver bullet.
Live model transformations driven by incremental pattern matching (Istvan Rath)
Live model transformations can be driven incrementally by detecting changes to the matching set of patterns over the model. The VIATRA implementation uses RETE networks to efficiently maintain and update the matching sets when models change. This enables live transformations to respond instantly to modifications by mapping only the changes to the target model. Future work aims to improve performance further and enhance the language for debugging and static analysis of live transformations.
A benchmark evaluation for incremental pattern matching in graph transformation (Istvan Rath)
This document summarizes a presentation given at the Budapest University of Technology and Economics about benchmarking incremental pattern matching in graph transformation. It describes two case studies used as benchmarks: model simulation using Petri nets and object-relational mapping for synchronization. Performance results are presented showing that the RETE algorithm provides predictable linear scaling for incremental pattern matching in practical problems.
The document discusses a new compiler architecture for the Dotty Scala Compiler (dsc) that takes inspiration from functional databases. The architecture treats all values as time-varying functions indexed by compilation phase. This allows the compiler to answer questions about program elements by looking up their meaning at a specific point in time. The core data types include time-indexed abstract syntax trees, types, references to declarations, and denotations, which capture the meaning of references. Caching is used to efficiently store and retrieve values across phases.
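The phase-indexed lookup idea can be sketched outside the compiler. The toy types below are not dsc's actual denotation API; they only illustrate treating a symbol's meaning as a function of the compilation phase, with per-(symbol, phase) caching.

```cpp
#include <map>
#include <string>
#include <utility>

// Toy phase-indexed store: the meaning ("denotation") of a symbol is a
// function of the compilation phase, and entries are keyed by
// (symbol, phase) so lookups at any point in time hit the cache.
struct DenotationStore {
    std::map<std::pair<std::string, int>, std::string> cache;

    // Record the meaning of `symbol` as of phase `phase`.
    void define(const std::string& symbol, int phase,
                const std::string& meaning) {
        cache[{symbol, phase}] = meaning;
    }

    // Look up the meaning valid at `phase`: walk back to the most
    // recent definition at or before that phase.
    std::string at(const std::string& symbol, int phase) const {
        for (int p = phase; p >= 0; --p) {
            auto it = cache.find({symbol, p});
            if (it != cache.end()) return it->second;
        }
        return "<undefined>";
    }
};
```

A phase that changes a declaration records a new entry; earlier phases still see the old meaning, which is the "time-varying function" view the architecture is built on.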
Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study us... (Xavier Llorà)
Data-intensive computing has positioned itself as a valuable programming paradigm to efficiently approach problems requiring processing very large volumes of data. This paper presents a pilot study about how to apply the data-intensive computing paradigm to evolutionary computation algorithms. Two representative cases (selectorecombinative genetic algorithms and estimation of distribution algorithms) are presented, analyzed, and discussed. This study shows that equivalent data-intensive computing evolutionary computation algorithms can be easily developed, providing robust and scalable algorithms for the multicore-computing era. Experimental results show how such algorithms scale with the number of available cores without further modification.
Go is a general purpose programming language created by Google. It is statically typed, compiled, garbage collected, and memory safe. Go has good support for concurrency with goroutines and channels. It has a large standard library and integrates well with C. Some key differences compared to other languages are its performance, explicit concurrency model, and lack of classes. Common data types in Go include arrays, slices, maps, structs and interfaces.
Incremental pattern matching in the VIATRA2 model transformation framework (Istvan Rath)
This document discusses incremental pattern matching in the VIATRA2 model transformation framework. It introduces incremental pattern matching using the RETE algorithm as implemented in VIATRA2. The RETE algorithm caches pattern matches and incrementally updates them as the model changes. This allows pattern matching to be performed incrementally for efficient model transformations on evolving models. The document outlines how RETE networks are constructed from patterns and how they are updated based on model changes notified through the VIATRA framework. Initial performance analysis is discussed to compare incremental versus local search approaches.
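Independently of VIATRA2's actual RETE node classes, the core idea (keep the match set cached and update it from change deltas rather than re-searching the model) can be sketched with a toy one-pattern matcher; the types below are hypothetical.

```cpp
#include <cstddef>
#include <set>
#include <utility>

// A toy single-pattern "RETE node": it caches every (source, target)
// edge currently matching the pattern, and maintains that cache from
// individual model-change notifications instead of rescanning the
// whole model on each query.
struct EdgePatternMatcher {
    std::set<std::pair<int, int>> matches;  // the cached match set

    // Invoked by the model's notification mechanism on edge insertion:
    // only the delta is processed.
    void onEdgeAdded(int src, int dst)   { matches.insert({src, dst}); }

    // Invoked on edge deletion; the cached match is retracted.
    void onEdgeRemoved(int src, int dst) { matches.erase({src, dst}); }

    // Queries are now constant-time lookups into the cache.
    std::size_t matchCount() const { return matches.size(); }
};
```

A real RETE network generalizes this by chaining such nodes (joins, filters) so that deltas propagate through the network, but the trade-off is the same: memory for the cached partial matches in exchange for cheap queries on an evolving model.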
LoLA is an explicit-state model checker for Petri nets that focuses on standard properties and uses many reduction techniques such as stubborn sets, symmetries, and sweep-line heuristics to efficiently analyze large state spaces. It takes Petri nets as input in the form of place/transition nets or high-level algebraic nets and allows users to specify verification tasks involving properties such as boundedness, reachability, and temporal logics. LoLA is open source and has been used in several case studies to generate experimental results tables exploring the impact of basic design decisions.
1) OpenDA is a data assimilation toolbox that allows for both data assimilation and model calibration in a generic way.
2) It has an object oriented design that allows components like models and algorithms to be easily exchanged.
3) OpenDA supports parallel computing concepts and various ways of integrating models, including keeping models as "black boxes".
This document discusses graph processing and the need for distributed graph frameworks. It provides examples of real-world graph sizes that are too large for a single machine to process. It then summarizes some of the key challenges in parallel graph processing like irregular structure and data transfer issues. Several graph processing frameworks are described including Pregel, GraphLab, PowerGraph, and LFGraph. LFGraph is presented as a simple and fast distributed graph analytics framework that aims to have low pre-processing, load-balanced computation and communication, and low memory footprint compared to previous frameworks. The document provides examples and analyses to compare the computation and communication characteristics of different frameworks. It concludes by discussing some open questions and potential areas for improvement in LFGraph.
Wrapper induction: construct wrappers automatically to extract information f... (George Ang)
Wrapper induction is a technique to automatically generate wrappers to extract information from web sources. It involves learning extraction rules from labeled examples to construct a wrapper as a finite state machine or set of delimiters. Two main wrapper induction systems are WIEN, which defines wrapper classes including LR, and STALKER, which uses a more expressive model with extraction rules and landmarks to handle structure hierarchically. Remaining challenges include selecting informative examples, generating label pages automatically, and developing more expressive models.
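At extraction time, an LR wrapper of the kind WIEN learns reduces to delimiter scanning. The sketch below hand-codes that step (a real system induces the left/right delimiters from labeled example pages rather than taking them as arguments).

```cpp
#include <string>
#include <vector>

// Extract every substring enclosed between a left and a right
// delimiter -- the execution half of an LR wrapper. The induction
// half (learning which delimiters to use from labeled examples) is
// what systems like WIEN automate.
std::vector<std::string> extractAll(const std::string& page,
                                    const std::string& left,
                                    const std::string& right) {
    std::vector<std::string> out;
    std::size_t pos = 0;
    while ((pos = page.find(left, pos)) != std::string::npos) {
        pos += left.size();
        std::size_t end = page.find(right, pos);
        if (end == std::string::npos) break;  // unbalanced delimiters
        out.push_back(page.substr(pos, end - pos));
        pos = end + right.size();
    }
    return out;
}
```

For example, `extractAll("<b>Foo</b><b>Bar</b>", "<b>", "</b>")` yields the two fields "Foo" and "Bar"; a multi-field LR wrapper applies one such delimiter pair per column of the extracted tuples.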
CETPA INFOTECH PVT LTD is an Indian IT education and training brand working in three main domains: IT training services, software and embedded product development, and consulting services.
http://www.cetpainfotech.com
Process the Twitter stream using Storm & Redstorm with Ruby & JRuby. Full working demo, code on github https://github.com/colinsurprenant/tweitgeist and live demo http://tweitgeist.needium.com/
"Source Code Abstracts Classification Using CNN", Vadim Markovtsev, Lead Soft... (Dataconomy Media)
"Source Code Abstracts Classification Using CNN", Vadim Markovtsev, Lead Software Engineer - Machine Learning Team at source{d}
Watch more from Data Natives Berlin 2016 here: http://bit.ly/2fE1sEo
Visit the conference website to learn more: www.datanatives.io
Follow Data Natives:
https://www.facebook.com/DataNatives
https://twitter.com/DataNativesConf
Stay Connected to Data Natives by Email: Subscribe to our newsletter to get the news first about Data Natives 2016: http://bit.ly/1WMJAqS
About the Author:
Currently Vadim is a Senior Machine Learning Engineer at source{d} where he works on deep neural networks that aim to understand all of the world's developers through their code. Vadim is one of the creators of the distributed deep learning platform Veles (https://velesnet.ml) while working at Samsung. Afterwards Vadim was responsible for the machine learning efforts to fight email spam at Mail.Ru. In the past Vadim was also a visiting associate professor at Moscow Institute of Physics and Technology, teaching about new technologies and conducting ACM-like internal coding competitions. Vadim is also a big fan of GitHub (vmarkovtsev) and HackerRank (markhor), as well as likes to write technical articles on a number of web sites.
The document summarizes model-driven engineering (MDE) and discusses approaches to reuse in MDE transformations. Specifically:
- MDE aims to increase abstraction in software development by modeling at a higher level of abstraction rather than coding directly. Models are used to describe problems, simulate/verify/test, and generate code.
- Reusing MDE artifacts like transformations is challenging as they are defined for specific meta-models. Current practice involves ad-hoc copying and adapting transformations, which is error-prone.
- The document presents three approaches to improve reuse: concepts, multi-level modeling, and a-posteriori typing. Concepts define transformations at a more abstract level and allow automated adaptation.
To date, Hadoop usage has focused primarily on offline analysis--making sense of web logs, parsing through loads of unstructured data in HDFS, etc. But what if you want to run map/reduce against your live data set without affecting online performance? Combining Hadoop with Cassandra's multi-datacenter replication capabilities makes this possible. If you're interested in getting value from your data without the hassle and latency of first moving it into Hadoop, this talk is for you. I'll show you how to connect all the parts, enabling you to write map/reduce jobs or run Pig queries against your live data. As a bonus I'll cover writing map/reduce in Scala, which is particularly well-suited for the task.
The document discusses cloud-based modeling solutions from IncQuery Labs that enable tool integration. It describes challenges with large-scale collaboration and automation across multiple teams and tools. The IncQuery Model Checking Tool Suite uses a custom query language to perform validation checks and transformations across models stored in a repository. Case studies demonstrate tool integration workflows at companies like Airbus. Live demos of the solutions are also provided.
IncQuery Labs provides cloud-based modeling solutions to enable tool integration in model-based systems engineering (MBSE). Their IncQuery tool suite includes a desktop query authoring tool and backend server that allows running complex queries on large models. IncQuery was used to develop an interoperability platform for Airbus that automates workflows involving transformations between modeling tools and generates reports through a web interface.
Similar to Incremental pattern matching in the VIATRA2 model transformation system
MBSE meets Industrial IoT: Introducing the New MagicDraw Plug-in for RTI Co... (Istvan Rath)
Slides of the talk at the MBSE Cyber Experience Symposium 2019 (https://mbsecyberexperience2019.com/speakers/abstracts/item/mbse-meets-industrial-iot-introducing-the-new-magicdraw-connext-dds-plug-in)
IncQuery Server for Teamwork Cloud - Talk at IW2019 (Istvan Rath)
IncQuery Server provides scalable query evaluation over collaborative model repositories. It uses a hybrid database technology that is 10-100x faster than conventional databases and supports large models and complex queries. IncQuery Server integrates with MagicDraw and Teamwork Cloud to enable version control, access control, and customizable queries for model validation and impact analysis.
Easier smart home development with simulators and rule engines (Istvan Rath)
The document discusses using simulators and rule engines like Drools Fusion to make smart home development easier. It presents a smart home demonstrator that uses a HomeIO MQTT adapter, an extended event bus, and Drools rules to integrate a simulator with OpenHAB. Rules provide a simple yet flexible way to program smart home logic. The demonstrator source code is open source and available on GitHub to help developers prototype and test smart home applications.
- The VIATRA framework provides a model query and transformation engine for design tools, with applications in systems engineering.
- It features a declarative query language called VQL, Java and Xtend APIs, and a reactive engine for live queries and transformations.
- VIATRA helps validate design rules on large models, allowing designers to be immediately notified of violations during architecture design. It can efficiently query models with millions of elements.
Smarter internet of things with stream and event processing virtual io_t_meet...Istvan Rath
This document summarizes a presentation on using stream and event processing for smarter IoT applications. It introduces concepts like IoT, stream processing, complex event processing (CEP), and discusses how IncQuery Labs' smart home CEP demonstrator uses Drools Fusion for CEP integrated with Eclipse SmartHome and OpenHAB. The demonstrator features a home simulator, extended event bus, and sample rules. It aims to make smart home development easier by bringing CEP capabilities to the edge for low latency offline operation.
Modes3: Model-based Demonstrator for Smart and Safe SystemsIstvan Rath
A talk on Modes3, presented at the IoT Budapest Meetup (April 2017). https://www.meetup.com/IoT-Budapest/events/238267893/
More information:
http://inf.mit.bme.hu/en/research/projects/modes3
https://github.com/FTSRG/BME-MODES3
http://modes3.tumblr.com
Eclipse DemoCamp Budapest 2016 November: Best of EclipseCon Europe 2016Istvan Rath
Ebben a DemoCamp előadásban az EclipseCon Europe 2016 és SiriusCon 2016 konferenciák legfontosabb témáit, technológiáit foglalom össze, kiegészítve néhány szubjektív véleménnyel és megérzéssel a technológiai trendekről.
Exploring the Future of Eclipse Modeling: Web and Semantic CollaborationIstvan Rath
This document discusses a new framework for semantic collaboration on Eclipse modeling projects. It aims to provide fine-grained access control for modeling assets while retaining compatibility with traditional version control systems. The framework uses model queries and transformations to filter models on the server-side according to access rules. This allows for rule-based, context-aware access policies without modifying modeling tools or infrastructure. A demonstration of the framework showed how standard version control features like locking, history and merging still work while providing improved security and flexibility over file-based access control. The framework was presented at MODELS 2016 and the authors are looking for contributors to help bring it to Eclipse.
IoT Supercharged: Complex event processing for MQTT with Eclipse technologiesIstvan Rath
Slides for our talk at EclipseCon Europe 2015. More details at https://www.eclipsecon.org/europe2015/session/iot-supercharged-complex-event-processing-mqtt-eclipse-technologies
Xcore meets IncQuery: How the New Generation of DSLs are MadeIstvan Rath
Slides for the presentation at EclipseCon Europe 2013.
For more details, see
http://www.eclipsecon.org/europe2013/xcore-meets-incquery-how-new-generation-dsls-are-made
http://incquery.net/blog/2013/10/xcore-meets-incquery-how-new-generation-dsls-are-made-talk-eclipsecon-europe-2013
EMF-IncQuery 0.7 Presentation for ItemisIstvan Rath
The document introduces EMF-INCQUERY, a model query engine for Eclipse Modeling Framework (EMF) models. It provides an expressive graph pattern query language and incremental query evaluation based on the Rete algorithm. This enables efficient complex queries over large models. EMF-INCQUERY addresses performance issues of model queries in modeling tools and simplifies writing complex queries through reusable query libraries and pattern composition. It integrates with EMF-based applications and provides features like on-the-fly validation and view maintenance.
Event-driven Model Transformations in Domain-specific Modeling LanguagesIstvan Rath
This PhD thesis by István Ráth focuses on event-driven model transformations in domain-specific modeling languages. The thesis contains 3 parts: 1) developing concepts for event-driven graph transformations based on incremental pattern matching, 2) applying these concepts to provide advanced language engineering features like simulation, and 3) integrating modeling tools using change-driven transformations. The research aims to address challenges in scalability, usability and tool integration for model-driven software engineering.
The SENSORIA Development Environment is a CASE tool for service-oriented architecture (SOA) development from the SENSORIA EU FP6 project. It has 19 partners from 7 countries over 4 years with 4 million Euro funding. The tool provides an integrated platform for SOA development tools, allowing tools to be discovered, installed, composed, and orchestrated as services. The environment is based on Eclipse and OSGi services. It addresses challenges in SOA such as service specification, composition correctness, and continuous operation in changing environments.
Incremental pattern matching in the VIATRA2 model transformation system
1. Incremental pattern matching in the VIATRA model transformation system
Gábor Bergmann, András Ökrös, István Ráth (rath@mit.bme.hu), Dániel Varró, Gergely Varró
Department of Measurement and Information Systems
Budapest University of Technology and Economics
4. Incremental model transformations
• Key usage scenarios for MT:
▫ Mapping between languages
▫ Intra-domain model manipulation
  Model execution
  Validity checking (constraint evaluation)
• They work with evolving models.
▫ Users are constantly changing/modifying them.
▫ Users usually work with large models.
• Problem: transformations are slow
▫ to execute (large models)
▫ and to re-execute again and again (always starting from scratch).
• Solution: incrementality
▫ Take the source model and its mapped counterpart;
▫ Use the information about how the source model was changed;
▫ Map and apply the changes (but ONLY the changes) to the target model.
5. Towards incrementality
• How to achieve incrementality?
▫ Incremental updates: avoid re-generation.
  Don't recreate what is already there.
  Use reference (correspondence) models.
▫ Incremental execution: avoid re-computation.
  Don't recalculate what was already computed.
• How?
6. Incremental graph pattern matching
• Graph transformations require pattern matching
• Goal: retrieve the matching set quickly
• How?
▫ Store (cache) matchings
▫ Update them as the model changes
  Update precisely (incrementality)
• Expected results: good, if…
▫ there is enough memory (*)
▫ queries are dominant
▫ model changes are relatively sparse (**)
▫ e.g. synchronization, constraint evaluation, …
7. Operational overview
[Diagram: the XForm interpreter performs pattern matching and model manipulation on the VIATRA model space; change notifications from the model space reach the incremental pattern matcher as events, and the matcher feeds matching updates back to the interpreter.]
8. Architecture
[Diagram: the VIATRA2 Framework. Core interfaces connect the model parser, XForm parser, XML serializer, XForm interpreter, native importer & loader interface, LS pattern matcher, and incremental pattern matcher to the VIATRA model space and the program model store.]
10. Core idea: use RETE nets
• RETE network
▫ node: (partial) matches of a (sub)pattern
▫ edge: update propagation
• Demonstrating the principle
▫ input: Petri net
▫ pattern: fireable transition
▫ model change: new transition (t3)
[Diagram: a Petri net with places p1–p3, tokens k1–k2, and transitions t1–t3 feeds the input nodes of a RETE network; intermediate nodes hold partial matches such as (p1, k1, t1) and (p2, k2, t3); production nodes hold complete matches such as (p1, k1, t1, p3) and (p2, k2, t2, p3). Adding transition t3 propagates updates through the network.]
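The RETE principle above can be illustrated with a minimal sketch (this is an assumption-laden toy, not VIATRA's implementation): input facts feed a join node that caches partial matches, and a model change propagates only the delta instead of recomputing the matching set.

```python
# Minimal RETE-style incremental join (illustrative sketch only).
# Left memory: (place, token) facts; right memory: (place, transition) facts.
# The join caches (place, token, transition) matches and updates them
# incrementally as new facts arrive.

class JoinNode:
    def __init__(self):
        self.left = set()     # (place, token) partial matches
        self.right = set()    # (place, transition) partial matches
        self.matches = set()  # joined (place, token, transition) tuples

    def insert_left(self, place, token):
        self.left.add((place, token))
        # compute only the NEW combinations -- the incremental delta
        for p, t in self.right:
            if p == place:
                self.matches.add((place, token, t))

    def insert_right(self, place, transition):
        self.right.add((place, transition))
        for p, k in self.left:
            if p == place:
                self.matches.add((place, k, transition))

# Petri-net-like example: tokens on p1 and p2; t1 reads p1, t2 reads p2.
join = JoinNode()
join.insert_left("p1", "k1")
join.insert_left("p2", "k2")
join.insert_right("p1", "t1")
join.insert_right("p2", "t2")
# model change: a new transition t3 on p2 -- only one new match is computed
join.insert_right("p2", "t3")
print(sorted(join.matches))
```

A real RETE network chains many such nodes and also handles deletions, but the key point survives even in this sketch: the cached matching set is read instantly and maintained per change.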
11. RETE network construction
• Key: pattern decomposition
▫ Pattern = set of constraints (defined over pattern variables)
▫ Types of constraints: type, topology (source/target), hierarchy (containment), attribute value, generics (instanceOf/supertypeOf), injectivity, [negative] pattern calls, …
• Construction algorithm (roughly)
▫ 1. Decompose the pattern into elementary constraints (*)
▫ 2. Process the elementary constraints and connect them with appropriate intermediate nodes (JOIN, MINUS-JOIN, UNION, …)
▫ 3. Create the terminating production node
13. Other VIATRA features
• Pattern calls
▫ Simply connect the production nodes
▫ Pattern recursion is fully supported
• OR-patterns
▫ UNION intermediate nodes
• Check conditions
▫ check (value(X) % 5 == 3)
▫ check (length(name(X)) < 4)
▫ check (myFunction(name(X)) != 'myException')
▫ Filter and term evaluator nodes
• Result: full VIATRA transformation language support; any pattern can be matched incrementally.
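The filter/term-evaluator node mentioned above can be sketched as follows (a hypothetical simplification, not VIATRA's API): the node evaluates a check condition on each incoming match and only matches that satisfy it are stored and propagated downstream.

```python
# Sketch of a filter node for check conditions (illustrative, assumed API).

class FilterNode:
    def __init__(self, predicate):
        self.predicate = predicate  # the check condition as a callable
        self.matches = set()        # cached matches that passed the check

    def insert(self, match):
        if self.predicate(match):   # term evaluation
            self.matches.add(match)
            return True             # would propagate downstream
        return False                # filtered out, nothing propagates

# analogous to the slide's check (value(X) % 5 == 3)
node = FilterNode(lambda x: x % 5 == 3)
for value in [3, 8, 10, 13]:
    node.insert(value)
print(sorted(node.matches))  # [3, 8, 13]
```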
14. Updates
• Needed when the model space changes
• VIATRA notification mechanism (EMF is also possible)
▫ Transparent: user modification, model imports, results of a transformation, external modification, … RETE is always updated!
• Input nodes receive elementary modifications and release an update token
▫ Represents a change in the partial matching (+/−)
• Nodes process updates and propagate them if needed
▫ PRECISE update mechanism
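The "precise" update mechanism can be sketched like this (names and structure are assumptions for illustration): a node receives signed update tokens, adjusts its memory, and propagates a token downstream only when the memory actually changed, so redundant updates are absorbed rather than flooded through the network.

```python
# Sketch of signed update-token processing in a RETE node memory
# (illustrative; not VIATRA's actual classes).

class Memory:
    def __init__(self):
        self.matches = set()
        self.emitted = []  # tokens this node propagated downstream

    def receive(self, sign, match):
        if sign == "+" and match not in self.matches:
            self.matches.add(match)
            self.emitted.append(("+", match))
        elif sign == "-" and match in self.matches:
            self.matches.remove(match)
            self.emitted.append(("-", match))
        # duplicate insertions/removals change nothing -> no propagation

m = Memory()
m.receive("+", "t1")
m.receive("+", "t1")  # absorbed: already present, nothing propagates
m.receive("-", "t1")
print(m.emitted)      # [('+', 't1'), ('-', 't1')]
```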
16. Performance
• In theory…
▫ The building phase is slow ("warm-up"). How slow?
▫ Once the network is built, pattern matching is an "instantaneous" operation.
  Excluding the linear cost of reading the result set.
▫ But… there is a performance penalty on model manipulation. How much?
• Dependencies?
▫ Pattern size
▫ Matching set size
▫ Model size
▫ …?
17. Benchmarking
• Example transformation: Petri net simulation
▫ One complex pattern for the enabledness condition
▫ Two graph transformation rules for firing
▫ As-long-as-possible (ALAP) style execution ("fire at will")
▫ Model graphs:
  A "large" Petri net actually used in a research project (~60 places, ~70 transitions, ~300 arcs)
  Scaling up: automatic generation preserving liveness (up to 100,000 places, 100,000 transitions, 500,000 arcs)
• Analysis
▫ Measure execution time (averaged over multiple runs)
▫ Take "warm-up" runs into consideration
• Profiling
▫ Measure overhead and network construction time
▫ "Normalize" results
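A toy version of this benchmark workload can be sketched in a few lines (a deliberate simplification of the actual graph-transformation-based setup): fire enabled transitions as-long-as-possible, where a transition is enabled when every input place holds at least one token.

```python
# Toy ALAP Petri net simulation (illustrative; the real benchmark uses
# graph transformation rules and the RETE-cached enabledness pattern).

def enabled(marking, transition):
    """A transition is enabled if every input place has a token."""
    return all(marking[p] > 0 for p in transition["in"])

def fire(marking, transition):
    """Consume one token per input place, produce one per output place."""
    for p in transition["in"]:
        marking[p] -= 1
    for p in transition["out"]:
        marking[p] += 1

def run_alap(marking, transitions, max_steps=1000):
    """Fire enabled transitions as-long-as-possible; return the step count."""
    steps = 0
    while steps < max_steps:
        candidates = [t for t in transitions if enabled(marking, t)]
        if not candidates:
            break
        fire(marking, candidates[0])
        steps += 1
    return steps

# two places, one transition moving tokens from p1 to p2
marking = {"p1": 3, "p2": 0}
transitions = [{"in": ["p1"], "out": ["p2"]}]
print(run_alap(marking, transitions), marking)
```

In this naive version the enabledness check rescans every transition on every step; the point of the RETE-based matcher is precisely to replace that rescan with cached, incrementally maintained matches.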
18. Profiling results
• Model manipulation overhead: ~15% of overall CPU time
▫ Depends largely on the transformation!
• Memory overhead
▫ Petri nets (with RETE networks) of up to ~100,000 places fit into 1-1.5 GB RAM (VIATRA model space limitations)
▫ Grows linearly with model size (as expected)
▫ Nature of growth is pattern-dependent
• Network construction overhead
▫ Similar to memory: pattern-dependent.
▫ PN: in the same order as the initialization of VIATRA's LS heuristics.
19. Execution times
[Chart: sparse Petri net benchmark; execution time (ms, log scale, 10 to 1,000,000) vs. Petri net size (100 to 100,000) for Viatra/RETE, Viatra/LS and GrGen.NET at 1,000 and 1,000,000 iterations. Viatra/RETE matches or outperforms GrGen.NET for large models and high iteration counts, and beats Viatra/LS by three orders of magnitude, with the gap growing.]
20. Benchmarking summary
• Predictable near-linear growth
▫ As long as there is enough memory
▫ Certain problem classes: constant execution time ☺
21. Improving performance
• Strategies
▫ Improve the construction algorithm
  Memory efficiency (node sharing)
  Heuristics-driven constraint enumeration (based on pattern [and model space] content)
▫ Parallelism
  Update the RETE network in parallel with the transformation
  Parallel network construction
▫ ?
23. More benchmarking…
• Ongoing research
▫ Extending the Varró benchmark
  Mutex, STS/LTS, ORM
▫ Extended benchmarking use cases
  Simulation (model execution)
  Synchronization
  Constraint evaluation
▫ Parallel transformations
24. Event-driven live transformations
• Problem: MT is mostly batch-like
▫ But models are constantly evolving; frequent re-transformations are needed for
  mapping
  synchronization
  constraint checking
  …
• An incremental PM can solve the performance problem, but a formalism is needed
▫ to specify when to (re)act
▫ and how.
• Ideally, the formalism should be MT-like.
25. Event-driven live transformations (cont'd)
• An idea: represent events as model elements.
• Our take: represent events as changes in the matching set of a pattern.
▫ ~a generalization
• Live transformations
▫ maintain the context (variable values, global variables, …);
▫ run as a "daemon", reacting whenever necessary;
▫ as the models change, the system can react instantly, since everything needed is there in the RETE network: no re-computation is necessary.
• Paper accepted at ICMT 2008.
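The daemon-like reaction model above can be sketched roughly as follows (a hypothetical API, not the actual VIATRA live transformation formalism): live rules subscribe to changes in a pattern's matching set, and an appearing match triggers the registered reactions immediately, with no recomputation.

```python
# Sketch of event-driven live transformation triggering (illustrative;
# class and method names are invented for this example).

class LiveEngine:
    def __init__(self):
        self.matches = set()   # the maintained matching set (as in RETE)
        self.on_appear = []    # registered live-transformation rules

    def subscribe(self, rule):
        """Register a rule to run whenever a new match appears."""
        self.on_appear.append(rule)

    def match_appeared(self, match):
        """Called by the matcher when the matching set gains an element."""
        if match not in self.matches:
            self.matches.add(match)
            for rule in self.on_appear:
                rule(match)    # react instantly, daemon-style

log = []
engine = LiveEngine()
engine.subscribe(lambda m: log.append(f"fire {m}"))
engine.match_appeared("t3")    # model change -> appeared match -> reaction
print(log)                     # ['fire t3']
```

A full formalism would also handle disappearing matches and carry transformation context across reactions, as the slide notes.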
26. Summary
• Incremental pattern matching support integrated into VIATRA2 R3
▫ Based on the RETE algorithm
▫ Provides full support for the pattern language
▫ High performance in certain problem classes
• Future
▫ Performance will be further improved
▫ New applications in live transformations