This document discusses the constraint satisfaction problem over models (CSP(M)). A CSP(M) is described by an initial model, a set of global constraints, a set of goals, and a set of labeling rules. It presents an example of allocating jobs to partitions in an integrated modular avionics system. Constraints and goals are defined as graph patterns; labeling rules are defined as graph transformation rules. Solving a CSP(M) means applying labeling rules until a state satisfying all constraints and goals is reached, backtracking when needed. The document outlines an implementation over VIATRA2 and discusses optimizations and future work.
CSP(M): Constraint Satisfaction Problem over Models (a.k.a. rule-based design space exploration)
1. Budapest University of Technology and Economics
Fault-tolerant Systems Research Group
CSP(M): Constraint Satisfaction Problem over Models
Ákos Horváth and Dániel Varró
2. Outline: Introduction, CSP(M), Solving CSP(M), Conclusion
3. Eight Queens Problem
Place 8 queens on a chessboard without captures
5. Solving CSP: Labeling
Place first queen: A8 = 1
6. Solving CSP: Constraint Propagation
(board diagram) Deduce consequences: A7 = 0
7. Solving CSP: Labeling
(board diagram) Place next queen: D6 = 1
8. Solving CSP: Constraint Propagation
(board diagram) Deduce consequence: B6 = 0
9. Solving CSP: Labeling + Propagation
(board diagram) Cannot place a queen on the E-file; backtracking to the last decision
10. Solving CSP: Backtracking
(board diagram)
11. Solving CSP: Labeling + Propagation
(board diagram)
12. Solving CSP: Labeling + Propagation
(board diagram) A smarter solver can see that this queen is in the wrong place: backjumping to a preceding state
13. Solving CSP: Backjumping
(board diagram) Continues with labeling…
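The labeling, constraint-propagation, and backtracking cycle walked through on the preceding slides can be sketched as a minimal eight-queens solver. This is a plain illustrative sketch, not the tooling from the talk; labeling tries a row for the current column, propagation prunes attacked squares from later columns, and backtracking undoes the pruning when a dead end appears.

```python
# Minimal 8-queens solver illustrating labeling (place a queen),
# constraint propagation (prune attacked squares), and backtracking.

N = 8

def propagate(domains, col, row):
    """Remove squares attacked by a queen at (col, row) from later columns."""
    pruned = {}
    for c in range(col + 1, N):
        for r in list(domains[c]):
            if r == row or abs(r - row) == abs(c - col):
                domains[c].discard(r)
                pruned.setdefault(c, set()).add(r)
        if not domains[c]:            # a later column has no value left: dead end
            return pruned, False
    return pruned, True

def solve(domains=None, col=0, placed=()):
    if domains is None:
        domains = [set(range(N)) for _ in range(N)]
    if col == N:
        return placed                 # all queens placed: solution found
    for row in sorted(domains[col]):  # labeling: try a value for this column
        pruned, ok = propagate(domains, col, row)
        if ok:
            result = solve(domains, col + 1, placed + (row,))
            if result is not None:
                return result
        for c, rows in pruned.items():  # backtracking: undo this row's pruning
            domains[c] |= rows
    return None

solution = solve()
```

The solver returns one solution as a tuple of row indices, one per column; restoring the pruned domains on backtracking is the undo step the slides depict as returning to the last decision.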
14. Extensions: Dynamic variables
(board diagram: new column variables I2, I3, I4, I5, I6, each with domain {0,1}) Introducing new variables while solving
15. Extensions: Complex labeling
(board diagram) How many queens can you place without captures?
16. Extensions: Complex labeling
(board diagram)
17. Extensions: Complex labeling
(board diagram) Placing a new queen invalidates the effects of previous constraint propagation
18. Challenges for CSP over Models
− Dynamic variables
− Dynamic constraint management
− Native representation for (graph) models
19. Outline: Introduction, CSP(M), Solving CSP(M), Conclusion
20. CSP(M)
Described by (M0, C, G, L):
− M0: initial model (typed graph)
− C: set of global constraints (graph patterns)
− G: set of goals (graph patterns)
− L: set of labeling rules (GT rules)
Goal: find a model Ms which satisfies all global constraints and goals
● One model ● All models ● Optimal model
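The four-tuple (M0, C, G, L) can be mirrored in a small data structure. The names below are ad hoc illustrations, not the VIATRA2/CSP(M) implementation; a graph pattern is abstracted here as a predicate over a model.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative containers for a CSP(M) problem (M0, C, G, L).
Model = dict                          # a typed graph, abstracted as a dict
Pattern = Callable[[Model], bool]     # a graph pattern, abstracted as a predicate

@dataclass
class LabelingRule:
    name: str
    precondition: Pattern              # applicability: precondition matches
    apply: Callable[[Model], Model]    # GT rule action: produces the next model
    priority: int = 0                  # precedence relation between rules

@dataclass
class CSPM:
    initial_model: Model                                              # M0
    global_constraints: List[Pattern] = field(default_factory=list)   # C
    goals: List[Pattern] = field(default_factory=list)                # G
    labeling_rules: List[LabelingRule] = field(default_factory=list)  # L

    def is_solution(self, model: Model) -> bool:
        """Ms is a solution if it satisfies all global constraints and goals."""
        return (all(c(model) for c in self.global_constraints)
                and all(g(model) for g in self.goals))
```

With such a structure, asking for one, all, or the optimal solution is a choice made by the search procedure, not by the problem description itself.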
21. Running Example: Integrated Modular Avionics (IMA) System
Composed of:
− Jobs: Simple Job, Critical Job
− Partitions: composed of jobs
− Modules: host partitions
− Cabinets: store modules (max 2)
Task: allocate the predefined Jobs to the predefined Partitions using a minimal number of Modules
22. Running Example: Constraints
− Each Partition has a single criticality level
− A Critical Job's redundant instances must be on different Partitions and Modules
− The free memory of a Partition cannot be less than zero (attribute constraint)
23. CSP(M): Goal and Global Constraint
Graph pattern; satisfied when:
− Negative: no matching
− Positive: at least one matching
− Cardinality: |matchings| = cardinality
(pattern diagrams: criticalInstanceonSameModule(Job) as a Global Constraint, partitionwithoutModule(P) as a Goal)
24. Budapest University of Technology and Economics
Fault-tolerant Systems Research Group
No Critical Job instance pair on the same Module
No Partition without Module
25.
CSP(M): Labeling Rule by GT
GT rule
Applicability
− precondition matches the model
Priority
− precedence relation
Execution mode
− Choose (one random match)
− Forall (all matches)
Dynamic models
− element creation/deletion
Labeling Rule: allocatePartition(P)
− adds a new hosting edge p1: partitions {NEW} from M1: Module to P: Partition
− NEG: no M2: Module with an existing p2: partitions edge to P
createModule()
− creates M: Module {NEW}
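A labeling rule pairs a precondition pattern with a model manipulation. Below is a toy sketch of `allocatePartition` and `createModule` over a dictionary-encoded model; the rule names follow the slide, but the model encoding is an assumption made for illustration.

```python
def allocate_partition(model: dict, partition: str) -> list:
    """allocatePartition(P): for each module, produce a successor model in
    which P is allocated to that module, unless P is already allocated
    (the NEG condition of the rule)."""
    already = any(partition in ps for ps in model["allocation"].values())
    if already:  # NEG: some module already has a partitions edge to P
        return []
    successors = []
    for module in model["modules"]:
        # Deep-enough copy so each successor is an independent model state.
        succ = {"modules": list(model["modules"]),
                "allocation": {m: list(ps) for m, ps in model["allocation"].items()}}
        succ["allocation"].setdefault(module, []).append(partition)  # {NEW} edge
        successors.append(succ)
    return successors

def create_module(model: dict) -> dict:
    """createModule(): successor model with one fresh module ({NEW} node)."""
    name = f"M{len(model['modules']) + 1}"
    return {"modules": model["modules"] + [name],
            "allocation": dict(model["allocation"])}
```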
26.
Outline
− Introduction
− CSP(M)
− Solving CSP(M)
− Conclusion
27.
Solving CSP(M)
Current State
28.
Solving CSP(M)
allocatePartition
Next state
Transition
New Elements
29.
Solving CSP(M)
allocatePartition
Solution: satisfies all goals and global constraints
30.
Solving CSP(M)
allocatePartition
createModule
allocateModule
Goals not satisfied
Global Constraint violated → backtrack
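The traversal shown in these slides (apply a labeling rule, check the global constraints, backtrack on violation or dead end) can be sketched as a depth-first search. The interfaces are illustrative: states are assumed hashable, rules map a state to successor states, and constraints/goals are predicates.

```python
def solve(state, rules, constraints, goals, seen=None):
    """Depth-first CSP(M) solving: global constraints must hold in every
    traversed state, goals only in the solution model; a violated state or
    exhausted rule set causes backtracking (returning None to the caller)."""
    if seen is None:
        seen = set()
    if not all(c(state) for c in constraints):
        return None                   # global constraint violated: backtrack
    if state in seen:
        return None                   # state already visited
    seen.add(state)
    if all(g(state) for g in goals):
        return state                  # solution: constraints and goals hold
    for rule in rules:                # labeling rules define the transitions
        for succ in rule(state):
            found = solve(succ, rules, constraints, goals, seen)
            if found is not None:
                return found
    return None                       # dead end: backtrack
```

A toy instance: counting up to a target while a constraint caps the count mirrors "create modules until every partition is hosted, but never more than allowed".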
31.
Implementation over VIATRA2
Incremental constraint evaluation by
incremental pattern matching
− Cached matchings
− Incrementally updated
Simple state space representation
Typed graph comparison
− DSMDiFF
Backtracking
− Transaction on atomic manipulation operations
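Incremental pattern matching keeps the match set of each pattern cached and updates it on every elementary model manipulation, so constraint evaluation becomes a lookup. A toy RETE-style sketch for the `partitionwithoutModule` pattern; the class and method names are invented for illustration.

```python
class PartitionWithoutModuleCache:
    """Incrementally maintained match set for partitionwithoutModule(P):
    the partitions that currently have no hosting module."""

    def __init__(self):
        self.unhosted = set()        # cached matches of the pattern

    def add_partition(self, p):
        self.unhosted.add(p)         # a new partition matches until allocated

    def allocate(self, p, module):
        self.unhosted.discard(p)     # adding the hosting edge removes the match

    def deallocate(self, p):
        self.unhosted.add(p)         # undoing (backtracking) restores the match

    def matches(self):
        return self.unhosted         # constraint check is now a cheap lookup
```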
32.
Search Strategies
− Simple Backtracking
− Random Backjumping
− Guided traversal by Petri net abstraction
Constraint optimization
− Look-ahead patterns
− Exception priority
Evaluation
− On an average computer (Core Duo 1.8 GHz, 2 GB of memory)
− Common industrial problem: 51 jobs, 7 partitions, and 4 cabinets
● On average, first solution in ~120 s
Optimizations
35.
Restriction on the number of rule applications
36.
Same Global Constraint fails repeatedly → merge the Global Constraint into the Labeling rule precondition
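Merging a frequently failing global constraint into a rule's precondition prunes violating successors before they are ever created, instead of generating them and backtracking afterwards. A minimal sketch, with illustrative interfaces (rules as state-to-successors functions):

```python
def strengthen(rule, constraint):
    """Return a labeling rule whose precondition is strengthened with
    `constraint`: it only yields successors that already satisfy the
    (formerly global) constraint."""
    def guarded(state):
        return [succ for succ in rule(state) if constraint(succ)]
    return guarded
```

With the guard in place, the search never enters a branch the constraint would immediately reject, which is the effect the look-ahead optimization aims for.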
38.
Outline
− Introduction
− CSP(M)
− Solving CSP(M)
− Conclusion
39.
Conclusion
Summary
− General definition of constraint problems over models
● Labeling rules by GT rules
● Goals and constraints by GT patterns
● Dynamic variables
− Implementation over VIATRA2
● Constraint propagation using incremental pattern matching
● Dynamically add/remove constraints and labeling rules
Future work
− Compact state space representation
● Model differentials
● Symbolic state representation
● State comparison
− Automatic look-ahead pattern detection (critical pair
analysis)
− Comparison with Alloy and Korat
Editor's Notes
Global constraints must hold in every state of the traversed state space.
Goals need to be satisfied in the solution model.
Labeling rules define the valid operations used to reach a solution model.
An integrated modular avionics (IMA) system is composed of Jobs (also referred to as applications), Partitions, Modules, and Cabinets.
Jobs are the atomic software blocks of the system, defined by their memory requirement. Based on their criticality level, jobs fall into two sets: critical and simple (non-critical). Critical jobs use double or triple modular redundancy, while simple jobs have only one instance.
Partitions are complex software components composed of jobs, with a predefined free memory space. Jobs can be allocated to a partition as long as they fit into its memory space. Modules are SW components capable of hosting partitions. Finally, Cabinets are storages for up to two modules, used to physically distribute elements of the system.
Additionally, a number of safety-related requirements must be satisfied: (i) a partition can only host jobs of one criticality level, and (ii) instances of a given critical job cannot be allocated to the same partition or module. The task is to allocate an IMA system, defined by its jobs and partitions, over a predefined cabinet structure while minimizing the number of modules used.
Constraint evaluation: as pattern matches are cached, evaluating constraints and the preconditions of labeling rules reduces to a simple check. This way, the solver has an incrementally maintained, up-to-date view of its constraint store and enabled labeling rules.
Exception priority: restrict rule application
For introducing GT related notation, I chose a
Graph transformation requires a metamodel (or a type graph) that defines the abstract syntax of our modeling domain.
Multiplicity declares the number of objects that, at run time, may participate in an association.
An instance model (or an instance graph) is also needed, which describes a concrete system from our domain.
To traverse the search space of a constraint program introduced in Sec. 3.2, we define the solver as a virtual machine that maintains a 4-tuple (CG, CS, AM, LS) as its state. CG is the current goal; CS is the constraint store; AM is the actual model; and LS is the labeling store. The (i) current goal stores the subgoals that still need to be satisfied; the (ii) constraint store holds all constraints the solver has satisfied so far; the (iii) actual model represents the underlying model; and the (iv) labeling store contains all enabled labeling rules. An element of the labeling store is a pair (l, m), where l is a labeling rule and m is a valid match of its precondition LHS_l in AM; formally m : LHS_l → AM.
Initially, the CG, CS and LS are
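The solver virtual-machine state (CG, CS, AM, LS) described in the notes above can be sketched as a plain record. The class and field names are invented for illustration; only the four components and the (l, m) pairing of the labeling store come from the text.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class SolverState:
    """Solver VM state: the 4-tuple (CG, CS, AM, LS)."""
    current_goal: List[Any] = field(default_factory=list)       # CG: open subgoals
    constraint_store: List[Any] = field(default_factory=list)   # CS: satisfied so far
    actual_model: Any = None                                    # AM: the model
    labeling_store: List[Tuple[Any, Any]] = field(default_factory=list)
    # LS: pairs (l, m) — labeling rule l with a valid match m of LHS_l in AM
```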