LoLA is an explicit-state model checker for Petri nets that focuses on standard properties and uses reduction techniques such as stubborn sets, symmetries, and the sweep-line method to analyze large state spaces efficiently. It accepts Petri nets as place/transition nets or high-level algebraic nets and lets users specify verification tasks involving properties such as boundedness, reachability, and temporal-logic formulas. LoLA is open source and has been used in several case studies, with experimental results tables exploring the impact of its basic design decisions.
Platform-independent static binary code analysis using a meta-assembly language, by zynamics GmbH
This document describes a platform-independent static code analysis framework called MonoREIL that uses abstract interpretation. It introduces an intermediate representation called REIL that can be extracted from disassembled binary code. MonoREIL implements abstract interpretation over REIL code to perform analyses like register tracking and detecting negative array indexing without dependence on platform or source code. The framework is intended for offensive security applications such as analyzing binaries for which source code is unavailable.
This document discusses return-oriented programming (ROP) techniques for exploiting systems with non-executable memory pages. It provides an overview of ROP, describes algorithms for automatically finding "gadgets" (snippets of code) within a binary that can be chained together to perform tasks, and introduces a ROP compiler called The Wolf that helps chain gadgets while accounting for side effects. The goal is to execute attacker-controlled code on systems with protections like code signing and sandboxing enabled.
- Lexical analyzer reads source program character by character to produce tokens. It returns tokens to the parser one by one as requested.
- A token represents a set of strings defined by a pattern and has a type and attribute to uniquely identify a lexeme. Regular expressions are used to specify patterns for tokens.
- A finite automaton can be used as a lexical analyzer to recognize tokens. Non-deterministic finite automata (NFA) and deterministic finite automata (DFA) are commonly used, with DFA being more efficient for implementation. Regular expressions for tokens are first converted to NFA and then to DFA.
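The first two points can be sketched as a minimal regex-driven tokenizer in Python (the token types and patterns here are illustrative assumptions, not from the document):

```python
import re

# Hypothetical token specification: each token type is defined by a regular
# expression pattern, tried in order at the current position.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),       # whitespace is discarded, not returned
]

def tokenize(source):
    """Yield (type, lexeme) pairs one at a time, as a parser would request them."""
    pos = 0
    while pos < len(source):
        for tok_type, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if tok_type != "SKIP":
                    yield (tok_type, m.group())
                pos += m.end()
                break
        else:
            raise SyntaxError(f"unexpected character {source[pos]!r}")

tokens = list(tokenize("x = 42 + y"))
```

Each yielded pair carries the token's type and its attribute (the lexeme), matching the description above.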
The document provides an overview of different levels of computer languages from high-level languages down to machine language, explaining how high-level languages are compiled into assembly language then machine code using an assembler. It describes the structure and components of assembly language instructions, including operation codes, operands, registers, flags, and different types of instructions. Examples of assembly language instructions are also provided to illustrate arithmetic operations, jumps, flags, and registers.
The document discusses constructing a DFA from a regular expression and NFA. It provides an algorithm for the subset construction which works by treating each DFA state as a set of NFA states. Transitions are determined by taking the epsilon closure of the NFA states reachable on the input symbol from the current set of states. An example applies the algorithm to construct the DFA for the regular expression (a|b)*abb from its NFA.
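The subset construction described above can be sketched in Python (the tiny two-transition NFA at the end is an invented example, much smaller than the one in the document):

```python
from collections import deque

def epsilon_closure(states, eps):
    """All NFA states reachable from `states` using epsilon moves alone."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def subset_construction(nfa_delta, eps, start, alphabet):
    """Build DFA transitions; each DFA state is a frozenset of NFA states."""
    d_start = epsilon_closure({start}, eps)
    dfa, work = {}, deque([d_start])
    while work:
        S = work.popleft()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            moved = set()
            for s in S:
                moved |= nfa_delta.get((s, a), set())
            T = epsilon_closure(moved, eps)   # closure of states reachable on `a`
            dfa[S][a] = T
            if T not in dfa:
                work.append(T)
    return d_start, dfa

# Invented example NFA: 0 --a--> {0, 1}, 1 --b--> {2}, no epsilon moves.
start, dfa = subset_construction({(0, "a"): {0, 1}, (1, "b"): {2}}, {}, 0, ["a", "b"])
```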
This document discusses lexical analysis in compilers. It begins with an outline of the topics to be covered, including lexical analysis, regular expressions, finite state automata, and the process of converting regular expressions to deterministic finite automata (DFAs). It then provides more details on each phase of a compiler and the role of lexical analysis. Key aspects of lexical analysis like tokenizing source code and classifying tokens are explained. The document also covers implementation of regular expressions using non-deterministic finite automata (NFAs) and their conversion to equivalent DFAs using techniques like epsilon-closure and transition tables.
The document discusses code generation which involves mapping intermediate code to machine code. It describes three key issues in code generator design: instruction selection which determines the best machine instructions to use, register allocation which assigns variables to registers, and evaluation order which determines the order of instructions. The document outlines three algorithms for code generation that involve partitioning code into basic blocks, performing intra-block optimizations, and code selection and assignment.
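The first of the steps listed, partitioning code into basic blocks, can be sketched in Python (the three-address representation used here is a simplifying assumption):

```python
def basic_blocks(code):
    """Partition a list of three-address instructions into basic blocks.

    `code` is a list of (label, op, target) tuples: `label` names the
    instruction, `op` is 'goto', 'if-goto', or any ordinary operation, and
    `target` is the label jumped to (or None). Leaders are: the first
    instruction, any jump target, and any instruction following a jump.
    """
    labels = {label: i for i, (label, _, _) in enumerate(code)}
    leaders = {0}
    for i, (_, op, target) in enumerate(code):
        if op in ("goto", "if-goto"):
            leaders.add(labels[target])          # a jump target starts a block
            if i + 1 < len(code):
                leaders.add(i + 1)               # so does the instruction after a jump
    cuts = sorted(leaders) + [len(code)]
    return [code[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]

blocks = basic_blocks([
    ("L0", "assign", None),
    ("L1", "if-goto", "L3"),
    ("L2", "assign", None),
    ("L3", "assign", None),
])
```

Intra-block optimizations then run within each block, and code selection maps each block to machine instructions.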
Notation, Regular Expressions in Lexical Specification, Error Handling, Finite Automata State Graphs, Epsilon Moves, Deterministic and Non-Deterministic Automata, Table Implementation of a DFA
The document discusses assembly language instructions and concepts. It covers:
- Basic instruction formats and operand types like registers, memory, and immediates.
- Data transfer instructions like MOV, and addressing modes like direct offset addressing of arrays.
- Arithmetic instructions like ADD, SUB, INC, DEC, and NEG, and how they affect status flags.
- Status flags like zero, carry, sign, auxiliary carry, parity, and overflow flags which provide information about instruction results.
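As a sketch of how arithmetic results map to status flags, here is a Python simulation of an 8-bit ADD (an illustration only; the auxiliary-carry flag mentioned above is omitted for brevity):

```python
def add8(a, b):
    """Simulate an 8-bit ADD and the status flags it sets."""
    full = a + b
    result = full & 0xFF
    flags = {
        "ZF": result == 0,                                   # zero flag
        "SF": bool(result & 0x80),                           # sign flag (bit 7)
        "CF": full > 0xFF,                                   # unsigned carry out
        # signed overflow: both operands disagree in sign with the result
        "OF": ((a ^ result) & (b ^ result) & 0x80) != 0,
        "PF": bin(result & 0xFF).count("1") % 2 == 0,        # even parity of low byte
    }
    return result, flags
```

For example, adding 0x7F and 0x01 produces 0x80 with the overflow and sign flags set, since two positive signed operands yielded a negative signed result.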
This document discusses runtime environments and storage allocation strategies. It covers:
- How procedure activations are represented at runtime using activation records, control stacks, and activation trees. Activation records store local variables, parameters, return values, and more.
- Different strategies for allocating storage at runtime, including static allocation where sizes are known at compile time, stack allocation for procedure activations and recursion, and heap allocation for dynamic memory.
- How names are bound to values at compile time through environments and at runtime through states. The scope and lifetime of bindings are also discussed.
- Issues related to mapping names to storage locations and values at runtime, including how assignments change the state but not the environment.
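The activation-record and control-stack model described above can be sketched in Python (class and field names are illustrative assumptions):

```python
class ActivationRecord:
    """One record per procedure activation, holding parameters,
    local variables, and the return value."""
    def __init__(self, proc, params):
        self.proc = proc
        self.params = params
        self.locals = {}
        self.return_value = None

control_stack = []  # grows on call, shrinks on return

def call_factorial(n):
    frame = ActivationRecord("factorial", {"n": n})
    control_stack.append(frame)              # push a record on activation
    frame.return_value = 1 if n <= 1 else n * call_factorial(n - 1)
    control_stack.pop()                      # pop it on return
    return frame.return_value
```

Recursion works precisely because each activation gets its own record on the stack; after the outermost call returns, the control stack is empty again.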
This document describes the basic functions of an assembler including translating mnemonic operation codes to machine language equivalents, assigning addresses to symbolic labels, and building properly formatted machine instructions. It provides examples of assembly language code and discusses machine-dependent features like instruction formats and addressing modes as well as machine-independent features such as literals, symbol definitions, and program structure.
Assembly Language Programming by Ytha Yu and Charles Marut, Chap 6 (Flow Control ...), by Bilal Amjad
The document discusses various high-level programming constructs like IF-THEN-ELSE, WHILE loops, FOR loops, and CASE statements and how they can be implemented using assembly language instructions. Conditional jumps, unconditional jumps, flags, and other instructions like LOOP, CMP, and JCXZ are used to emulate the flow control and conditional behavior of these high-level constructs. Examples are provided to demonstrate how to write assembly code equivalents for high-level statements like checking if a character is a capital letter, counting characters in a line, and displaying patterns based on conditions.
The document discusses code optimization and code generation in compilers. It covers the position of a code generator in the compiler model, code generation, the target machine architecture, instruction selection, register allocation, basic blocks, control flow graphs, common subexpression elimination, dead code elimination, and next-use information. The target machine has registers, instructions with opcodes and addressing modes, and a simple cost model. Code optimization aims to efficiently map source code to the target instruction set architecture.
The document discusses run-time environments and how compilers support program execution through run-time environments. It covers:
1) The compiler cooperates with the OS and system software through a run-time environment to implement language abstractions during execution.
2) The run-time environment handles storage layout/allocation, variable access, procedure linkage, parameter passing and interfacing with the OS.
3) Memory is typically divided into code, static storage, heap and stack areas, with the stack and heap growing towards opposite ends of memory dynamically during execution.
This document provides an overview of Triton, a concolic execution framework built as a Pin tool. Triton allows for dynamic binary analysis using symbolic execution, taint analysis, and snapshot capabilities. It represents instructions and their semantics using SMT2-LIB format and interfaces with SMT solvers like Z3. Triton guides symbolic execution with information from its taint analysis engine. It also provides a snapshot engine to replay execution traces in memory. The document discusses Triton's internal components and how users can build analysis tools with Triton by installing dependencies and running Pin scripts.
The document discusses assembly language programming concepts including the stack segment, stack, stack instructions, subroutines, macros, and recursive procedures. It provides examples and explanations of these concepts. It also includes sample programs and solutions related to stacks, subroutines, and other assembly language topics.
Scala is a programming language that runs on the JVM and fuses functional and object-oriented paradigms. It aims to provide functional programming for programmers with an imperative mindset. Key features include functions as first-class values, pattern matching, traits for composition, and seamless interoperability with Java. While some features appear to be language features, many are actually implemented via libraries. The Scala community is growing with adoption by companies and increasing support in tools and publications.
Syntax-Directed Translation: Syntax-Directed Definitions, Evaluation Orders for SDD's, Applications of Syntax-Directed Translation, Syntax-Directed Translation Schemes, and Implementing L-Attributed SDD's. Intermediate-Code Generation: Variants of Syntax Trees, Three-Address Code, Types and Declarations, Type Checking, Control Flow, Back patching, Switch-Statements
This document provides an introduction to lexical analysis and the Lex tool. It discusses the phases of a compiler including lexical analysis. The objectives are to understand lexical analysis, the role of lexical analysis, and input buffering. It then reviews concepts like compilers, scanners, parsers, and grammars. The remainder of the document discusses the architecture of a lexical analyzer, the Lex tool, structure of Lex programs, finite automata, deterministic and nondeterministic automata, conversion of regular expressions to finite automata, and provides example code. Homework questions are also provided at the end.
This document provides an overview of intermediate 8086 assembly language programming. It discusses machine code and assembly language, compilers and assemblers, general purpose registers, simple commands, number formats, jumps, labels, logical operations, instructions that affect memory, changing addresses, examples using memory, the instruction pointer, the stack, and PUSH and POP instructions. The document is intended to teach basic concepts of 8086 assembly language programming.
This document discusses behavioral Verilog coding constructs including always and initial blocks, blocking vs non-blocking assignments, and if-else and case statements. It provides examples of coding flip-flops and sequential logic using these constructs. It also discusses simulator mechanics and the stratified event queue model of simulation.
The document discusses the ALGOL family of programming languages, including their history and goals. It describes the three versions of ALGOL: ALGOL 58, which introduced features like types and control structures; ALGOL 60, which refined the syntax and added block structures and recursion; and ALGOL 68, which had a more advanced type system but was difficult to adopt. While ALGOL did not achieve wide commercial success, it was influential in establishing concepts like block structures, recursion, and formal syntax specification that became standard in later languages.
The document defines common CPU terms like CPU, binary code, ALU, and program counter. It describes the four basic steps a CPU takes to process data: fetch, decode, execute, and store. The CPU fetches instructions from memory using the program counter, decodes assembly code into binary, executes instructions by performing calculations with the ALU or moving data, and stores output data back in memory. The fetch-execute cycle has an instruction time to decode instructions and an execution time to run them and store processed data.
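The fetch-decode-execute cycle can be illustrated with a toy interpreter in Python (the three-field instruction format and opcodes here are invented for illustration, not a real ISA):

```python
def run(program, memory):
    """Tiny fetch-decode-execute loop over a made-up instruction set."""
    acc = 0            # accumulator (stands in for the ALU's working register)
    pc = 0             # program counter
    while pc < len(program):
        opcode, a, b = program[pc]           # fetch (and trivially decode)
        pc += 1
        if opcode == "LOAD":                 # acc <- memory[a]
            acc = memory[a]
        elif opcode == "ADD":                # acc <- acc + memory[a]  (the ALU step)
            acc += memory[a]
        elif opcode == "STORE":              # memory[a] <- acc  (store the result)
            memory[a] = acc
        elif opcode == "JMPZ" and acc == 0:  # conditional jump: pc <- a
            pc = a
    return memory

memory = run([("LOAD", 0, None), ("ADD", 1, None), ("STORE", 2, None)],
             {0: 2, 1: 3, 2: 0})
```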
Dynamic Binary Analysis and Obfuscated Codes, by Jonathan Salwan
In this presentation we discuss how dynamic binary analysis (DBA) can help a reverse engineer analyze obfuscated code. We first introduce some basic obfuscation techniques and then show how, using our open-source DBA framework Triton, it is possible to defeat some of them: detecting opaque predicates, reconstructing the CFG, recovering the original algorithm, isolating sensitive data, and more. We conclude with a demo and a few words about future work.
LoLA is an open source tool for verifying properties of Petri nets through explicit state space generation. It features many state space reduction techniques and can verify standard properties such as boundedness and reachability as well as LTL/CTL formulas. The document presents experimental results tables and discusses basic design decisions such as having no GUI and generating a dedicated state space for each property. LoLA has been under development since 1998 and aims to help users verify realistic models efficiently.
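LoLA's core idea, explicitly exploring the set of reachable markings, can be sketched in Python (this toy breadth-first search is my own illustration and omits all of LoLA's reduction techniques):

```python
from collections import deque

def reachable_markings(places, transitions, initial):
    """Explicit-state reachability for a place/transition net.

    `transitions` maps a name to a (consume, produce) pair of dicts over
    place names; markings are tuples of token counts, one slot per place."""
    index = {p: i for i, p in enumerate(places)}
    seen = {initial}
    work = deque([initial])
    while work:
        m = work.popleft()
        for consume, produce in transitions.values():
            if all(m[index[p]] >= k for p, k in consume.items()):   # enabled?
                m2 = list(m)
                for p, k in consume.items():
                    m2[index[p]] -= k                               # fire: remove tokens
                for p, k in produce.items():
                    m2[index[p]] += k                               # fire: add tokens
                m2 = tuple(m2)
                if m2 not in seen:
                    seen.add(m2)
                    work.append(m2)
    return seen

# Invented two-place net: t1 moves the token from p1 to p2.
seen = reachable_markings(["p1", "p2"], {"t1": ({"p1": 1}, {"p2": 1})}, (1, 0))
```

Properties like boundedness or reachability are then checked against this set; LoLA's reduction techniques exist precisely because `seen` explodes for realistic nets.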
The document describes the input language for the LoLA model checker. It allows specifying Petri nets and verification tasks in a high-level algebraic style. Key elements include:
1. Defining sorts, operations, and their interpretations to specify the types and functions used.
2. Declaring high-level places and markings as terms over sorts to represent multiple low-level places and tokens.
3. Specifying high-level transitions as procedures with guards and input/output terms to represent multiple low-level transitions.
4. Providing verification tasks as logical formulas involving state predicates to check properties over the unfolded net.
Go is a general purpose programming language created by Google. It is statically typed, compiled, garbage collected, and memory safe. Go has good support for concurrency with goroutines and channels. It has a large standard library and integrates well with C. Some key differences compared to other languages are its performance, explicit concurrency model, and lack of classes. Common data types in Go include arrays, slices, maps, structs and interfaces.
Incremental pattern matching in the VIATRA2 model transformation system, by Istvan Rath
Incremental pattern matching allows model transformations to update target models incrementally based on changes to the source model. The VIATRA model transformation system implements incremental pattern matching using a RETE network to efficiently retrieve matching sets as models change. Benchmark results show near-linear performance for sparse models and constant execution time for certain patterns. Future work includes improving construction algorithms and enabling event-driven live transformations.
John Backus proposed moving away from von Neumann programming styles towards functional programming in his 1978 paper. He argued that functional programming using applicative models without state could improve program clarity and reasoning about software. While functional programming ideas did not become mainstream, some concepts like higher-order functions are now common. Functional programming may also help with concurrency, but open questions remain around evaluation models and representing imperative computations.
New compiler design 101 April 13 2024.pdf, by eliasabdi2024
This document provides an overview of syntax analysis, also known as parsing. It discusses the functions and responsibilities of a parser, context-free grammars, concepts and terminology related to grammars, writing and designing grammars, resolving grammar problems, top-down and bottom-up parsing approaches, typical parser errors and recovery strategies. The document also reviews lexical analysis and context-free grammars as they relate to parsing during compilation.
The document discusses different types of parsing including:
1) Top-down parsing which starts at the root node and builds the parse tree recursively, requiring backtracking for ambiguous grammars.
2) Bottom-up parsing which starts at the leaf nodes and applies grammar rules in reverse to reach the start symbol using shift-reduce parsing.
3) LL(1) and LR parsing which are predictive parsing techniques using parsing tables constructed from FIRST and FOLLOW sets to avoid backtracking.
This document discusses modern Fortran programming languages including Fortran90/95/2003/2008. It provides an outline covering motivation for using modern languages, an overview of features in modern Fortran like free format, attributes, implicit none, loops, and arrays. It discusses how modern Fortran fixes flaws in older versions by allowing allocatable arrays and using structures and derived types.
Lexical analysis is the first phase of compilation where the character stream is converted to tokens. It must be fast. It separates concerns by having a scanner handle tokenization and a parser handle syntax trees. Regular expressions are used to specify patterns for tokens. A regular expression specification can be converted to a finite state automaton and then to a deterministic finite automaton to build a scanner that efficiently recognizes tokens.
The document discusses lexical analysis, which is the first stage of syntax analysis for programming languages. It covers terminology, using finite automata and regular expressions to describe tokens, and how lexical analyzers work. Lexical analyzers extract lexemes from source code and return tokens to the parser. They are often implemented using finite state machines generated from regular grammar descriptions of the lexical patterns in a language.
02 functions, variables, basic input and output of c++Manzoor ALam
This document discusses computer programming functions, variables, and basic input/output in C and C++. It covers:
- Defining and calling functions, function parameters and return values.
- Declaring and assigning values to variables of different data types like int, float, and double.
- Using basic input/output functions like cout and cin to display output and get user input.
- The scope of variables and how they work within and outside of functions.
The document discusses lexical analysis and lexical analyzer generators. It begins by explaining that lexical analysis separates a program into tokens, which simplifies parser design and implementation. It then covers topics like token attributes, patterns and lexemes, regular expressions for specifying patterns, converting regular expressions to nondeterministic finite automata (NFAs) and then deterministic finite automata (DFAs). The document provides examples and algorithms for these conversions to generate a lexical analyzer from token specifications.
The document provides information about regular expressions and finite automata. It discusses how regular expressions are used to describe programming language tokens. It explains how regular expressions map to languages and the basic operations used to build regular expressions like concatenation, alternation, and Kleene closure. The document also discusses deterministic finite automata (DFAs), non-deterministic finite automata (NFAs), and algorithms for converting regular expressions to NFAs and DFAs. It covers minimizing DFAs and using finite automata for lexical analysis in scanners.
- Lexical analyzer reads source program character by character to produce tokens. It returns tokens to the parser one by one as requested.
- A token represents a set of strings defined by a pattern and has a type and attribute to uniquely identify a lexeme. Regular expressions are used to specify patterns for tokens.
- A finite automaton can be used as a lexical analyzer to recognize tokens. Non-deterministic finite automata (NFA) and deterministic finite automata (DFA) are commonly used, with DFA being more efficient for implementation. Regular expressions for tokens are first converted to NFA and then to DFA.
This document discusses feature extraction techniques for time series data from power systems. It describes extracting features from raw time series data to improve model performance. Common time series features extracted by the tsfresh Python package are listed, including absolute energy, autocorrelation, entropy measures, Fourier transforms, and wavelet transforms. Discrete Fourier transforms express a time series as a sum of periodic components, while wavelet transforms find similarity between a signal and shifting/scaling functions.
This document outlines the syllabus for a course on data structures and algorithms using Java. It covers topics such as the role of algorithms and data structures, algorithm design techniques, types of data structures including primitive types, arrays, stacks, queues, linked lists, trees, graphs, and algorithm analysis. Specific algorithms and data structures discussed include sorting, searching, priority queues, stacks, queues, linked lists, trees, graphs, hashing, and complexity theory.
Presentation of GetTogether on Functional ProgrammingFilip De Sutter
This document discusses functional programming and provides examples in Haskell, F# and C#. It begins with an introduction to the author and their background in functional programming languages like Prolog, Haskell and F#. It then covers various functional programming concepts like recursion, pattern matching, avoiding side effects, strong typing and type inference. Examples are provided for sorting lists, calculating factorials and other common functional programming tasks. The document emphasizes the declarative nature of functional programming and compares approaches in Haskell, F# and C#.
This document provides an introduction and overview of the C programming language. It discusses the basic structure of a C program including preprocessor directives, global declarations, functions, and statements. It also covers fundamental C concepts such as variable declarations, data types, constants, comments, and input/output functions. The history and evolution of C from earlier languages like ALGOL and BCPL is presented.
This document provides an introduction and overview of the C programming language. It discusses the basic structure of a C program including preprocessor directives, global declarations, functions, and statements. It also covers fundamental C concepts such as variable declarations, data types, constants, comments, and input/output functions. The history and evolution of C from earlier languages like ALGOL and BCPL is presented.
Invited presentation given by Niels Lohmann on December 3, 2013 in Potsdam, Germany as invited lecture at the Business Process Compliance course at the Hasso-Plattner-Institute.
Where did I go wrong? Explaining errors in process modelsUniversität Rostock
Workshop presentation given by Niels Lohmann on February 20, 2014 in Potsdam, Germany at the Sixth Central-European Workshop on Services and their Composition (ZEUS 2014).
Conference presentation given by Niels Lohmann on December 6, 2011 in Paphos, Cyprus at the Ninth International Conference on Service-Oriented Computing (ICSOC 2011).
Workshop presentation given by Niels Lohmann on December 5, 2011 in Paphos, Cyprus at the 6th International Workshop on Engineering Service-Oriented Applications (WESOA'11).
Compliance by Design for Artifact-Centric Business ProcessesUniversität Rostock
This document discusses an approach called "compliance by design" for ensuring that artifact-centric business processes are compliant with regulations. It involves:
1) Specifying a business process model, artifacts, agents, locations and goals
2) Translating legal texts into compliance rules
3) Modeling the compliance rules and integrating them with the business process model
4) Using tools to generate a compliant business process model that satisfies both behavioral and compliance requirements.
This approach aims to avoid subsequent proofs of compliance by building compliance into the design from the start. It also allows flexibility to change compliance rules without needing to regenerate the entire process model.
The document describes various techniques for implementing a Petri net state space search:
1. It discusses how transitions are fired and states are evaluated by marking changed places and checking enabled transitions.
2. State predicates are stored in negation-free normal form to efficiently check state properties.
3. The state space is managed by representing states as bit vectors and organizing them in a decision tree for fast lookup and insertion.
4. Search organization involves firing transitions, finding/inserting states, and backtracking with a search stack and write-only memory approach.
This document discusses integrating the LoLA model checker as a web service for verifying Petri net properties. It lists soundness checks that LoLA can perform, including classical, weak, and relaxed soundness. It provides URLs for editing Petri nets in Oryx and calling the LoLA web service from the University of Rostock service technology site to verify properties by translating nets from PNML to LoLA format and running LoLA as a system call.
Niels Lohmann explores several case studies applying symbolic systems biology techniques:
1) Analyzing biochemical reaction chains using the tool LoLA for fast reachability queries.
2) Finding hazards in Globally Asynchronous Locally Synchronous (GALS) circuits design using Petri nets and partial order reduction.
3) Verifying service choreographies for deadlocks by translating models to open workflow nets and discovering a design flaw.
LoLA is a tool for verifying properties of Petri nets. This document discusses how to:
1. Choose and manage LoLA configurations to optimally verify properties.
2. Ask the right verification questions in a specific, modular way to efficiently verify properties.
3. Optimize Petri net modeling to take advantage of LoLA's reduction techniques and scale verification.
4. Employ scripts and makefiles to automate calling LoLA and analyzing results.
5. Integrate calling LoLA from other tools using UNIX streams for modular verification.
The document summarizes the stubborn set method for state space reduction in Petri nets. It explains that the method works by defining a stubborn set of transitions in each marking that can fire independently of transitions outside the set. This allows reducing the state space by only exploring firings within each stubborn set, while still preserving properties like deadlocks. The proof for deadlock preservation is also outlined.
The document discusses applying counterexample guided abstraction refinement (CEGAR) to verifying properties of Petri nets. It summarizes using the Petri net state equation to represent reachable markings as solutions to a system of linear equations. It then describes using CEGAR to iteratively check solutions and refine the abstraction by adding increments when solutions are found to be infeasible. The approach is implemented in a tool called Sara which shows better performance than other tools on verification problems involving large Petri nets and parameterized systems.
This document describes a joint research project between the University of Rostock's Computer Science and Electrical Engineering departments. The project aims to develop tools and formal methods for analyzing systems and synthesizing web services for resource-constrained devices. This will be done by applying the Devices Profile for Web Services (DPWS) standard, which allows using web service technology on embedded systems and sensor networks in a way that is compatible with existing enterprise web services. The goal is to enable web service capabilities on more intelligent devices that increasingly communicate with each other.
Workshop presentation given by Niels Lohmann on February 22, 2011 in Karlsruhe, Germany at the Third Central-European Workshop on Services and their Composition (ZEUS 2011).
This document compares Petri nets and state spaces for modeling and verification. It discusses that state spaces allow modeling global state changes over time, while Petri nets consider asynchronous components and causality of events. The document also describes techniques for efficient state space generation from Petri nets, such as checking enabled transitions with constant time, firing transitions with constant effort, backtracking transitions, and storing markings in a set. Reduction techniques like linear algebra, sweep-line methods, symmetries, and stubborn sets are also covered to reduce the state space.
Formale Fundierung und effizientere Implementierung der schrittbasierten TLDA...Universität Rostock
Presentation given by Niels Lohmann on September 23, 2005 in Berlin, Germany; Talk given at the diploma defense ceremony at Humboldt-Universität zu Berlin.
Tool demonstration given by Niels Lohmann on September 1, 2006 in Eindhoven, The Netherlands at the Berlin-Eindhoven Service Technology Colloquium 2006 (B.E.S.T. 2006).
service-technology.org — A tool family for correct business processes and ser...Universität Rostock
Tool demonstration given by Niels Lohmann on September 16, 2010 in Hoboken, NJ, USA at the Eighth International Conference on Business Process Management (BPM 2010).
Invited presentation given by Niels Lohmann on June 27, 2006 in Turku, Finland as part of the Advanced Tutorial on Petri Net Modelling of Business Processes; satellite event of the PETRI NETS 2006/ACSD 2006 conferences.
Workshop presentation given by Niels Lohmann on August 16, 2007 in Eindhoven, The Netherlands at the Berlin-Eindhoven Service Technology Colloquium 2007 (B.E.S.T. 2007).
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Physiology and chemistry of skin and pigmentation, hairs, scalp, lips and nail, Cleansing cream, Lotions, Face powders, Face packs, Lipsticks, Bath products, soaps and baby product,
Preparation and standardization of the following : Tonic, Bleaches, Dentifrices and Mouth washes & Tooth Pastes, Cosmetics for Nails.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
Verification with LoLA
1. Verification with LoLA
Niels Lohmann and Karsten Wolf
The Blue Angel (Germany, 1930)
Run Lola Run (Germany, 1998)
2. What is LoLA?
• Explicit state space generation
• Place/Transition nets
• Focus on standard properties
• Many reduction techniques, unique features
• Stream based interface
• Open source
3. Where does it come from?
• INA - Integrated Net Analyzer by Peter Starke
• grown over a long time
• state space and structural techniques
• several net classes
• suboptimal design decisions
• MODULA 2
• Papers needed tables with absolute run times
4. Purpose
• Generate competitive “experimental results” tables
• Explore impact of basic design decisions
• ... Ship as tool
5. Milestones
• 1998: 1st release
• 1998-2005: State space reduction techniques
• 2000: Presentation at Petri Nets
• 2005-: Case studies, integration
• 2007: Invited talk at Petri Nets
• since 2008: Implementation of software development processes
6. Basic Design Decisions
• No GUI
• Realistic nets are generated, not painted
• GUI blocks portability
• Many GUIs available, simple connection possible
• Do not want user interaction during verification
7. Basic Design Decisions
• One property, one state space
• as opposed to query languages on state spaces
• One property, one dedicated reduction
• Benefit from on-the-fly verification
• Generation faster than loading
8. Basic Design Decisions
• Configuration at compile time
• property class, search strategy, reductions
• #define instead of if()
• repeated runs in same configuration
9. Featured Properties
• Boundedness (place)
• Boundedness (net)
• Reachability (marking)
• Reachability (predicate)
• Deadlocks
• Death (transition)
• Liveness (predicate)
• Reversibility
• Home states
• LTL properties F φ, GF φ, FG φ (predicate)
• CTL (formula)
10. Featured Reductions
• Stubborn Sets
• unique: dedicated techniques for standard properties
• Symmetries
• unique: automated determination of symmetries in low level net
• Sweep-Line
• unique: automated calculation of a progress measure
• Reduction based on S/T invariants
• unique.
• Coverability graphs
• unique: combination with other reductions
11. Goal of Tutorial
• Can LoLA help you?
• Where (and why) does it perform well?
• How to (optimally) use it, to integrate it
12. Outline
• Introduction
• Motivation, background, history
• Preview and outline
• Basic notions
• First demo
• Input Language
• State Space Techniques
• Using LoLA
• Case Studies
• Integrating LoLA
• Implementation
13. Basic notions: net
• Net: [P,T,F,W,m0]
• P, T finite, nonempty, disjoint
• F ⊆ (P × T) ∪ (T × P)
• W: F → N+
• m0: P → N
• Firing
• t activated in m: (p,t) ∈ F implies m(p) ≥ W(p,t)
• firing, m [t> m’: m’(p) = m(p) - W(p,t) + W(t,p)
• State space:
• states: reachable markings
• edges: m [t> m’
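The firing rule and state-space definition above translate directly into code. A minimal Python sketch (a hypothetical toy net and helper names, not LoLA's implementation) that enumerates the reachable markings with a breadth-first search:

```python
from collections import deque

# A tiny illustrative P/T net: each transition has CONSUME (W(p,t)) and PRODUCE (W(t,p)) maps.
transitions = {
    "t1": {"consume": {"p1": 1}, "produce": {"p2": 1}},
    "t2": {"consume": {"p2": 1}, "produce": {"p3": 1}},
}
m0 = {"p1": 1, "p2": 0, "p3": 0}

def enabled(m, t):
    # t is activated in m iff m(p) >= W(p,t) for every input place p
    return all(m[p] >= w for p, w in transitions[t]["consume"].items())

def fire(m, t):
    # m'(p) = m(p) - W(p,t) + W(t,p)
    m2 = dict(m)
    for p, w in transitions[t]["consume"].items():
        m2[p] -= w
    for p, w in transitions[t]["produce"].items():
        m2[p] += w
    return m2

def state_space(m0):
    # states: reachable markings; edges: m [t> m'
    seen = {tuple(sorted(m0.items()))}
    edges = []
    todo = deque([m0])
    while todo:
        m = todo.popleft()
        for t in transitions:
            if enabled(m, t):
                m2 = fire(m, t)
                edges.append((m, t, m2))
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    todo.append(m2)
    return seen, edges

states, edges = state_space(m0)
print(len(states), len(edges))  # prints: 3 2
```

The marking is stored as a plain dict here; an explicit-state checker like LoLA uses far more compact encodings (see the bit-vector and decision-tree remarks elsewhere in this material).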
14. Basic notions: properties
• Place p is ...
• bounded iff there is a k such that, for all reachable m, m(p) < k
• Transition t is ...
• dead iff it is not activated in any reachable marking
• State predicate φ (p < k, p > k, p ≤ k, p ≥ k, p = k, p ≠ k, φ∧φ, φ∨φ, ¬φ) is ...
• reachable iff some reachable marking satisfies φ
• live iff, from every reachable marking, a marking is reachable that satisfies φ
• Net ...
• is bounded iff all places are
• is reversible iff the initial marking is reachable from all reachable markings
• has home states iff some marking is reachable from all reachable markings
• is deadlock-free iff every reachable marking activates at least one transition
15. Basic notions: Temporal Logic
• LTL: infinite path (starting in m0) satisfies ...
• F φ: φ is satisfied at least once
• GF φ: φ is satisfied in infinitely many markings
• FG φ: φ is satisfied forever from some marking on
• CTL: marking m satisfies ...
• AX (EX) φ: φ holds in all (some) immediate successor marking
• AF (EF) φ: every (some) path from m contains a marking satisfying φ
• AG (EG) φ: on every (some) path from m, φ holds in all markings
• A(E) φ U ψ: on every (some) path starting in m, there is a marking that satisfies ψ such that all preceding markings satisfy φ
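On a finite state space, CTL operators like these can be computed as fixpoints over the reachability graph. A small Python sketch of EF φ as a backward closure (illustrative only, with a hand-coded graph; not LoLA's on-the-fly algorithm):

```python
# States and edges of a small, hand-coded reachability graph (illustrative).
states = {0, 1, 2, 3}
edges = {(0, 1), (0, 2), (1, 3), (2, 3)}

def pre(S):
    # all states with some immediate successor in S (the EX operator)
    return {m for (m, m2) in edges if m2 in S}

def ef(sat_phi):
    # EF phi: least fixpoint of S = sat_phi ∪ EX S
    S = set(sat_phi)
    while True:
        S2 = S | pre(S)
        if S2 == S:
            return S
        S = S2

# phi holds exactly in state 3; EF phi then holds everywhere:
print(sorted(ef({3})))  # prints: [0, 1, 2, 3]
```

AG φ is the complement of EF ¬φ, so the same closure also decides invariants.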
16. Basic notions: State Space
• Strongly connected component (scc)
• max set of mutually reachable states
• partitions state space
• form acyclic graph, maximal elements: terminal scc (tscc)
• Properties vs scc:
• reversible: net has one scc
• home states: net has one tscc
• live: satisfiable in all tscc
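The SCC characterizations above are easy to check on an explicit graph. A Python sketch (hand-coded illustrative graph; Kosaraju's two-pass algorithm, not LoLA's code) that computes the SCCs and tests the reversibility and home-state criteria:

```python
from collections import defaultdict

# Small hand-coded state graph: 0 and 1 form a cycle, 1 leads on to 2.
edges = {0: [1], 1: [0, 2], 2: []}

def sccs(edges):
    # Kosaraju: order nodes by DFS finish time, then collect components
    # by traversing the reversed graph in reverse finish order.
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in edges[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in edges:
        if u not in seen:
            dfs(u)
    rev = defaultdict(list)
    for u in edges:
        for v in edges[u]:
            rev[v].append(u)
    comps, seen = [], set()
    for u in reversed(order):
        if u not in seen:
            comp, stack = set(), [u]
            seen.add(u)
            while stack:
                x = stack.pop()
                comp.add(x)
                for y in rev[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
    return comps

comps = sccs(edges)
# terminal SCCs (tscc): no edge leaves the component
terminal = [c for c in comps if all(v in c for u in c for v in edges[u])]
print(len(comps) == 1)     # reversible iff the graph is one scc — prints: False
print(len(terminal) == 1)  # home states exist iff there is one tscc — prints: True
```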
17. Basic notions: Search
• Depth first
• can be extended easily for detecting cycles and scc
• tends to yield long paths
• Breadth first
• difficult to detect cycles and scc
• yields shortest path
20. Place/Transition Nets
N = [P,T,F,W,m0]
PLACE p1, p2, p3, p4; { places are treated as variables }
MARKING p1: 2, p3: 1, p1: 1; { can be replaced as a whole; compatible with computed markings }
{ this is a comment }
TRANSITION t1 { transitions are treated as procedures }
CONSUME p1: 3, p2: 1;
PRODUCE p3: 2, p1: 2;
TRANSITION t2
CONSUME p3: 1;
PRODUCE ; { only one reference per arc }
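Since realistic nets are generated rather than painted, input files in this syntax are typically emitted by a script. A Python sketch (a hypothetical helper, using only the PLACE/MARKING/TRANSITION/CONSUME/PRODUCE syntax shown above) that writes a scalable cycle net with one circulating token:

```python
def lola_cycle_net(n):
    # Emit a LoLA place/transition net: one token circulating through n places.
    lines = ["PLACE " + ", ".join(f"p{i}" for i in range(1, n + 1)) + ";",
             "",
             "MARKING p1: 1;"]
    for i in range(1, n + 1):
        j = i % n + 1  # successor place, wrapping around
        lines += ["",
                  f"TRANSITION t{i}",
                  f"CONSUME p{i}: 1;",
                  f"PRODUCE p{j}: 1;"]
    return "\n".join(lines) + "\n"

print(lola_cycle_net(3))
```

Varying `n` gives the scalable sequences of models that the high-level input was also designed for.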
22. Storage directives
If bounds for some places are known:
PLACE
p0; { default bound: #CAPACITY in userconfig.H }
SAFE 3: p1, p2;
SAFE 7: p3, p4;
SAFE: p5; { = SAFE 1 }
Only for internal memory allocation, no capacity!
23. Fairness Constraints
needed for the LTL properties only
(fair CTL is not supported so far)
TRANSITION t1 STRONG FAIR
...
TRANSITION t2 WEAK FAIR
...
TRANSITION t3
...
24. Verification Task Input
• Can be specified inline or as separate file
• For boundedness of places: ANALYSE PLACE p1
• For dead transitions: ANALYSE TRANSITION t2
• For all properties involving state predicate:
FORMULA (p1 > 3 OR p2 <= 7) AND NOT p6 = 1
• For CTL model checking:
• FORMULA EXPATH ALWAYS ALLPATH EVENTUALLY p1 > 3
• FORMULA EXPATH (p1 > 7 UNTIL p2 < 3)
25. High Level Net Input
• Main purpose: To obtain scalable sequences of models
• Deprecated for translation from other formalisms (problem: semantic conformance)
• Will be unfolded into place/transition net anyway
• Experience: Parsing from UNIX pipe no time issue
• Style: algebraic Petri nets with explicit interpretation
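Because parsing from a UNIX pipe is cheap, LoLA can be driven from other tools through streams. A hedged Python sketch of such an integration (the binary name and its willingness to read a net from standard input are assumptions about your installed build; check your LoLA version's documentation):

```python
import shutil
import subprocess

net = """PLACE p1, p2;

MARKING p1: 1;

TRANSITION t1
CONSUME p1: 1;
PRODUCE p2: 1;
"""

def run_lola(net_text, lola="lola"):
    # Pipe the net into LoLA and capture its report.
    # Assumption: the installed LoLA build accepts a net on stdin.
    if shutil.which(lola) is None:
        return None  # LoLA binary not found; nothing to run
    result = subprocess.run([lola], input=net_text,
                            capture_output=True, text=True)
    return result.stdout + result.stderr

output = run_lola(net)
print("LoLA not found" if output is None else output)
```

The same pattern extends to passing a verification task alongside the net and parsing the textual verdict in the caller.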
26. Algebraic Petri Nets
• Signature: sorts + sorted operation symbols
• Interpretation: sets of values, n-ary functions
• Places: annotated with sort (type) symbol
• interpretation: set of values (colors)
• Transitions: annotated with set of variables, guard
expression
• interpretation: every valid assignment is firing mode
• Arcs: annotated with terms over the transition variables
• interpretation: map from firing mode of transition into color set of place
• Marking: written as multiset of terms
27. Signature: Sorts and their interpretation
SORT
a = [ 1 , 5 ]; { 1,2,3,4,5 }
b = BOOLEAN; { TRUE, FALSE }
c = ENUMERATE red blue green END; { red, blue, green }
d = ARRAY [1,3] OF BOOLEAN; { [FALSE|FALSE|FALSE], ... , [TRUE|TRUE|TRUE] }
e = RECORD
receiver : a;
sender : b;
END; { <1|FALSE>, ...., <3|TRUE> }
“successor” is canonically defined on each value set; scalar sorts are arbitrary, and each value has a unique text representation.
28. Signature: operations and their interpretation
SORT phils = [ 1 , 5 ]; forks = [ 1 , 5 ];
FUNCTION leftfork (x : phils) : forks { signature }
BEGIN
RETURN x { body: interpretation }
END
FUNCTION rightfork (x : phils) : forks
BEGIN
RETURN x + 1
END
FUNCTION allthinking () : phils
VAR x : phils;
BEGIN
FOR ALL x DO
RETURN x
END
END
Expressions evaluate on all integers, assignments align to the value set (modulo arithmetic); the result is a multiset.
29. Statements in function body
EXIT { leave function }
RETURN E { add value of E to return multiset, continue }
L = E { assignment }
S1 ; S2 { sequential composition }
WHILE E DO S END { while loop }
REPEAT S UNTIL E END { until loop }
FOR x := E1 TO E2 DO S END { for loop in canonical order of values }
FOR ALL x DO S END { for loop through all elements of sort of x }
IF E THEN S1 [ELSE S2] END { branch statement }
SWITCH E CASE E1: S1 ... CASE En: Sn ELSE S END { multibranch statement }
30. Expressions in function body
X, X[a + b], X.c[a + b] { pointwise for arrays and records }
645, TRUE, FALSE
A <-> B, A -> B, A AND B, A OR B, NOT A
A < B, A <= B, A > B, A >= B, A = B, A <> B, A # B
A + B, A * B, A - B, A / B, A MOD B
(E)
[ E1 | E2 | .... | En ] { no modulo before assignment }
bla ( E1, ...., En ) { function must return exactly one value }
31. Example: Network
SORT dimensions = [ 1 , 3 ]; row = [ 1 , 3 ]; agent = ARRAY dimensions OF row;
message = RECORD receiver : agent; sender : agent; END;
FUNCTION X (a : agent; b : agent) : message
VAR m : message;
BEGIN
m . receiver = a; m . sender = b; RETURN m
END
FUNCTION N (z : agent) : agent
VAR l : dimensions; low : row; high : row;
BEGIN
low = 1; high = low - 1; { remind canonical order }
FOR ALL l DO
IF z [ l ] > low THEN z [ l ] = z [ l ] - 1; RETURN z; z [ l ] = z [ l ] + 1 END;
IF z [ l ] < high THEN z [ l ] = z [ l ] + 1; RETURN z; z [ l ] = z [ l ] - 1 END
END
END
32. HL Places
PLACE
SAFE p1 : phils, p2 : forks, p3;
...
p1 carries tokens of sort phils, p2 tokens of sort forks; p3 is a low level place. One low level place is created per value, so this is unfolded to:
PLACE SAFE
p1.1, p1.2, p1.3, p1.4, p1.5, p2.1, p2.2, p2.3, p2.4, p2.5, p3;
33. HL Initial Marking
MARKING th : allphilosophers(), { multiterm without variable; sorts must fit }
fo : L(allphilosophers()),
th.2 : 3, { unfolded name }
p3 : 5; { low level place }
34. HL Transitions
TRANSITION receive WEAK FAIR { fairness valid for all instances }
VAR sender , receiver : agent; { an assignment to the variables is a firing mode }
GUARD is_neighbour( sender , receiver )
CONSUME channel : X ( sender , receiver );
PRODUCE channel : X ( N(sender) , sender ), internal : receiver; { multiterms }
unfolded to
TRANSITION receive.[sender=1,receiver=2] WEAK FAIR
CONSUME ....
Only instances with satisfied guards are generated.
Isolated places are finally removed.
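Unfolding a high-level transition means enumerating all assignments of its variables and keeping those that satisfy the guard. A Python sketch of that idea (illustrative sort and guard chosen here; not LoLA's unfolder):

```python
from itertools import product

# Illustrative sort and guard: agents 1..3 on a line, neighbours differ by 1.
agent = [1, 2, 3]
variables = {"sender": agent, "receiver": agent}

def is_neighbour(sender, receiver):
    return abs(sender - receiver) == 1

def unfold(variables, guard):
    # One low-level transition instance per assignment with a satisfied guard.
    names = sorted(variables)
    instances = []
    for values in product(*(variables[n] for n in names)):
        binding = dict(zip(names, values))
        if guard(**binding):
            label = ",".join(f"{n}={binding[n]}" for n in names)
            instances.append(f"receive.[{label}]")
    return instances

for inst in unfold(variables, is_neighbour):
    print(inst)
```

Of the nine candidate assignments, only the four neighbouring pairs survive the guard, mirroring "only instances with satisfied guards are generated".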
35. HL Verification tasks
EXISTS x : phils : ( eating . ( x ) > 0 ) AND thinking.1 = 0
ALL y : phils : ( [y = 1] OR fo . ( L(y) ) = 0 )
Parentheses are compulsory; the term in [ ... ] may be any expression.
50. How to Preserve Properties
Core principle: if m [w1> m1 [t> m2 with all transitions of w1 outside stubborn(m) and t in stubborn(m), then m [t> m1’ [w1> m2,
plus property specific requirements.
The presence of the right path justifies the absence of the left path.
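A textbook closure construction for deadlock-preserving stubborn sets can be sketched in Python (a simplified variant for ordinary nets, not LoLA's implementation): start from one enabled "scapegoat" transition; for an enabled member add all transitions sharing an input place with it; for a disabled member add the pre-transitions of one insufficiently marked input place; then fire only the enabled members of the resulting set.

```python
# Two independent sequential processes (illustrative net).
transitions = {
    "t1": {"consume": {"p1": 1}, "produce": {"p2": 1}},
    "t2": {"consume": {"p3": 1}, "produce": {"p4": 1}},
}
m0 = {"p1": 1, "p2": 0, "p3": 1, "p4": 0}

def enabled(m, t):
    return all(m[p] >= w for p, w in transitions[t]["consume"].items())

def fire(m, t):
    m2 = dict(m)
    for p, w in transitions[t]["consume"].items():
        m2[p] -= w
    for p, w in transitions[t]["produce"].items():
        m2[p] += w
    return m2

def stubborn(m):
    en = [t for t in transitions if enabled(m, t)]
    if not en:
        return set()
    S, todo = set(), [en[0]]  # scapegoat: one enabled transition
    while todo:
        t = todo.pop()
        if t in S:
            continue
        S.add(t)
        if enabled(m, t):
            # add transitions in conflict (sharing an input place with t)
            new = [u for u in transitions
                   if set(transitions[u]["consume"]) & set(transitions[t]["consume"])]
        else:
            # pick one insufficiently marked input place; add its pre-transitions
            p = next(q for q, w in transitions[t]["consume"].items() if m[q] < w)
            new = [u for u in transitions if p in transitions[u]["produce"]]
        todo.extend(u for u in new if u not in S)
    return S

def explore(m0, reduce=True):
    # DFS that fires only (enabled) stubborn transitions when reduce=True.
    seen, deadlocks, todo = set(), [], [m0]
    while todo:
        m = todo.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        fire_set = stubborn(m) if reduce else set(transitions)
        if not any(enabled(m, t) for t in transitions):
            deadlocks.append(m)
        todo.extend(fire(m, t) for t in fire_set if enabled(m, t))
    return seen, deadlocks

full, d_full = explore(m0, reduce=False)
red, d_red = explore(m0, reduce=True)
print(len(full), len(red), len(d_full), len(d_red))  # prints: 4 3 1 1
```

On this net the reduced search skips one interleaving yet still reaches the unique deadlock, which is exactly what the preservation argument below guarantees.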
55. Preservation of Deadlocks
Core principle + (m [w> m' implies some t of stubborn(m) is enabled in m)
Proof:
Let m [w> d with length(w) minimal.
1st case: some t of stubborn(m) occurs in w:
  m [w1> s1 [t> m2 [w2> d
  implies m [t> m1' [w1> m2 [w2> d, with m1' in the reduced TS and closer to d!
2nd case: no t of stubborn(m) occurs in w:
  m [w> d, and some t of stubborn(m) is enabled in m; since w contains no
  transition of stubborn(m), t stays enabled along w, so d enables t and
  is not a deadlock!
67. Preservation of LTL/CTL
LTL-X:
Core principle
+ Visibility: all transitions in stubborn(m) invisible to φ, or stubborn(m) = T
+ Proviso: once in every cycle: stubborn(m) = T
CTL-X:
LTL conditions
+ |stubborn(m)| = 1 or stubborn(m) = T
Consequences:
- only local properties yield reduction
- Proviso avoids infinite stuttering
- Proviso known to cause explosion
- Proviso requires cycle detection (e.g. depth-first)
- CTL only performant when the number of conflicts is small
68. LoLA's Approaches
Let φ be a state predicate; assume m does not satisfy φ.
wrup(m, φ) = some set of transitions such that every path from m
to an m' that satisfies φ contains at least one transition of wrup(m, φ).
Examples:
wrup(m, "m* reached") = •p, for some p with m(p) < m*(p)
                      = p•, for some p with m(p) > m*(p)
wrup(m, p>k) = wrup(m, p≥k) = •p
wrup(m, p<k) = wrup(m, p≤k) = p•
wrup(m, φ1 ∧ φ2) = wrup(m, φ1) if m does not satisfy φ1
                 = wrup(m, φ2) if m does not satisfy φ2
wrup(m, φ1 ∨ φ2) = wrup(m, φ1) ∪ wrup(m, φ2)
wrup(m, t not dead) = {t}
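The recursive cases above can be sketched in a few lines; a minimal Python sketch (not LoLA's implementation), where the predicate is a small tuple AST and the hypothetical pre/post maps stand in for •p and p•:

```python
def holds(m, phi):
    """Evaluate a predicate AST over a marking (dict place -> tokens)."""
    kind = phi[0]
    if kind == "ge":                   # p >= k
        return m[phi[1]] >= phi[2]
    if kind == "le":                   # p <= k
        return m[phi[1]] <= phi[2]
    if kind == "and":
        return all(holds(m, s) for s in phi[1:])
    if kind == "or":
        return any(holds(m, s) for s in phi[1:])
    raise ValueError(kind)

def wrup(m, phi, pre, post):
    """Return a 'must-pass' transition set for phi at marking m.

    pre[p]/post[p] are the sets of pre-/post-transitions of place p
    (a hypothetical encoding). Assumes m does not satisfy phi.
    """
    kind = phi[0]
    if kind == "ge":                   # tokens must be added to p: some t in pre(p)
        return set(pre[phi[1]])
    if kind == "le":                   # tokens must be removed from p: some t in post(p)
        return set(post[phi[1]])
    if kind == "and":                  # any unsatisfied conjunct suffices
        for sub in phi[1:]:
            if not holds(m, sub):
                return wrup(m, sub, pre, post)
        raise ValueError("m already satisfies phi")
    if kind == "or":                   # all disjuncts must be covered
        return set().union(*(wrup(m, s, pre, post) for s in phi[1:]))
    raise ValueError(kind)
```

Note how the conjunction case is the source of goal-orientation: only one unsatisfied conjunct needs to be pursued.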
73. Theorem
Reachability of φ:
core principle
+ wrup(m, φ) ⊆ stubborn(m)
(diagram: from m0 the original path reaches φ via m; t, the first transition
on the path that lies in wrup(m, φ) and in ample(m), is pulled forward; the
resulting m1 is closer to the goal than m, and the reduced search follows it)
75. Effect
• Can be applied to global predicates
• Astonishing goal-orientation
• Has been relaxed by Kristensen/Valmari (wrup must be contained only once in an SCC)
• The relaxed method performs better if the predicate is unreachable
• The unrelaxed method performs better if the predicate is reachable
• Can be extended to boundedness:
  • Bounded net: wrup(m) = {t : |t•| > |•t|}
  • Bounded place: wrup(m, p) = •p
76. TSCC-based properties
Valmari:
core principle
+ weak proviso: every transition in stubborn(m) occurs at least once in
every TSCC of the reduced system:
every TSCC of the original state space is visited in the reduced state space
77. TSCC-based properties
Idea:
- construct Valmari's TSCC-preserving state space
- pick one element of each TSCC of the reduced state space
- check mutual reachability for home states
- check reachability of m0 for reversibility
- check reachability of φ for liveness of φ
userconfig.H: TWOPHASE
78. CTL/LTL properties
• CTL: separate search space for each subformula
  • use wrup for EF and AG
  • use the traditional CTL method for the other operators
• LTL: search for a counterexample path: F φ ➪ G ¬φ, GF φ ➪ FG ¬φ, FG φ ➪ GF ¬φ
  • G ¬φ: LTL-preserving, but drop the Proviso
  • FG ¬φ, GF ¬φ:
    • drop the Proviso if m satisfies ¬φ
    • use wrup(m, ¬φ) if m satisfies φ
80. Symmetric Behavior
Goal: symmetry in the transition system
σ is a symmetry if:
- σ is a bijection R(m0) → R(m0)
- m [t> m' iff there exists t': σ(m) [t'> σ(m')
- σ(m0) = m0
By induction: if m0 m1 m2 ... is a path, then σ(m0) σ(m1) σ(m2) ... is a path as well.
ΣTS: the set of all symmetries in R(m0)
- Id is always a symmetry
- if σ is a symmetry, so is σ⁻¹           (ΣTS, ∘) is a group
- if σ1 and σ2 are symmetries, so is σ1 ∘ σ2
82. Equivalence of States
Have to detect symmetries prior to state space generation;
typically cannot deduce all of them,
but: can always close under inversion and composition.
Fix some subgroup Σ ⊆ ΣTS.
m ~ m' iff there exists σ ∈ Σ such that σ(m) = m'
~ is an equivalence relation.
86. Reduced Transition System
TSΣ = [R(m0)/~, EΣ, [m0]Σ]
EΣ = { [[s], [s']] | ex. s ∈ [s], ex. s' ∈ [s'] : [s, s'] ∈ E }
Size of the reduced system:
|R(m0)/~| ≥ |R(m0)| / |Σ|
|Σ| can be exponential in the size of the Petri net
89. Construction of the reduced state space
R := E := ∅; dfs(m0);

dfs(m)
  R := R ∪ {m};
  FOR ALL t activated in m DO
    m' := m + Δt;
    IF can find σ with σ(m') ∈ R THEN    (the "Orbit Problem", solved approximately)
      E := E ∪ {[m, t, σ(m')]};
    ELSE
      E := E ∪ {[m, t, m']};
      dfs(m');
    END
  END
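A runnable sketch of this scheme (not LoLA's implementation): markings are hashable values, the firing rule and the symmetry functions are supplied by the caller, and the orbit problem is approximated by trying a fixed list of symmetries (include the identity so already-stored states are caught):

```python
def reduced_state_space(m0, transitions, fire, symmetries):
    """Symmetry-reduced depth-first state space construction.

    fire(m, t) returns the successor marking or None if t is disabled;
    symmetries is a list of functions on markings (illustrative API).
    """
    R, E = set(), set()

    def dfs(m):
        R.add(m)
        for t in transitions:
            m2 = fire(m, t)
            if m2 is None:
                continue
            # Orbit problem, approximated: is some symmetric image stored?
            image = next((s(m2) for s in symmetries if s(m2) in R), None)
            if image is not None:
                E.add((m, t, image))      # redirect edge to the stored image
            else:
                E.add((m, t, m2))
                dfs(m2)

    dfs(m0)
    return R, E
```

On a net with two symmetric counters, the swap symmetry merges the states (1,0) and (0,1), so only three of the four reachable states are stored.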
90. "Traditional" Symmetry Tools
• depend on a "scalar set" data type
• =, ≠, arrays, for each, no constants
• cannot model networks other than cliques
• LoLA: can handle all kinds of symmetry in the net structure
91. PN automorphisms
A bijection σ: P ∪ T → P ∪ T is a PN automorphism iff, for all x, y ∈ P ∪ T:
- m0(x) = m0(σ(x))
- if [x, y] ∈ F then [σ(x), σ(y)] ∈ F and W([x, y]) = W([σ(x), σ(y)])
Every PN automorphism induces a symmetry in the state space:
σ(m)(σ(p)) = m(p)
93. Schreier-Sims generating set
Chain of subgroups U1 ⊇ U2 ⊇ ... ⊇ Un; each subgroup induces a partition of
the whole group; pick one element of each class ("orbit").
Group: all automorphisms
U1: all automorphisms that map p1 to p1
U2: all automorphisms that map p1 to p1 and p2 to p2
...
Un: {Id}
The resulting generating set has O(n²) elements.
95. Example
(diagram: a four-node graph with generating set E = {id, g1, g2, g3, g4};
the composition table id ∘ id, id ∘ g4, g1 ∘ id, g1 ∘ g4, g2 ∘ id, g2 ∘ g4,
g3 ∘ id, g3 ∘ g4 enumerates all eight automorphisms)
96. Another Example
(diagram: a cube with nodes 1-8; g = g1 ∘ g2 ∘ g3)
1. layer: 1 → 1, ..., 8
2. layer: 1 → 1, 2 → 2, 4, 5
3. layer: 1 → 1, 2 → 2, 3 → 3, 6
7 + 2 + 1 = 10 generators for 8 × 3 × 2 = 48 automorphisms
97. Orbit Problem: Approximation
(diagram: a Schreier-Sims tree of generators: id; g11, g12, g13, g14, g14⁻¹;
g21, g22, g23; g31, g32)
given: m   searched: canonical representative(m)
1. m1 := MIN{g1i⁻¹(m), i = ...}
2. m2 := MIN{g2i⁻¹(m1), i = ...}
3. m3 := MIN{g3i⁻¹(m2), i = ...}
........
n. mn := MIN{gni⁻¹(mn-1), i = ...}
canrep(m) := mn
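The layered minimization can be sketched directly: each layer is a list of (inverse) generator functions on markings, including the identity, and markings compare lexicographically as tuples. The encoding is illustrative, not LoLA's:

```python
def canrep(m, layers):
    """Approximate canonical representative via layer-wise minimization.

    layers: list of generator lists; each generator is a function on
    marking tuples (the inverses g_i^-1 are supplied directly).
    """
    for gens in layers:
        # m_i := MIN{ g^-1(m_{i-1}) } over the generators of layer i
        m = min(g(m) for g in gens)
    return m
```

With the rotations of a ring as the single layer, every rotation of a marking is mapped to the same representative, so equivalent states hash to one stored vector. The result is only an approximation of the true orbit minimum in general, which is why two equivalent states may still both be stored.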
109. Summary: Symmetries
Calculation of symmetries and the exact solution of the orbit problem are
equivalent to the graph isomorphism problem (in NP).
Many other orbit algorithms are available in LoLA, even more by Tommi Junttila.
The best choice depends on the structure of the symmetry group.
111. Two approaches
compress states (use place invariants): saves space and time
exempt states from storage (use transition invariants): space/time trade-off
113. First approach: use place invariants
Let i be a place invariant. For all reachable m:
i · m = i · m0
... and, for a place p with i(p) ≠ 0:
m(p) = (i · m0 − Σp'≠p i(p') · m(p')) / i(p)
115. Example
invariant 1: [1 1 0 0 0]    invariant 2: [0 0 0 1 1]
that is, for all reachable markings m:
m(p1) = 1 − m(p2)    m(p5) = 2 − m(p4)
only p2, p3, p4 need to be stored (40% compression)
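A minimal sketch of the recovery step, assuming each invariant is given as a weight map together with the redundant place it eliminates (LoLA derives these from the upper triangular form; the encoding here is illustrative):

```python
def recover(stored, m0, invariants):
    """Recover a full marking from its stored components.

    stored: dict place -> tokens for the places kept in the depository
    m0: the initial marking (dict place -> tokens)
    invariants: list of (redundant_place, i) with i a weight map and
    i[redundant_place] != 0
    """
    m = dict(stored)
    for p, i in invariants:
        # i . m = i . m0  =>  m(p) = (i.m0 - sum_{p' != p} i(p') m(p')) / i(p)
        rhs = sum(i[q] * m0[q] for q in i)
        rhs -= sum(i[q] * m[q] for q in i if q != p)
        m[p] = rhs // i[p]
    return m
```

With the two invariants of the example above, storing only p2, p3, p4 suffices: p1 and p5 are recomputed on demand.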
117. Overhead
                               appears to be:             actually is:
preprocessing
- time                         compute invariants         compute upper triangular form
- space                        |inv| · |places|           1 bit · |places|
state space construction
- time                         recover saved components   search, insert performed on smaller vectors
119. State space construction
(diagram: to find/insert a state, only the short vector with the redundant
components removed is looked up in the depository; the removed components
are recovered from the invariant equations only when needed)
Observe: the values of i are irrelevant, supp(i) is sufficient!
122. Results
1. Space reduction 30% - 55%
2. Preprocessing time insignificant
3. Run time reduction proportional to space reduction
   Reason: search and insert operations take 80-95% of the overall run time
   ... and are now performed on shorter vectors
4. Combination with most other reduction techniques possible
124. Second approach
What happens if some states are removed from the depository?
Construction still terminates as long as the removed states do not form cycles!
Use structural knowledge about cycles.
129. Transition invariants
A cycle in the state space corresponds to a transition invariant.
Assume: a set U of transitions such that for every transition invariant i:
U ∩ supp(i) ≠ ∅
Then: store states that enable transitions in U;
do not store other states.
U can be determined from the triangular form.
130. Example
(diagram: a cyclic net containing transition t)
transition invariant: [2, 2, 3, 3]
U = {t}
store only states where t is enabled
131. Problems
1. Too many states enable transitions in U
   Solution: combine with partial order reduction
2. Unacceptable run time overhead
   Solution 1: heuristically store additional states
   Solution 2: remove only non-branching states
135. Results
1. Controllable space/time trade-off
2. Combination with partial order reduction compulsory
3. Combination with a few other reduction techniques possible
4. Only simple properties can be verified (no access to the graph structure of the state space)
144. The sweep-line method (extended)
If p is not monotonous:
(diagram: s [t> s' with p(s') < p(s))
- mark s' "persistent"
- start a new sweep from s'
Consequently: p(s') < p(s) must not happen too often.
146. Setting for LoLA's measure
- incremental: "transition offsets" Δp(t): m [t> m' implies p(m') = p(m) + Δp(t)
- not necessarily monotonous
  (in every cycle: one negative Δp, or all Δp = 0)
147. The measure
Partition T into U and T∖U:
in U: all transitions linearly independent
in T∖U: all transitions linearly dependent on U
i.e. |U| = rank(C)
- for t in U: Δp(t) := 1
- for t in T∖U: Δp(t) is determined by the (unique) linear combination over U
  (for t in T∖U: Δp(t) > 0, = 0, or < 0)
typical size: |U| is 60% - 100% of |T|
156. You will learn how
• to choose and manage LoLA configurations
• to ask the right verification questions
• to optimally model a Petri net
• to employ scripts, makefiles, etc.
• to call LoLA from another tool
157. LoLA Configurations
• Get LoLA:
• http://service-technology.org/files/lola
• Standard Workflow:
• edit userconfig.H
• compile LoLA
158. userconfig.H
• What to check?
• Which reduction
techniques to use?
• Other parameters
159. The optimal configuration
1. Know your net!
• Is it bounded? Do you know the bound? Is it safe?
• Do you have a feeling for the outcome?
• Is the net made of several components?
• Does the net have a lot of concurrency?
2. Experiment!
162. Stubborn Sets
• STUBBORN
• when to use: always
• compatibility: all other techniques
• switch RELAXED to choose the more efficient technique if the state/predicate is unreachable
163. Invariant-based Compression
• PREDUCTION
• when to use: always
• compatibility: not with sweep-line method
164. Symmetries
• SYMMETRY
• when to use: net is made of several
symmetric components
• runtime overhead
• compatibility: not with sweep-line method
• switch SYMMINTEGRATION and
MAXATTEMPT to control time/memory
trade-off
165. Coverability Graph
• COVER
• when to use: mostly clear from the context
• compatibility: stubborn sets and symmetry
• use with BREADTH_FIRST to have
shorter paths to check
166. Cycle Coverage
• CYCLE
• when to use: can help sometimes
• runtime overhead
• use with stubborn sets to reduce number
of successors
• Switches NONBRANCHINGONLY and
MAXUNSAVED to control memory/time
tradeoff
167. Sweep-line
• SWEEP
• when to use: behavior has several acyclic
stages - always worth a try
• compatibility: stubborn set method
• in fact: only use with stubborn set method
to avoid a lot of regress transitions
168. Small State Representation
• SMALLSTATE
• when to use: only for simple reachability
questions
• compatibility: all other techniques
169. Reduction techniques
Not all combinations make sense! LoLA takes care of this.
170. Other parameters
• BREADTH_FIRST: search strategy
• CAPACITY: fix a maximal number of tokens per place
• CHECKCAPACITY: check capacity and abort
• MAXPATH: maximal length of paths for FINDPATH
• REPORTFREQUENCY: report firing of transitions
• HASHSIZE: number of hash buckets
• MAXIMALSTATES: maximal size of the statespace
171. Manage configurations
• one binary for each configuration
• fight complexity:
• ask LoLA for its configuration
• predefined standard configurations
• offspring generation
175. Build script
downloads the sources and generates a configured binary with a random name
176. You will learn how
• to choose and manage LoLA configurations ✔
• to ask the right verification questions
• to optimally model a Petri net
• to employ scripts, makefiles, etc.
• to call LoLA from another tool
177. Ask the right questions
• be as specific as possible
• ask one aspect at a time
• exploit all knowledge
• transform complex questions
178. Be specific!
• most questions can be formulated with CTL
• LoLA has dedicated routines:
• EF φ - use STATEPREDICATE
• AG EF φ - use LIVEPROP
• yields more efficient reduction
179. Ask one aspect at a time!
• Garavel's challenge: check quasiliveness of a net with 776 transitions
• naive way: build one state space and check each transition
• problem: 9794739147610899087361 states
• clever way: build 776 state spaces and check each transition independently
• all but two state spaces have < 20000 states
180. Use all knowledge!
end of a procedure, see Figure 1. The tasks are modeled by transit
ordering of tasks is modeled by places connecting these transitions.
• original question:
soundness of workflow nets
• naive: AG EF φ i
WF-net
o
• Petri-netty: liveness and Fig. 1. A procedure modeled by a W F-net.
boundedness of short-circuited net
The processing of a case starts the moment we put a token in plac
• Knowledge: net is free-choice and built from
the moment a token appears in place o. One of the main properties
should satisfy is the following:
standard patterns For any case, the procedure will terminate eventually, and at t
• boundedness boils down to 1-safeness
procedure terminates there is a token in place o and all the ot
empty.
This property is called the soundness property. In this paper we p
• clever way: two checks: liveness and 1-safeness
to verify this property using standard Petri-net tools. If we restric
choice Petri nets (cf. Best [8], Desel and Esparza [12]), this propert
polynomial time.
W F-nets have some interesting properties. For example, it turns ou
181. Transform your problem!
• original question: relaxed soundness (every transition fires in at least one terminating run)
• standard algorithm: build the state space, remove nonterminating behavior, and check the transitions
• clever way: create a special net for each transition t and check reachability of the marking [o, pt]
183. You will learn how
• to choose and manage LoLA configurations ✔
• to ask the right verification questions ✔
• to optimally model a Petri net
• to employ scripts, makefiles, etc.
• to call LoLA from another tool
184. “optimal” Petri nets
• have verification in mind
• don’t use expensive constructs (reset arcs)
• don’t spoil the reduction techniques
• help LoLA help you
185. High-level guards
• use guards to exclude implausible transition bindings
• results in quicker unfolding
TRANSITION ManInTheMiddle
VAR
bob : bobAgents;
alice : aliceAgents;
bobKey : bobKeys;
aliceKey : aliceKeys;
GUARD
alice <> getMaliceAlice() AND
bob <> getMaliceBob() AND
isSessionKeyForAlice(alice,bob,aliceKey) AND
isSessionKeyForBob(bob,alice,bobKey)
CONSUME
connStateAlice : makeConnectionState(alice,bob,aliceKey,bobKey),
mGoalBobKeys : bobKey;
PRODUCE
goal : 1;
186. Concurrency
• use concurrency where possible
• avoid unnecessary ordering of events
• makes symmetry/stubborn sets applicable
(diagram: initialize component 1, component 2, component 3 concurrently)
187. Avoid global states
• avoid excessive synchronization
• avoid "global state places" that encode the status of a scope
• such nets have no real concurrency
188. Flexible model generation
• model with verification question in mind
• for each question have a dedicated model
with proper abstractions
• implemented in compiler BPEL2oWFN
189. Scale by structure
• when possible, scale model by structure,
not by the number of tokens
• in LoLA: just increase sort
• rationale: symmetry and stubborn sets
SORT
dimensions = [ 1 , 3 ];
row = [ 1 , 3 ];
190. You will learn how
• to choose and manage LoLA configurations ✔
• to ask the right verification questions ✔
• to optimally model a Petri net ✔
• to employ scripts, makefiles, etc.
• to call LoLA from another tool
191. Script LoLA
• LoLA follows the UNIX philosophy
• every tool does one thing
(and that thing right)
• tools communicate with files/streams
• exit codes tell about outcome of LoLA
• all this allows one to quickly build powerful tool chains
192. LoLA's exit codes
• 0: specified state or deadlock found / net or place unbounded / home marking exists / net is reversible / predicate is live / CTL formula true / transition not dead / liveness property does not hold
• 1: the opposite verification result
• rule of thumb: if the outcome of a verification can be supported by a counterexample or witness path, that case corresponds to return value 0
193. LoLA's exit codes
• exit codes allow for simple workflows in the shell
• (lola1 net.lola && lola2 net.lola && echo "OK") || echo "not OK"
• translation:
  • execute lola1
  • if the exit code is 0, execute lola2
  • if the exit code is again 0, print "OK"
  • otherwise, print "not OK"
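The same convention is easy to wrap from another language; a hypothetical Python helper (the binary name and the injectable runner are assumptions for illustration, so the sketch is testable without LoLA installed):

```python
import subprocess

def lola_verify(net_file, binary="lola", run=subprocess.call):
    """Map LoLA's exit code to a boolean verification result.

    Per the convention above: 0 means a witness/counterexample exists,
    1 means the opposite result. `binary` and `run` are illustrative;
    by default this shells out to a `lola` executable on the PATH.
    """
    code = run([binary, net_file])
    if code == 0:
        return True
    if code == 1:
        return False
    raise RuntimeError("LoLA failed with exit code %d" % code)
```

Injecting the runner also makes it straightforward to stub LoLA out in unit tests of a surrounding tool chain.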
194. Example: Scripting
• Garavel's challenge
• quasiliveness of 776 transitions checked in 776 runs
• shell script:
  1. extract transitions from the net
  2. generate an analysis task for DEADTRANSITION ("ANALYZE TRANSITION t1")
  3. call LoLA
  4. evaluate the exit code
• DEADTRANSITION succeeds in all but 2 cases
• then use FINDPATH
195. Example: Makefile
• check for relaxed soundness
• for each transition:
  1. create the manipulated net
  2. generate an analysis task for STATEPREDICATE ("FORMULA (pt = 1 AND o = 1)")
  3. call LoLA
  4. evaluate the exit code
• use a Makefile to collect the results
• benefit: parallel execution
196. You will learn how
• to choose and manage LoLA configurations ✔
• to ask the right verification questions ✔
• to optimally model a Petri net ✔
• to employ scripts, makefiles, etc. ✔
• to call LoLA from another tool
197. Integrating LoLA into Wendy
• Wendy: a tool to synthesize partners for services
• the algorithm needs a lot of small state spaces
• before: calculate them on the fly
• now: calculate one big one in advance and preprocess; helps to avoid "bad" states
• tool of choice for this: LoLA (lola-full)
• benefits:
  • modularity
  • get Tarjan numbers for free
  • interprocess concurrency
198. Integrating LoLA
• integration is easy when using C:
const char *c = "lola-full tempfile.lola -M";
FILE *pipe = popen(c, "r");
parse_pipe();
pclose(pipe);
• UNIX streams allow parallel generation and parsing of
the state space
199. You will learn how
• to choose and manage LoLA configurations ✔
• to ask the right verification questions ✔
• to optimally model a Petri net ✔
• to employ scripts, makefiles, etc. ✔
• to call LoLA from another tool ✔
202. Reaction chains
• Domain: symbolic system biology
• “Symbolic systems biology is the
qualitative and quantitative study of
biological processes as integrated
systems rather than as isolated parts.”
• Property: reachability
205. Reaction chains
• “For reachability queries on our nets,
answering a reachability query that would
have taken hours using a general purpose
model-checking tool takes on the order of
a second in LoLA — fast enough to permit
interactive use.”
207. GALS circuits
• Domain: asynchronous/
synchronous hardware design
• prototype for IEEE-802.11 chip
• asynchronous hardware is not
clocked - order/timing of events
makes a difference
• problem: glitch
208. Glitch
(diagram: an AND gate with inputs a, b and output c; initially P(a) = 1,
P(b) = 0, P(c) = 0; the timing traces show P(b) rising 0 → 1 while P(a)
falls 1 → 0 within a small window ΔT; if b rises before a falls, c briefly
goes to 1 and back to 0: a hazard)
261. Soundness
• 735 real-world business processes
from IBM customers
• original formalism: UML dialect
from the IBM Websphere Business
Modeler
• translation: compiler UML2oWFN
• original question: can soundness
be verified using model checking
techniques
263. Soundness
• "IBM Soundness" = absence of
  • lack of synchronization (= unsafe marking)
  • deadlock (= deadlock)
  • + certain assumptions on the structure
• for LoLA: two checks
  • Is the final marking live?
  • Is the net safe?
264. Soundness
(diagram: both checks are always performed for each SESE fragment; the
choice of technique depends on the SESE fragment and on the net structure)
265. Soundness
• execution scheduled and optimized using Makefiles
• max. 50 ms per check
• "analysis on demand"
• observed effect: structural reduction techniques do not pay off when using stubborn sets
267. Concurrent Programs
• concurrent processes
• shared and global variables
• goal: find a small-model property to make a statement on the correctness of an arbitrary number of instances
268. Concurrent Programs
• the problem can be solved by checking for reachable states in a coverability graph
• challenge: number of places = number of states of a process
• concurrency only through tokens
• it took a while to beat LoLA
270. AI Planning
• setting: smart conference room
• several projectors, canvases, documents,
and lamps
• AI planning problem: Configure the room to
display document A on that canvas.
• original formalism: proprietary
planning language; manually translated
271. AI Planning
• straightforward translation to a state predicate
Goals:                        FORMULA
( LightOn 1 Lamp1 );          LightOn.<Lamp1|TRUE> = 1 AND
( LightOn 1 Lamp2 );          LightOn.<Lamp2|TRUE> = 1 AND
( DocShown 1 Doc1 LW3 );      DocShown.<Doc1|LW3|TRUE> = 1 AND
( DocShown 1 Doc2 LW1 );      DocShown.<Doc2|LW1|TRUE> = 1 AND
( CanvasDown 1 VD1 );         CanvasDown.<VD1|TRUE> = 1
• the system is extremely concurrent
• depth-first search actually finds the shortest path
291. Plan
• Firing a transition
• Evaluating a state predicate
• Managing the state space
• Organizing search
• Detecting strongly connected components
292. Firing transitions
A marking is changed via lists of pre- and post-places;
the effort does not depend on the size of the net.
After firing, only some transitions are re-checked for enabledness:
- previously enabled transitions that lost tokens
- previously disabled transitions that gained tokens
... managed through explicitly stored lists.
293. Checking state predicates
• predicate = boolean combination of atomic propositions p {<, >, =, ≤, ≥, ≠} k
• stored in negation-free normal form
(diagram: the formula tree of φ)
294. Managing the state space
1st state = bit vector
other states = bit vector + decision record
295. Managing the state space
find/insert a marking: one integrated process
- dive down into the decision tree
- on mismatch:
  - at a decision point: switch to the next vector
  - at the end: found, no insert
  - between decision points: insert at the point of mismatch
decision records form a tree
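The integrated find/insert can be sketched as a walk over a prefix trie of fixed-length marking vectors; this ignores LoLA's bit-level decision records but shows the single-pass idea (dive down, and on the first mismatch graft the remaining suffix):

```python
def find_or_insert(trie, v):
    """Find marking vector v in the trie, inserting it if absent.

    trie: nested dicts keyed by token counts (an illustrative stand-in
    for LoLA's decision-record tree); v: a fixed-length tuple.
    Returns True if v was already present, False if it was inserted.
    """
    node = trie
    for i, x in enumerate(v):
        if x not in node:
            # mismatch: graft the remaining suffix as a new branch
            for y in v[i:]:
                node[y] = {}
                node = node[y]
            return False          # newly inserted
        node = node[x]            # shared prefix: keep diving
    return True                   # reached the end: found, no insert
```

Because all vectors have the same length, reaching the end of v implies an exact match; shared prefixes are stored only once, which is the space saving the decision records aim at.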
296. Organizing search
General remarks: search consists of
- firing transitions ✔
- find/insert marking ✔
- backtracking: fire a transition backwards
only "constant" time per step
the search stack consists of a reference to a transition + the list of enabled transitions
the state space is "write-only" memory
297. Organizing search
b) Depth-first search: ability to detect SCCs
c) Breadth-first search: simulated by bounded depth-first search with an
   incrementally increased bound; the current marking, the list of enabled
   transitions, etc. are updated through the sequence of transition occurrences
301. Stubborn Sets
• Crucial: core principle
• Simple method:
  - if t is enabled, add the conflicting transitions
  - if t is disabled, add the pre-transitions of some unmarked pre-place
(diagram: for a place, the pre-transitions must be included; for a transition,
the conflicting ones; updated at the enabledness check)
302. The sweep-line method
• constant change: successors lie in a small window of progress values
303. Calculation of Symmetries
(diagram: partition refinement on the net graph with classes A1, ..., An and B1, ..., Bn)
A1 ∪ ... ∪ An = V
B1 ∪ ... ∪ Bn = V
σ satisfies the constraint C iff σ(Ai) = Bi (for all i)
306. Abstract Permutation: Examples
(diagram: the constraint [P|T] → [P|T] describes all permutations that respect
the node type; the constraint {p1}, ..., {pi-1}, {pi}, others → {p1}, ...,
{pi-1}, {pk}, others describes the elements of some orbit wrt. Ui in Ui-1)
New problem: given an abstract permutation C, compute an automorphism that satisfies C
... equivalent to graph isomorphism
308. REFINE
Choose classes A-B, A'-B' and an arc multiplicity c;
split A and B by the number of c-neighbors in A' and B', respectively.
(diagram: the nodes of A and B annotated with their c-neighbor counts in A'/B')
Editor's Notes
- Problem here: ΔT becomes arbitrarily small
- Level places -> trivial idea; edge places -> D. Gomm