The document discusses an approach called semi-purification for automatically generating unit test cases from source code. Semi-purification replaces dependencies like global variables and database calls in the source code with function parameters. This allows existing automated test case generation tools to be used by treating the semi-purified code as if it were pure. Challenges discussed include handling shared subroutines, loops, and concurrency. The goal is to increase test coverage for complex, distributed systems with frequent changes like those used at CERN.
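The core transformation is easy to picture. Below is a minimal Python sketch of the idea, with all names invented for illustration (the paper targets real dependencies such as database calls, not this toy):

```python
# Toy illustration of semi-purification: a hidden dependency on a
# global (it could equally be a database read) becomes an explicit
# parameter, so a test generator can drive it like any other input.
# All names here are invented for the example.

TAX_RATE = 0.2  # module-level global the original routine silently depends on

def net_price(gross):
    # original: impure, reads module state
    return gross * (1.0 - TAX_RATE)

def net_price_sp(gross, tax_rate):
    # semi-purified: the dependency is now a parameter
    return gross * (1.0 - tax_rate)

# An automated tool can now generate values for tax_rate directly:
assert net_price_sp(100.0, TAX_RATE) == net_price(100.0)
```

Because the semi-purified version depends only on its arguments, any existing input-generation tool can exercise it as if it were pure.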
ANALYSIS OF POSSIBILITY OF GROWTH OF SEVERAL EPITAXIAL LAYERS SIMULTANEOUSLY ... (ijoejournal)
We analyzed a nonlinear model, with coefficients varying in space and time, of the growth of epitaxial layers from the gas phase in a vertical reactor, taking natural convection into account. We formulate several conditions for increasing the homogeneity of epitaxial layers by varying the parameters of the technological process.
Development of a low order stabilised Petrov-Galerkin formulation for a mixed... (Chun Hean Lee)
This document outlines a presentation on the development of a low-order stabilised Petrov-Galerkin formulation for a mixed conservation law formulation in fast solid dynamics. It discusses the motivation for using such a formulation, which expresses solid dynamics as first-order conservation laws to take advantage of computational fluid dynamics discretization techniques. It then outlines the reversible elastodynamics governing equations, the Petrov-Galerkin spatial and temporal discretization, and a method for conserving angular momentum through a Lagrange multiplier correction procedure. Numerical results and conclusions are also briefly mentioned.
The document discusses fitting a preferential attachment model to the edge distribution of a web host graph. It finds that a Buckley-Osthus preferential attachment model with an initial attractiveness parameter (a) of approximately 0.2 accurately approximates both the degree distribution and edge distribution of the web host graph. This captures the assortativity as well. Other random graph models that produce power-law degree distributions, like the configuration model and Chung-Lu model, do not similarly capture the edge distribution of the real web graph.
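The attachment mechanism itself is simple to simulate. The toy sketch below uses degree-proportional attachment with an initial-attractiveness term, assuming the simplest variant (one edge per new node; the actual Buckley-Osthus model is richer):

```python
import random

def buckley_osthus_like(n, a=0.2, seed=0):
    """Grow a toy preferential-attachment tree: each new node links to an
    existing node with probability proportional to degree + a, where a is
    the initial-attractiveness parameter (the fit reported is a ~ 0.2).
    One edge per new node; the real Buckley-Osthus model is richer."""
    rng = random.Random(seed)
    deg = [1, 1]                 # start from the single edge 0-1
    edges = [(0, 1)]
    for v in range(2, n):
        weights = [d + a for d in deg]
        r = rng.random() * sum(weights)
        target, acc = len(deg) - 1, 0.0
        for u, w in enumerate(weights):
            acc += w
            if r <= acc:
                target = u
                break
        edges.append((v, target))
        deg[target] += 1
        deg.append(1)
    return edges, deg

edges, deg = buckley_osthus_like(1000, a=0.2)
```

Small a makes attachment nearly degree-proportional, producing the heavy-tailed degree distribution the abstract describes; larger a flattens it.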
Stationary Incompressible Viscous Flow Analysis by a Domain Decomposition Method (ADVENTURE Project)
This document describes an iterative domain decomposition method for analyzing large-scale stationary incompressible viscous flow problems using finite element analysis. The method decomposes the domain into subdomains and solves the inner degrees of freedom using a skyline solver. Interface degrees of freedom are solved using preconditioned BiCGSTAB or GPBiCG iterative solvers. Numerical examples are provided to demonstrate the method on problems with over 1 million degrees of freedom and compare results to a monolithic finite element method solver.
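To illustrate what the interface-level iterative solve looks like, here is a hedged stand-in: SciPy's BiCGSTAB applied to a small sparse symmetric positive-definite system. The actual interface problem is a Schur-complement system assembled from subdomain solves, which this sketch does not reproduce:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Stand-in for the interface solve: BiCGSTAB on a small sparse SPD
# system (a 1-D Laplacian). The paper's method applies the iteration
# to the Schur-complement system coupling the subdomains.
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = bicgstab(A, b)
assert info == 0                      # 0 means the iteration converged
residual = np.linalg.norm(A @ x - b)
```

In the domain decomposition setting, each matrix-vector product with the interface operator triggers direct (skyline) solves on the subdomains, so the iteration count on the interface dominates the cost.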
Hot Stand-By Disaster Recovery Solutions for Ensuring the Resilience of Railw... (SERENEWorkshop)
Specifications of modern railway control systems often include resilience requirements, so that the system can quickly and safely recover from disasters (e.g. system-level failures). To that aim, spatial redundancy is required, with main and backup systems installed in fully isolated buildings, together with very short switchover times from main to backup systems in case of disasters. In order to fulfil those requirements, Ansaldo STS has developed a system-level hot stand-by solution that quickly and smoothly switches from the main system to the backup one, ensuring the necessary continuity of service and transparency to train supervisors and other operators. The functional architecture of this solution keeps aligned the safety-critical nuclei, typically based on N-modular redundancy (i.e. ‘KooM’ voting), of the main and backup systems. This coherent alignment must be maintained in terms of both interfaced field devices (e.g. interlocking signals, track circuits, switch points, etc.) at the ‘bottom’ level, and control-room Human Machine Interfaces (HMI) at the ‘top’ level. The solution is based on heterogeneous and redundant network links (copper/fibre Ethernet/HyperRing) at different levels of the system architecture. In this talk, the reference architecture and the fault-tolerance functionalities for disaster recovery are presented, considering the requirements of real railway and mass-transit installations.
SERENE 2015
Davide Scaramuzza
Abstract: As drones become more and more popular, safety is a major concern. A critical situation occurs when a drone temporarily loses its GPS position information, which might lead it to crash. This can happen, for instance, when flying close to buildings where the GPS signal is lost. In such situations, it is desirable that the drone can rely on fall-back systems and regain stable flight as soon as possible. In this talk, I will present novel methods to automatically recover and stabilize a quadrotor from any initial condition, or to execute an emergency landing. On the one hand, this new technology allows quadrotors to be launched by simply tossing them in the air, like a baseball. On the other hand, it allows them to recover into stable flight, or land in a safe area, after a system failure. Since this technology does not rely on any external infrastructure such as GPS, it enables the safe use of drones in both indoor and outdoor environments. Thus, it is relevant for commercial uses of drones, such as parcel delivery.
Recent videos:
Automatic failure recovery without GPS: https://youtu.be/pGU1s6Y55JI
Autonomous Landing-site detection and landing: https://youtu.be/phaBKFwfcJ4
SERENE 2014 Workshop: Paper "Simulation Testing and Model Checking: A Case St..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 3: Verification and Validation
Paper 2: Simulation Testing and Model Checking: A Case Study Comparing these Approaches
SERENE 2014 Workshop: Paper "Using Instrumentation for Quality Assessment of ..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 4: Monitoring
Paper 1: Using Instrumentation for Quality Assessment of Resilient Software in Embedded Systems
This document presents research on assessing risk to determine the optimal level of redundancy needed when moving critical applications to the cloud. It develops fault tree models based on the physical structure of clouds to calculate failure frequency, factoring in varying resource quality and the costs of downtime and VMs. The results show that deploying four to six redundant VMs provides significant availability gains and reduces total costs by lowering risk, compared to basic redundancy approaches. This meets the stated aim of leveraging cloud features in the modelling to support high-value, mission-critical applications on public clouds.
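The basic availability arithmetic behind such results can be sketched as follows. This is a deliberate simplification that treats VM failures as independent, whereas the paper's fault-tree models also capture shared physical infrastructure and resource quality:

```python
def system_availability(a_vm, n):
    """Availability of n redundant VMs, assuming independent failures
    (the system is up if at least one replica is up). A deliberate
    simplification of the paper's fault-tree models."""
    return 1.0 - (1.0 - a_vm) ** n

single = system_availability(0.99, 1)   # 0.99
four = system_availability(0.99, 4)     # ~0.99999999: most of the gain
six = system_availability(0.99, 6)      # further replicas buy very little
```

The diminishing returns visible here are consistent with the paper's finding that a handful of replicas captures most of the availability benefit, after which added VM cost outweighs the reduced downtime risk.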
Biological Immunity and Software Resilience: Two Faces of the Same Coin? (SERENEWorkshop)
The document discusses the similarities between biological immunity and software resilience. It proposes that biological systems are resilient, with the immune system being a prime example due to its ability to adapt, make decisions through distributed agents, and defend the body through learning. An actor-based model is presented as a way to engineer resilience into software by drawing inspiration from immune system principles like replication, containment, and delegation. A bio-inspired architecture is described that uses supervisor actors to detect changes and spawn helper/killer actors to address issues while maintaining system function. Future work areas are identified like automatic failure recognition, dynamic learning, and multi-layer management of failures.
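A minimal sketch of the supervision pattern described above, with plain Python objects standing in for actors (all names are invented; the paper's actor model is concurrent and message-based, which this toy is not):

```python
class Supervisor:
    """Toy supervision pattern: run a job in a worker; on failure,
    contain the error and spawn a fresh replica, up to a restart
    budget. Plain objects stand in for concurrent actors."""
    def __init__(self, make_worker, max_restarts=3):
        self.make_worker = make_worker
        self.max_restarts = max_restarts

    def run(self, job):
        for _ in range(self.max_restarts + 1):
            worker = self.make_worker()        # replication
            try:
                return worker(job)
            except Exception:
                continue                       # containment: retry with a fresh replica
        raise RuntimeError("job failed after restarts")

class FlakyWorker:
    """Fails on its first two invocations, then succeeds."""
    calls = 0
    def __call__(self, job):
        FlakyWorker.calls += 1
        if FlakyWorker.calls < 3:
            raise RuntimeError("transient fault")
        return job.upper()

result = Supervisor(FlakyWorker).run("recovered")
```

The immune-system analogy maps onto the three moves in the sketch: replication (fresh worker per attempt), containment (the exception never escapes the supervisor), and delegation (the caller only talks to the supervisor).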
Engineering Cross-Layer Fault Tolerance in Many-Core Systems (SERENEWorkshop)
1) The document discusses engineering cross-layer fault tolerance in many-core systems. It proposes a cross-layer approach where fault tolerance is distributed across system layers rather than handled solely within a single layer.
2) A motivating example of cross-layer fault tolerance is discussed using TCP/IP, where errors can be detected and recovered across multiple layers for improved efficiency.
3) The challenges of ensuring cross-layer fault tolerance for many-core systems containing tens to thousands of cores are discussed to improve reliability, performance and energy efficiency.
4) The plan is to implement a case study of a car number plate recognition application to gain experience with cross-layer fault tolerance, and to apply order graphs to model performance.
This document summarizes a presentation on system-level concurrent error detection. It discusses specifying reliability constraints in system specifications, design methodologies that provide error detection capabilities through redundancy, and a two-level hardware/software partitioning approach that first considers traditional costs and then analyzes reliability constraints. The goal is to adopt design for reliability approaches earlier in the system design process to significantly impact costs like timing, energy and area.
SERENE 2014 School: Measurement-Driven Resilience Design of Cloud-Based Cyber... (SERENEWorkshop)
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Measurement-Driven Resilience Design of Cloud-Based Cyber-Physical Systems, by Imre Kocsis
SERENE 2014 School: Resilience in Cyber-Physical Systems: Challenges and Oppo... (SERENEWorkshop)
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Resilience in Cyber-Physical Systems: Challenges and Opportunities, by Gabor Karsai
RADIANCE is co-located with the 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (www.dsn.org), and it will take place in Toulouse, France, June 28 - July 1st, 2016.
RADIANCE is supported by the CECRIS FP7 project (CErtification of CRItical Systems) and the DEVASSES FP7 project (DEsign, Verification and VAlidation of large scale, dynamic Service SystEmS).
Critical systems are becoming more and more complex and heterogeneous, integrating previously separated systems and including design solutions ranging from the introduction of software Off The Shelf (OTS) to the adoption of loosely-integrated and composable services. Innovative dependability assessment solutions and certification processes are thus needed to deal with such complexity: efficient, automated, and possibly continuous assessment and certification.
The RADIANCE workshop aims to discuss novel dependability assessment approaches for complex systems and to promote their adoption in real-world systems through industrial and academic research. It is intended as a forum where researchers can share both real problems and innovative solutions for the assessment of complex systems.
Topics include, but are not limited to:
Assessment of integrated systems including software OTS
Agile development in critical systems: assessment challenges and approaches
Natural language requirements for software development
Software assessment to cope with increasing system complexity
Certification of complex and integrated systems
Dynamic and evolving systems: new needs for verification, validation and certification
Automated verification and validation of critical systems
Model-driven approaches for the assessment of dependable and secure systems
Experimental assessment of dependability and security
Dependable and secure services
Open issues, practical experiences and real-world case studies
Important Dates:
Submission deadline: March 21, 2016
Author notification: April 18, 2016
Camera-ready: TBD
Workshop: June 28th, 2016
SERENE 2014 Workshop: Paper "Adaptive Domain-Specific Service Monitoring" (SERENEWorkshop)
- An adaptive service monitoring approach that considers domain-specific errors, such as codec errors for streamed media, in addition to generic errors.
- The approach adapts the monitoring frequency for a particular service and error type based on the historical error rate to reduce monitoring costs.
- An evaluation using real-world data from Smart TV services found that the adaptive approach reduced monitoring costs by 30% with negligible impact on error detection quality.
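One way such an adaptation rule might look, as a hedged sketch (the formula and constants below are assumptions chosen for illustration, not the authors' actual rule):

```python
def next_interval(base_s, error_rate, min_s=5.0, max_s=300.0):
    """Hypothetical adaptation rule in the spirit of the paper: poll a
    service more often when its historical error rate for a given error
    type is high, and back off when it has been healthy. The concrete
    formula and constants are assumptions for illustration."""
    interval = base_s / (1.0 + 10.0 * error_rate)
    return max(min_s, min(max_s, interval))

# A healthy codec check backs off; a flaky one is polled more often.
healthy = next_interval(60.0, 0.0)    # stays at the 60 s base interval
flaky = next_interval(60.0, 0.5)      # polled every 10 s instead
```

Backing off on healthy service/error-type pairs is where the cost saving comes from; the clamp to a maximum interval bounds the worst-case detection delay.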
SERENE 2014 Workshop: Paper "Advanced Modelling, Simulation and Verification ..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 3: Verification and Validation
Paper 3: Advanced Modelling, Simulation and Verification for Future Traffic Regulation Optimisation
SERENE 2014 Workshop: Paper "Combined Error Propagation Analysis and Runtime ..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 4: Monitoring
Paper 3: Combined Error Propagation Analysis and Runtime Event Detection in Process-driven Systems
SERENE 2014 Workshop: Paper "Verification and Validation of a Pressure Contro..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 3: Verification and Validation
Paper 1: Verification and Validation of a Pressure Control Unit for Hydraulic Systems
SERENE 2014 Workshop: Panel on "Views on Runtime Resilience Assessment of Dyn..." (SERENEWorkshop)
The document summarizes a panel discussion on views of runtime resilience assessment of dynamic software systems held at SERENE 2014 in Budapest, Hungary. The panelists represented different domains related to resilience assessment, software engineering, dynamic systems design, and dependable computing. They discussed key challenges around metrics for characterizing resilience, defining dynamic workloads and changeloads, monitoring unbounded and dynamic systems, maintaining accurate runtime models, and standardizing resilience assessment techniques. The panelists emphasized the need for predictive monitoring and adaptation, rather than just detection, to ensure resilience in increasingly complex and evolving software systems.
SERENE 2014 Workshop: Paper "Formal Fault Tolerance Analysis of Algorithms fo..." (SERENEWorkshop)
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 2: Analysis of Resilience
Paper: Formal Fault Tolerance Analysis of Algorithms for Redundant Systems in Early Design Stages
This presentation gives an overview of the main concepts introduced at the EDBT2015 Summer School, which took place in Palamos. For each area, we summarize the main issues and current approaches, and we describe the challenges and main activities undertaken at the summer school.
This document summarizes an academic lecture on convolutional neural network architectures. It begins with an overview of common CNN components like convolution layers, pooling layers, and normalization techniques. It then reviews the architectures of seminal CNN models including AlexNet, VGG, GoogLeNet, ResNet and others. As an example, it walks through the architecture of AlexNet in detail, explaining the parameters and output sizes of each layer. The document provides a high-level history of architectural innovations that drove improved performance on the ImageNet challenge.
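The layer-size bookkeeping that such a walkthrough relies on follows one formula, W' = (W - F + 2P)/S + 1. A quick check against AlexNet's well-known opening layers:

```python
def conv_out(w, f, s=1, p=0):
    """Spatial output size of a conv or pool layer: (W - F + 2P) // S + 1."""
    return (w - f + 2 * p) // s + 1

# AlexNet's opening layers, the lecture's style of worked example:
conv1 = conv_out(227, 11, s=4)          # 11x11 filters, stride 4 -> 55
pool1 = conv_out(conv1, 3, s=2)         # 3x3 max pool, stride 2 -> 27
conv1_params = (11 * 11 * 3 + 1) * 96   # weights + bias per filter, 96 filters
```

The same arithmetic, applied layer by layer, yields the parameter and activation budgets that distinguish architectures like VGG (many small filters) from AlexNet (few large ones).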
This document proposes an approach to improve geographic information (GI) interoperability through emergent semantics. It describes using structure preserving semantic matching (SPSM) to find correspondences between semantically related nodes in graph-like representations (e.g. schemas, ontologies) while preserving structural properties. An example matching geo-services requests is provided. Evaluation on synthesized datasets showed average precision and recall of 0.78, demonstrating the potential of the approach. Future work will include extensive evaluation and extending the approach to fully developed spatial data infrastructure ontologies.
Localized methods for diffusions in large graphs (David Gleich)
I describe a few ongoing research projects on diffusions in large graphs, and the efficient matrix computations we use to determine them.
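As a concrete example of such a diffusion, here is personalized PageRank written as a plain dense matrix iteration; this sketch is illustrative and is not the talk's algorithm, whose localized "push" methods approximate the same vector while touching only the part of the graph near the seed:

```python
import numpy as np

def ppr(A, seed, alpha=0.85, iters=200):
    """Personalized PageRank via the fixed-point iteration
    x = (1 - alpha) * s + alpha * P^T x, where s is the seed
    indicator and P the row-stochastic transition matrix."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transitions
    s = np.zeros(n)
    s[seed] = 1.0
    x = s.copy()
    for _ in range(iters):
        x = (1.0 - alpha) * s + alpha * (P.T @ x)
    return x

# Path graph 0-1-2, seeded at node 0: mass concentrates near the seed.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = ppr(A, seed=0)
```

On a large graph, the dense iteration above is wasteful precisely because the solution vector is localized: most entries are near zero, which is what push-style methods exploit.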
Many biological systems exhibit heterogeneity at the population level. This heterogeneity can be captured by describing, as a partial differential equation, the temporal evolution of the probability that an individual in the population is in a certain state. To tune the parameters of such a partial differential equation to experimental data, a partial-differential-equation-constrained optimisation problem has to be solved. Hence, for biological systems with a large number of states, a high-dimensional partial differential equation has to be solved. This can easily render the optimisation problem intractable, as there are no well-established, efficient integration schemes for high-dimensional partial differential equations. In this talk we present techniques to translate the partial-differential-equation-constrained optimisation problem into a hierarchical, ordinary-differential-equation-constrained optimisation problem, under a certain set of assumptions. We present these assumptions as well as the derivation of the hierarchical problem, together with numerical schemes for computing the objective function and its gradient. Finally, we present numerical schemes to solve the constrained optimisation problem and apply these techniques to small- and large-scale biological applications for which experimental data are available.
This document provides an overview of graph edit distance, including its definition, history, and algorithms. It begins by defining an edit path as a sequence of node/edge insertions, deletions, and substitutions that transforms one graph into another. The graph edit distance is the cost of the lowest cost edit path. It describes tree search algorithms used to explore the space of possible edit paths efficiently. It also explains how edit paths can be modeled as assignment problems that are solved using techniques like the Hungarian algorithm to find approximations of the graph edit distance.
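The assignment-based approximation can be sketched concretely. Below is a hedged, node-labels-only version (real variants also fold local edge structure into the costs), using SciPy's Hungarian-algorithm solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approx_ged(g_labels, h_labels, c_sub=1.0, c_ins=1.0, c_del=1.0):
    """Bipartite approximation of graph edit distance on node labels
    only (edge costs omitted for brevity): build the square
    (n+m) x (n+m) cost matrix of substitutions, deletions and
    insertions, then solve it with the Hungarian algorithm."""
    n, m = len(g_labels), len(h_labels)
    big = 1e9                      # forbids invalid pairings
    C = np.zeros((n + m, n + m))
    for i in range(n):             # substitution block: g_i -> h_j
        for j in range(m):
            C[i, j] = 0.0 if g_labels[i] == h_labels[j] else c_sub
    C[:n, m:] = big                # deletion block: g_i -> epsilon
    for i in range(n):
        C[i, m + i] = c_del
    C[n:, :m] = big                # insertion block: epsilon -> h_j
    for j in range(m):
        C[n + j, j] = c_ins
    rows, cols = linear_sum_assignment(C)   # epsilon-epsilon block stays 0
    return C[rows, cols].sum()
```

Because the assignment ignores how edits interact through edges, the result is an approximation (typically an overestimate in edge-aware variants) rather than the exact graph edit distance, which is what makes it tractable on larger graphs.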
This document presents research on assessing risk to determine the optimal level of redundancy needed when moving critical applications to the cloud. It develops fault tree models based on the physical structure of clouds to calculate failure frequency. It factors in varying resource quality and the costs of downtime and VMs. The results show that deploying between 4-6 redundant VMs provides significant availability gains and reduces total costs by lowering risk compared to basic redundancy approaches. The aim was met of leveraging cloud features in modeling to support high-value, mission critical applications on public clouds.
Biological Immunity and Software Resilience: Two Faces of the Same Coin?SERENEWorkshop
The document discusses the similarities between biological immunity and software resilience. It proposes that biological systems are resilient, with the immune system being a prime example due to its ability to adapt, make decisions through distributed agents, and defend the body through learning. An actor-based model is presented as a way to engineer resilience into software by drawing inspiration from immune system principles like replication, containment, and delegation. A bio-inspired architecture is described that uses supervisor actors to detect changes and spawn helper/killer actors to address issues while maintaining system function. Future work areas are identified like automatic failure recognition, dynamic learning, and multi-layer management of failures.
Engineering Cross-Layer Fault Tolerance in Many-Core SystemsSERENEWorkshop
1) The document discusses engineering cross-layer fault tolerance in many-core systems. It proposes a cross-layer approach where fault tolerance is distributed across system layers rather than handled solely within a single layer.
2) A motivating example of cross-layer fault tolerance is discussed using TCP/IP, where errors can be detected and recovered across multiple layers for improved efficiency.
3) The challenges of ensuring cross-layer fault tolerance for many-core systems containing tens to thousands of cores are discussed to improve reliability, performance and energy efficiency.
4) The plan is to implement a case study of a car number plate recognition application to gain experience with cross-layer fault tolerance, and apply order graphs to model performance
This document summarizes a presentation on system-level concurrent error detection. It discusses specifying reliability constraints in system specifications, design methodologies that provide error detection capabilities through redundancy, and a two-level hardware/software partitioning approach that first considers traditional costs and then analyzes reliability constraints. The goal is to adopt design for reliability approaches earlier in the system design process to significantly impact costs like timing, energy and area.
SERENE 2014 School: Measurement-Driven Resilience Design of Cloud-Based Cyber...SERENEWorkshop
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Measurement-Driven Resilience Design of Cloud-Based Cyber-Physical Systems, by Imre Kocsis
SERENE 2014 School: Resilience in Cyber-Physical Systems: Challenges and Oppo...SERENEWorkshop
SERENE 2014 School on Engineering Resilient Cyber Physical Systems
Talk: Resilience in Cyber-Physical Systems: Challenges and Opportunities, by Gabor Karsai
RADIANCE is co-located with the 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (www.dsn.org), and it will take place in Toulouse, France, June 28 - July 1st, 2016.
CECRIS Logo RADIANCE is supported by the CECRIS FP7 project (CErtification of CRItical Systems) and DEVASSES FP7 Project (DEsign, Verification and VAlidation of large scale, dynamic Service SystEmS).
Critical systems are becoming more and more complex and heterogeneous, integrating previously separated systems and including design solutions ranging from the introduction of software Off The Shelf (OTS) to the adoption of loosely-integrated and composable services. Innovative dependability assessment solutions and certification processes are thus needed to deal with such complexity, calling for new solutions for the efficient, automated, and possibly continuous assessment and certification.
The RADIANCE workshop aims to discuss novel dependability assessment approaches for complex systems and to promote their adoption in real-world systems through industrial and academic research. RADIANCE aims to promote and foster discussion on novel ideas, constituting a forum where researchers can share both real problems and innovative solutions for the assessment of complex systems.
Topics include, but are not limited to:
Assessment of integrated systems including software OTS
Agile development in critical systems: assessment challenges and approaches
Natural language requirements for software development
Software assessment to cope with increasing system complexity
Certification of complex and integrated systems
Dynamic and evolving systems: new needs for verification, validation and certification
Automated verification and validation of critical systems
Model-driven approaches for the assessment of dependable and secure systems
Experimental assessment of dependability and security
Dependable and secure services
Open issues, practical experiences and real-world case studies
Important Dates:
Submission deadline: March 21, 2016
Author notification: April 18, 2016
Camera-ready: TBD
Workshop: June 28th, 2016
SERENE 2014 Workshop: Paper "Adaptive Domain-Specific Service Monitoring"SERENEWorkshop
- An adaptive service monitoring approach that considers domain-specific errors, such as codec errors for streamed media, in addition to generic errors.
- The approach adapts the monitoring frequency for a particular service and error type based on the historical error rate to reduce monitoring costs.
- An evaluation using real-world data from Smart TV services found that the adaptive approach reduced monitoring costs by 30% with negligible impact on error detection quality.
SERENE 2014 Workshop: Paper "Advanced Modelling, Simulation and Verification ...SERENEWorkshop
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 3: Verification and Validation
Paper 3: Advanced Modelling, Simulation and Verification for Future Traffic Regulation Optimisation
SERENE 2014 Workshop: Paper "Combined Error Propagation Analysis and Runtime ...SERENEWorkshop
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 4: Monitoring
Paper 3: Combined Error Propagation Analysis and Runtime Event Detection in Process-driven Systems
SERENE 2014 Workshop: Paper "Verification and Validation of a Pressure Contro...SERENEWorkshop
SERENE 2014 - 6th International Workshop on Software Engineering for Resilient Systems
http://serene.disim.univaq.it/
Session 3: Verification and Validation
Paper 1: Verification and Validation of a Pressure Control Unit for Hydraulic Systems
SERENE 2014 Workshop: Panel on "Views on Runtime Resilience Assessment of Dyn...SERENEWorkshop
The document summarizes a panel discussion on views of runtime resilience assessment of dynamic software systems held at SERENE 2014 in Budapest, Hungary. The panelists represented different domains related to resilience assessment, software engineering, dynamic systems design, and dependable computing. They discussed key challenges around metrics for characterizing resilience, defining dynamic workloads and changeloads, monitoring unbounded and dynamic systems, maintaining accurate runtime models, and standardizing resilience assessment techniques. The panelists emphasized the need for predictive monitoring and adaptation, rather than just detection, to ensure resilience in increasingly complex and evolving software systems.
SERENE 2014 Workshop: Paper "Formal Fault Tolerance Analysis of Algorithms fo...SERENEWorkshop
Session 2: Analysis of Resilience
Paper: Formal Fault Tolerance Analysis of Algorithms for Redundant Systems in Early Design Stages
This presentation gives an overview of the main concepts introduced at the EDBT 2015 Summer School, which took place in Palamós. For each area, we summarize the main issues and current approaches. We also describe the challenges and the main activities that were undertaken at the summer school.
This document summarizes an academic lecture on convolutional neural network architectures. It begins with an overview of common CNN components like convolution layers, pooling layers, and normalization techniques. It then reviews the architectures of seminal CNN models including AlexNet, VGG, GoogLeNet, ResNet and others. As an example, it walks through the architecture of AlexNet in detail, explaining the parameters and output sizes of each layer. The document provides a high-level history of architectural innovations that drove improved performance on the ImageNet challenge.
This document proposes an approach to improve geographic information (GI) interoperability through emergent semantics. It describes using structure preserving semantic matching (SPSM) to find correspondences between semantically related nodes in graph-like representations (e.g. schemas, ontologies) while preserving structural properties. An example matching geo-services requests is provided. Evaluation on synthesized datasets showed average precision and recall of 0.78, demonstrating the potential of the approach. Future work will include extensive evaluation and extending the approach to fully developed spatial data infrastructure ontologies.
Localized methods for diffusions in large graphsDavid Gleich
I describe a few ongoing research projects on diffusions in large graphs and how we can use efficient matrix computations to evaluate them.
Many biological systems exhibit heterogeneity on a population level. This heterogeneity can be captured by describing the temporal evolution of the probability of an individual in the population to be in a certain state as a partial differential equation. To tune the parameters of such a partial differential equation to experimental data, a partial differential equation constrained optimisation problem has to be solved. Hence, for biological systems with a large number of states, a high-dimensional partial differential equation has to be solved. This can easily render the optimisation problem intractable, as there are no well-established, efficient integration schemes for high-dimensional partial differential equations available. In this talk we will present techniques to translate the partial differential equation constrained optimisation problem into a hierarchical, ordinary differential equation constrained optimisation problem given a certain set of assumptions. We will present these assumptions as well as the derivation of the hierarchical, ordinary differential equation constrained optimisation problem. Moreover, we will present numerical schemes for the computation of the respective objective function and its gradient. Eventually, we will also present numerical schemes to solve the constrained optimisation problem and apply these techniques to small and large scale biological applications for which experimental data is available.
This document provides an overview of graph edit distance, including its definition, history, and algorithms. It begins by defining an edit path as a sequence of node/edge insertions, deletions, and substitutions that transforms one graph into another. The graph edit distance is the cost of the lowest cost edit path. It describes tree search algorithms used to explore the space of possible edit paths efficiently. It also explains how edit paths can be modeled as assignment problems that are solved using techniques like the Hungarian algorithm to find approximations of the graph edit distance.
Fosdem 2013 petra selmer flexible querying of graph dataPetra Selmer
These are the slides from a talk I presented at the Graph Processing room at FOSDEM 2013, in which I discussed my PhD topic: a query language allowing for the flexible querying of complex paths within graph structured data
The document discusses using machine learning techniques like Gaussian processes (GPs) to optimize the configuration of software systems. It notes that software performance landscapes are often complex, with non-linear interactions between parameters and non-convex response surfaces. Measurements are also subject to noise. The document introduces an approach called TL4CO that uses multi-task Gaussian processes to model software performance across different versions/deployments, allowing it to leverage data from other versions to improve optimization. This helps address challenges in DevOps where new versions are continuously delivered.
The document discusses analyses of multiple location trials to evaluate genotype performance across environments. It describes factors to consider in determining the number and location of environments for trials, and statistical analyses for multiple experiments. Key analyses covered include the Bartlett test for homogeneity of error variances, analysis of variance models for sites and years, and joint regression analysis to evaluate genotype by environment interactions. Joint regression analysis fits linear regressions between genotype performances in each environment and the mean performance across environments to identify which interactions are linear versus non-linear.
The remote sensing working group has investigated methodology for atmospheric remote sensing retrievals, which are mathematical and computational procedures for inferring the state of the atmosphere from remote sensing observations. Satellite data with fine spatial and temporal
resolution present opportunities to combine information across satellite pixels using spatiotemporal statistical modeling. We present examples of this approach at the process level of a hierarchical model, with a nonlinear radiative transfer model incorporated into the likelihood. In
this framework, we assess the impact of various statistical properties on the relative performance of a multi-pixel retrieval strategy versus an operational one-at-a-time approach. The prospect of adopting the approach is illustrated in the context of estimating atmospheric carbon dioxide concentration with data from NASA's Orbiting Carbon Observatory-2 (OCO-2).
This document discusses finite-difference calculus techniques used to approximate values of functions and derivatives at discrete points in reservoir simulation models. It introduces common finite-difference operators - including forward, backward, central, shift, and average operators - and examines their relationships to derivative operators in Taylor series expansions. Examples are provided to demonstrate calculating finite-difference approximations of first and second derivatives in 1D and 2D. The document also covers solving the Poisson equation and time-independent partial differential equations using finite-difference methods.
Challenges for coupling approaches for classical linear elasticity and bond-b...Patrick Diehl
The document presents coupling approaches for combining classical linear elasticity models with non-local peridynamic models for applications in computational mechanics. It describes three coupling methods - matching displacements (MDCM), matching stresses (MSCM), and variable horizon (VHCM). Numerical examples are presented to compare the accuracy of the three methods on manufactured solutions using cubic and quartic polynomials, demonstrating that all methods converge with refinement but VHCM typically has the lowest error.
The document discusses recursion, which is a programming technique where a method calls itself to fulfill its purpose. It provides examples of recursion in math concepts like factorials and describes how to implement recursion in Java methods. Recursion involves a base case, which terminates the recursive calls, and a recursive case that calls the method again. While recursion and iteration can solve the same problems, recursion may be more elegant for some problems while iteration is more efficient. The document also discusses analyzing the efficiency of recursive algorithms.
Applying reinforcement learning to single and multi-agent economic problemsanucrawfordphd
This document discusses applying reinforcement learning techniques to economic problems. It provides an overview of reinforcement learning and how it can be used to learn optimal policies for problems modeled as Markov decision processes. As an example, it discusses how reinforcement learning can be applied to learn policies for single-agent and multi-agent water storage problems. It also describes some specific reinforcement learning algorithms like fitted Q-iteration that are well-suited for economic problems.
The document proposes a K-Main Routes (KMR) algorithm to summarize spatial network activity. KMR finds a set of k routes that maximize activity coverage on a network. It introduces design decisions like inactive node pruning, Network Voronoi activity assignment, and divide-and-conquer summary path recomputation to improve runtime. The algorithm is evaluated analytically, experimentally on synthetic and real data, and through a case study comparing to geometry-based methods. KMR summarizes network activity for applications like crime analysis and environmental modeling.
The document discusses different machine learning algorithms for instance-based learning. It describes k-nearest neighbor classification which classifies new instances based on the labels of the k closest training examples. It also covers locally weighted regression which approximates the target function based on nearby training data. Radial basis function networks are discussed as another approach using localized kernel functions to provide a global approximation of the target function. Case-based reasoning is presented as using rich symbolic representations of instances and reasoning over retrieved similar past cases to solve new problems.
The document provides an overview of recursive and iterative algorithms. It discusses key differences between recursive and iterative algorithms such as definition, application, termination, usage, code size, and time complexity. Examples of recursive algorithms like recursive sum, factorial, binary search, tower of Hanoi, and permutation generator are presented along with pseudocode. Analysis of recursive algorithms like recursive sum, factorial, binary search, Fibonacci number, and tower of Hanoi is demonstrated to determine their time complexities. The document also discusses iterative algorithms, proving an algorithm's correctness, the brute force approach, and store and reuse methods.
Similar to Considering Execution Environment Resilience: A White-Box Approach (20)
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Accident detection system project report.pdfKamal Acharya
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards; road accidents take place frequently, causing huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. Our project provides an optimum solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With signals from a piezoelectric sensor, a severe accident can be recognized. In this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor immediately detects the signal. Then, with the help of the GSM and GPS modules, the location is sent to the emergency contact. After confirming the location, the necessary action is taken. If the person meets with a small accident or there is no serious threat to anyone's life, the alert message can be terminated by the driver via a switch provided for this purpose, in order to avoid wasting the valuable time of the medical rescue team.
Impartiality as per ISO /IEC 17025:2017 StandardMuhammadJazib15
This document provides basic guidelines for the impartiality requirement of ISO/IEC 17025:2017 and describes in detail how it is met.
Road construction is not as easy as it seems: it includes various steps, starting with design and structure, including consideration of traffic volume. The base layer is then laid by bulldozers and levellers, after which the base surface coating is applied. To give the road a smooth surface with flexibility, asphalt concrete is used. Asphalt requires an aggregate sub-base material layer, and then a base layer, to be put in place first. Asphalt road construction is formulated to support heavy traffic loads and climatic conditions. It is 100% recyclable, saving non-renewable natural resources.
With the advancement of technology, asphalt technology assures a good drainage system, and with its skid resistance it can be used where safety is necessary, such as outside schools.
The largest use of asphalt is in making asphalt concrete for road surfaces. It is widely used in airports around the world: due to its sturdiness and ability to be repaired quickly, it is widely used for runways dedicated to aircraft landing and take-off. Asphalt is normally stored and transported at 150°C (300°F).
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Levelised Cost of Hydrogen (LCOH) Calculator ManualMassimo Talia
The aim of this manual is to explain the methodology behind the Levelised Cost of Hydrogen (LCOH) calculator. Moreover, this manual also demonstrates how the calculator can be used for estimating the expenses associated with hydrogen production in Europe using low-temperature electrolysis, considering different sources of electricity.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We proceed with a brief discussion of IAM, along with some typical misconfigurations and their potential exploits, in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using a hands-on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (e.g. EC2), typically used for service access delegation. The PassRole misconfiguration is then exploited to grant unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation: a role with administrative privileges is created, and a user is allowed to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine energy-efficiency knowledge among consumers. Two separate districts in Bangladesh were selected to conduct the survey of households and showrooms, covering energy use and sellers as well. The survey uses the data to derive regression equations from which energy-efficiency knowledge can be predicted. The data is analyzed and calculated based on five important criteria. The initial target was to find factors that help predict a person's energy-efficiency knowledge. From the survey, it is found that energy-efficiency awareness among the people of the country is very low. Relationships between household energy-use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy-efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy-use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices, and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh: low-education households indicate they primarily save electricity for the environment, while high-education households indicate they are motivated by environmental concerns.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket Management is a stand-alone J2EE program developed using Eclipse Juno. This project contains all the necessary information required for maintaining the supermarket billing system.
The core idea of this project is to minimize paperwork and centralize the data. All communication is handled in a secure manner; in this application, the information is stored on the client itself. For further security, the database is stored in the back-end Oracle database, so no intruders can access it.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency-time app in which a user can search for blood banks as well as registered blood donors around Mumbai. The application also gives its users the opportunity to become registered donors; for this, a user has to submit a donor request from the application itself. If the admin wishes to make the user a registered donor, this can be done after completing some formalities with the organization. A special feature of this application is that the user does not have to register or sign in to search for blood banks and blood donors; it can be done simply by installing the application on a mobile device.
The purpose of this application is to save the user's time when searching for blood of the needed blood group during an emergency.
It is an Android application developed in Java and XML with SQLite database connectivity, and it provides most of the basic functionality required of an emergency-time application. All details of blood banks and blood donors are stored in the SQLite database.
The application allows the user to get all information regarding blood banks and blood donors, such as name, number, address, and blood group, rather than searching different websites and wasting precious time. The application is effective and user friendly.
5. Considering
Execution
Environment
Resilience:
A White-Box
Approach
S.Klikovits,
D. Lawrence,
M. Gonzalez-
Berges,
D. Buchs
Welcome to CERN
LHC, experiments, infrastructure (e.g. power grid)
large-scale, widespread, complex systems
many types of hard- and software
> 100 subsystems,
10,000s of devices,
1,000,000s of parameters
thousands of physicists/engineers/workers
high employee turnover
high reliability and resilience expectations
3 / 21
9. Testing
f(x){
  if GLOBAL_VAR:
    return dbGet(x)
  else:
    return -1
}
f(x)
test_f(){
  dbSet("test", 5)   // prepare
  GLOBAL_VAR = True
  x = f("test")      // act
  assert(x == 5)     // assert
}
Test case for f(x)
13. Semi-purification
replace dependencies with parameters
f(x){
  if GLOBAL_VAR:
    return dbGet(x)
  else:
    return -1
}
A non-pure function
f_sp(x, a, b){
  if a:
    return b
  else:
    return -1
}
Semi-purified f(x)
test_f_sp(){
  x = f_sp("test", True, 5)   // act
  assert(x == 5)              // assert
}
Test case
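As a runnable Python sketch: the global flag and the database read become the parameters `a` and `b`, so the test needs no environment preparation at all and both branches are trivially reachable:

```python
def f_sp(x, a, b):
    # semi-purified f: a replaces GLOBAL_VAR, b replaces dbGet(x)
    if a:
        return b
    return -1

def test_f_sp():
    # no "prepare" step needed: the environment is passed in as values
    x = f_sp("test", True, 5)   # act
    assert x == 5               # assert

test_f_sp()
assert f_sp("test", False, 5) == -1   # the else branch is now directly testable
```

This is what lets an existing automated test case generation tool treat `f_sp` as if it were pure: its behaviour depends only on its arguments.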
14. Semi-purification (cont.)
replace dependencies with parameters
functionA(x){
  a = functionB(x)
  return a
}

functionB(x){
  b = GLOBAL_VAR
  b++
  return b
}
Function with SRC
functionA_sp(x, y){
  a = functionB_sp(x, y)
  return a
}

functionB_sp(x, y){
  b = y
  b++
  return b
}
Semi-purified w. SRC
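A runnable sketch of the shared-routine case: removing `GLOBAL_VAR` from `functionB` adds a parameter `y`, and that parameter must be threaded through every caller in the chain (here `functionA`), which is why shared subroutine calls complicate semi-purification:

```python
def functionB_sp(x, y):
    # y replaces the GLOBAL_VAR read (x was already unused in the original)
    b = y
    b += 1
    return b

def functionA_sp(x, y):
    # the extra parameter is propagated down to the callee
    return functionB_sp(x, y)

assert functionA_sp("ignored", 41) == 42
```

Any other caller of `functionB` would need the same signature change, so the transformation ripples through the call graph.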
16. Semi-purification: Concept
code contains dependencies:
global variables, database values, subroutine calls, other resources
manual way: test doubles (mocks, stubs, fakes, ...) [ME06]
remove dependencies
based on localization [SC03, SK13]
input parameters instead of dependencies
use any ATCG (black- and white-box)
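For contrast, the "manual way" the slide mentions can be sketched with Python's standard test doubles. The module `app` and its attributes are hypothetical stand-ins for code that reads a global flag and calls a database:

```python
import types
from unittest import mock

# hypothetical module under test, holding the environment dependencies
app = types.SimpleNamespace()
app.GLOBAL_VAR = False

def real_db_get(key):
    raise RuntimeError("no database available in tests")

app.db_get = real_db_get

def f(x):
    # non-pure: depends on app.GLOBAL_VAR and app.db_get
    if app.GLOBAL_VAR:
        return app.db_get(x)
    return -1

# manual test doubles: patch the dependencies instead of rewriting the code
with mock.patch.object(app, "GLOBAL_VAR", True), \
     mock.patch.object(app, "db_get", return_value=5):
    assert f("test") == 5
```

Each such test double has to be written by hand per dependency; semi-purification instead rewrites the code once so that any automatic test case generator can drive it through plain parameters.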
27. SP: Technical Challenges (cont.)
Loops
sleepUntilReady(a){   // a = bool
  while a:            // replaces dbGet(notReadyDP)
    sleep(5)          // sleep for 5 seconds
}
A semi-purified loop
Test Cases:
a: False ⇒ loop not executed
a: True ⇒ endless loop
28. SP: Technical Challenges (cont.)
Loops
sleepUntilReady(a){   // a: [bool]
  i = 0
  while a[i]:         // replaces dbGet(notReadyDP)
    sleep(5)          // sleep for 5 seconds
    i++
}
A semi-purified loop
Test Cases:
a: [False] ⇒ loop not executed
a: [True, True, ..., False] ⇒ bounded loop execution
Open questions:
how long should the list be?
how to modify correctly?
test modified code or with threads?
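The list-based variant can be sketched as runnable Python. The repeated `dbGet(notReadyDP)` poll becomes a list consumed one element per iteration, so the test can bound the loop; the sleep interval is shortened and a poll counter is returned here (both additions over the slide's pseudocode) purely to keep the sketch fast and checkable:

```python
import time

def sleep_until_ready_sp(a):
    """a: list of bools, one per poll; replaces repeated dbGet(notReadyDP)."""
    i = 0
    while a[i]:            # was: while dbGet(notReadyDP)
        time.sleep(0.001)  # was: sleep(5); shortened for the test
        i += 1
    return i               # number of polls that saw "not ready"

assert sleep_until_ready_sp([False]) == 0              # loop not executed
assert sleep_until_ready_sp([True, True, False]) == 2  # bounded execution
```

The open questions remain visible in the sketch: the caller must guess a list long enough to terminate, and an all-`True` list still raises `IndexError` rather than looping forever.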
35. References
[ME06] Meszaros, G.: xUnit Test Patterns: Refactoring Test Code, Chapter 23: Test Double Patterns, pp. 521–590. Prentice Hall PTR, Upper Saddle River, NJ, USA (2006)
[SC03] Sward, R. E., Chamillard, A. T.: Re-engineering Global Variables in Ada. In: Proc. 2004 ACM SIGAda International Conference on Ada, pp. 29–34. ACM, New York (2003)
[SK13] Sankaranarayanan, H., Kulkarni, P.: Source-to-Source Refactoring and Elimination of Global Variables in C Programs. In: Journal of Software Engineering and Applications, Vol. 6, No. 5, pp. 264–273 (2013)
36. Considering Execution Environment Resilience:
A White-Box Approach
Stefan Klikovits1,2, David PY Lawrence1,3,
Manuel Gonzalez-Berges2, Didier Buchs1
1Université de Genève, Carouge, Switzerland
2CERN, Geneva, Switzerland
3Honeywell International Sarl., Rolle, Switzerland
Tuesday 8th September, 2015