This document contains a summary of 14 experiments conducted for the Numerical Analysis and Statistical Techniques lab course. Each experiment involves writing a C program to solve a different numerical problem, such as finding the largest of three numbers, solving quadratic equations, numerical integration using trapezoidal and Simpson's rules, and solving differential equations using numerical methods like Runge-Kutta. For each experiment, the document includes the aim, flowchart, C program code, and sample output.
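To illustrate the kind of program such a lab manual contains, here is a minimal C sketch of the composite trapezoidal rule; this is my own example under an assumed integrand, not code from the document.

/* Composite trapezoidal rule: approximate the integral of f over [a, b] with n subintervals. */
#include <stdio.h>

double f(double x) { return x * x; }   /* example integrand */

double trapezoid(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);
    return h * sum;
}

int main(void) {
    /* Integral of x^2 over [0,1] is 1/3; the approximation should be close to 0.3333 */
    printf("approx = %f\n", trapezoid(0.0, 1.0, 100));
    return 0;
}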
This document discusses pointers in C programming. It begins by defining pointers as variables that store memory addresses rather than values, and describes some of their applications. It then explains the basic concepts of how variables are stored in memory with unique addresses. The rest of the document provides examples and explanations of pointer declarations, accessing variables through pointers, pointer arithmetic, passing pointers to functions, and other pointer-related topics in C.
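For context, a minimal C example of the basics listed above (declaration, dereferencing, pointer arithmetic, passing a pointer to a function); an illustrative sketch, not code taken from the document.

/* Pointer basics: declaration, dereference, arithmetic, and pass-by-pointer. */
#include <stdio.h>

void increment(int *p) { *p = *p + 1; }   /* modifies the caller's variable through its address */

int main(void) {
    int a[3] = {10, 20, 30};
    int *p = &a[0];            /* p stores the address of a[0] */
    printf("%d\n", *p);        /* dereference: prints 10 */
    p = p + 1;                 /* pointer arithmetic: p now points to a[1] */
    printf("%d\n", *p);        /* prints 20 */
    increment(p);              /* pass the pointer to a function */
    printf("%d\n", a[1]);      /* prints 21 */
    return 0;
}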
- Six Sigma is a quality methodology that aims for near perfection with 3.4 defects per million opportunities. It was developed by Motorola in 1987.
- Key concepts include the process capability index (Cp), process variation, and specification limits. A Cp of 2.0 or higher is needed to achieve Six Sigma quality (a short calculation sketch follows this list).
- The DMAIC methodology is used for improving existing processes and focuses on defining problems, measuring processes, analyzing causes, improving processes, and controlling future performance. DFSS designs new processes at Six Sigma quality levels using approaches like DMADV.
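A small C sketch of these standard capability calculations, Cp = (USL - LSL) / (6 * sigma) and Cpk = min(USL - mu, mu - LSL) / (3 * sigma); the numeric values below are made up purely for illustration.

/* Process capability indices from specification limits and process statistics. */
#include <stdio.h>

int main(void) {
    double usl = 10.0, lsl = 4.0;    /* hypothetical upper and lower specification limits */
    double mu = 7.0, sigma = 0.5;    /* hypothetical process mean and standard deviation */
    double cp = (usl - lsl) / (6.0 * sigma);
    double upper = (usl - mu) / (3.0 * sigma);
    double lower = (mu - lsl) / (3.0 * sigma);
    double cpk = upper < lower ? upper : lower;
    printf("Cp = %.2f, Cpk = %.2f\n", cp, cpk);   /* Cp >= 2.0 corresponds to the Six Sigma target */
    return 0;
}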
Formal methods help improve the quality and reliability of software by providing proofs of correctness. However, ensuring the correctness of the verification tools that apply these formal methods is itself a much harder problem. A typical way to justify correctness is to provide soundness proofs based on semantic models. For program verifiers these soundness proofs are quite large and complex. In this thesis, we introduce certified reasoning to provide machine-checked proofs of various components of an automated verification system. We develop new certified decision procedures (Omega++) and certified proofs (for compatible sharing) and integrate them with an existing automated verification system (HIP/SLEEK). We show that certified reasoning improves the correctness and expressivity of automated verification without sacrificing performance.
Sixteen (16) simple rules for building robust machine learning models. Invited talk for the AMA call of the Research Data Alliance (RDA) Early Career and Engagement Interest Group (ECEIG).
This document provides an overview of Six Sigma methodology. It discusses that Six Sigma aims to reduce defects to 3.4 per million opportunities by using statistical methods. The Six Sigma methodology uses the DMAIC process which stands for Define, Measure, Analyze, Improve, and Control. It also outlines several statistical tools used in Six Sigma like check sheets, Pareto charts, histograms, scatter diagrams and control charts. Process capability and its measures like Cp, Cpk are also explained. The document provides examples to demonstrate how to calculate these metrics and interpret them.
This document provides an overview of Six Sigma methodology. It discusses that Six Sigma aims to reduce defects to 3.4 per million opportunities by using statistical methods. The Six Sigma methodology uses the DMAIC process which stands for Define, Measure, Analyze, Improve, and Control. It also outlines several statistical tools used in Six Sigma like check sheets, Pareto charts, histograms, scatter diagrams, and control charts. Process capability and its measures like Cp, Cpk are also defined. The document aims to explain the key concepts and tools used in Six Sigma to improve quality and processes.
This document outlines the schedule and content for an advanced econometrics and Stata training course taking place from October 17-26, 2019 in Beijing, China. The course will cover topics including single and multi-regression, hypothesis testing, panel data models, time series models, stochastic frontier analysis, data envelopment analysis, and difference-in-differences. Data envelopment analysis will be the focus of sessions 13 and 14, covering concepts such as efficiency measurement, variable returns to scale, and incorporating environmental variables.
One of the most important, yet often overlooked, aspects of predictive modeling is the transformation of data to create model inputs, better known as feature engineering (FE). This talk will go into the theoretical background behind FE, showing how it leverages existing data to produce better modeling results. It will then detail some important FE techniques that should be in every data scientist’s tool kit.
Exploiting Hierarchy in the Abstraction-Based Verification of Statecharts Usi... - Akos Hajdu
Presentation of our paper at the 14th International Workshop on Formal Engineering approaches to Software Components and Architectures (FESCA 2017). Uppsala, Sweden
This document proposes an objective, comprehensive methodology for overseeing defense acquisition programs and portfolios using quantitative metrics. It summarizes the current state of various technology projects across different directorates and phases of the acquisition cycle. Key metrics are proposed to assess individual project and portfolio performance in terms of technical success, cost, and schedule. Visualizations of performance vectors and matrices are suggested to help managers evaluate programs. The goal is to provide decision-makers with empirical data to optimize resources and ensure technologies deliver promised capabilities to warfighters.
Quality is defined as customers' perception of how well a product or service meets their expectations. There are three types of quality: quality of design, quality of performance, and quality of conformance. Statistical quality control uses statistical techniques to control, improve, and maintain quality. Control charts are used to determine if a process is in or out of control by monitoring for random or assignable variation. Process capability indices like Cp and Cpk compare process variability to specification limits to determine if a process is capable of meeting specifications.
Statistical Process Control (SPC) is a method to maintain good product quality, control costs, and improve processes. It involves collecting data on key quality characteristics (4Ms+1E: Man, Machine, Method, Material, Environment) and analyzing the data using control charts to detect process variations. Control charts establish control limits to monitor a process over time, and identify processes that are unstable or out of control. Process improvement tools like Six Sigma aim to reduce process variations and keep performance stable through methods like DMAIC (Define, Measure, Analyze, Improve, Control). SPC helps detect machine faults, gain production awareness, and achieve good business reputation through consistent quality.
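As a rough illustration of the control limits such charts use, here is a small C sketch computing a center line and mean plus/minus three sigma limits from sample measurements; the data is fabricated, and real SPC charts often estimate sigma from subgroup ranges instead of the sample standard deviation used here.

/* Simplified control-limit calculation: center line and mean +/- 3 sigma limits. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double x[] = {9.8, 10.1, 10.0, 9.9, 10.3, 10.0, 9.7, 10.2};   /* made-up measurements */
    int n = sizeof x / sizeof x[0];
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
    double sigma = sqrt(var / (n - 1));            /* sample standard deviation */
    printf("LCL = %.3f  CL = %.3f  UCL = %.3f\n", mean - 3 * sigma, mean, mean + 3 * sigma);
    return 0;
}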
The document describes a project to reduce inventory discrepancies between an E-book system and SAP system at a factory. It involved defining key metrics, measuring current performance, analyzing causes of discrepancies, designing database functions to compare transactions, and verifying improvements. The project reduced defects per unit by 12.58%, tariff loss by 89.36% (US$47,252), and process cycle time to 3 days, meeting all project goals.
Industrial plant optimization in reduced dimensional spaces - Capstone
This document summarizes an industrial plant optimization lecture given in Toronto. It discusses the history of optimization in oil refining from early adoption in the 1950s to modern real-time optimization (RTO). RTO aims to capture opportunities from changing plant conditions by modeling the plant with engineering equations and optimizing the model in parallel with plant operation. While RTO provides benefits, reconciling measurements, non-linear constraints, and operator acceptance present technical and behavioral challenges. New approaches using projection methods to model plants from historical operating data in reduced dimensional spaces are discussed as alternatives to traditional modeling that may better represent operator preferences and familiarity.
The document provides an overview of six sigma and statistical process control (SPC). It defines variation and explains the importance of understanding and controlling it. The objectives of SPC are outlined, including appreciating variation, understanding normal distribution and different types of process variation. Control charts are introduced as a tool to monitor processes and identify special causes of variation. The importance of objective data use is discussed.
The document discusses three major problems in verification: specifying properties to check, specifying the environment, and computational complexity. It then presents several approaches to addressing these problems, including using coverage metrics tailored to detection ability, sequential equivalence checking to avoid testbenches, and "perspective-based verification" using minimal abstract models focused on specific property classes. This allows verification earlier in design when changes are more tractable and catches bugs before implementation.
This document provides an introduction to business intelligence and data analytics. It discusses key concepts such as data sources, data warehouses, data marts, data mining, and data analytics. It also covers topics like univariate analysis, measures of dispersion, heterogeneity measures, confidence intervals, cross validation, and ROC curves. The document aims to introduce fundamental techniques and metrics used in business intelligence and data mining.
Heuristic design of experiments w meta gradient search - Greg Makowski
Once you have started learning about predictive algorithms, and the basic knowledge discovery in databases process, what is the next level of detail to learn for a consulting project?
* Give examples of the many model training parameters
* Track results in a "model notebook"
* Use a model metric that combines both accuracy and generalization to rank models
* How to strategically search over the model training parameters - use a gradient descent approach
* One way to describe an arbitrarily complex predictive system is by using sensitivity analysis
Software analytics focuses on analyzing and modeling a rich source of software data using well-established data analytics techniques in order to glean actionable insights for improving development practices, productivity, and software quality. However, if care is not taken when analyzing and modeling software data, the predictions and insights that are derived from analytical models may be inaccurate and unreliable. The goal of this hands-on tutorial is to guide participants on how to (1) analyze software data using statistical techniques like correlation analysis, hypothesis testing, effect size analysis, and multiple comparisons, (2) develop accurate, reliable, and reproducible analytical models, (3) interpret the models to uncover relationships and insights, and (4) discuss pitfalls associated with analytical techniques including hands-on examples with real software data. R will be the primary programming language. Code samples will be available in a public GitHub repository. Participants will do exercises via either RStudio or Jupyter Notebook through Binder.
Application of Machine Learning in Agriculture - Aman Vasisht
With the growing trend of machine learning, it is needless to say how machine learning can help reap benefits in agriculture. It will be a boon for farmer welfare.
This document summarizes the analysis of data from a pharmaceutical company to model and predict the output variable (titer) from input variables in a biochemical drug production process. Several statistical models were evaluated including linear regression, random forest, and MARS. The analysis involved developing blackbox models using only controlled input variables, snapshot models using all input variables at each time point, and history models incorporating changes in input variables over time to predict titer values. Model performance was compared using cross-validation.
Keynote: Machine Learning for Design Automation at DAC 2018 - Manish Pandey
Manish Pandey gave a keynote talk on transforming EDA with machine learning and discussed opportunities and challenges. He described how machine learning can be applied across different design abstraction levels from formal verification to silicon engineering. Pandey also discussed using machine learning techniques like reinforcement learning and word embeddings to optimize formal verification, simulation, and mask synthesis. Finally, he outlined challenges with data availability and model development for machine learning in EDA.
Heterogeneous Defect Prediction (ESEC/FSE 2015) - Sung Kim
This document describes a technique called Heterogeneous Defect Prediction (HDP) that aims to perform cross-project defect prediction even when the source and target projects have different sets of metrics (i.e. heterogeneous metrics). HDP first selects informative metrics from the source and target projects, then matches metrics between the projects that have similar distributions. It uses the matched metrics to build a prediction model on the source project and apply it to the target project. The technique is evaluated on multiple public defect datasets and is shown to outperform whole-project defect prediction and other cross-project defect prediction baselines in most cases, demonstrating the potential of HDP to reuse existing defect data more widely.
Credit card fraud is a growing problem that affects card holders around the world. Fraud detection has been an interesting topic in machine learning. Nevertheless, current state-of-the-art credit card fraud detection algorithms fail to include the real costs of credit card fraud as a measure to evaluate algorithms. In this paper a new comparison measure that realistically represents the monetary gains and losses due to fraud detection is proposed. Moreover, using the proposed cost measure, a cost-sensitive method based on Bayes minimum risk is presented. This method is compared with state-of-the-art algorithms and shows improvements of up to 23% measured by cost. The results of this paper are based on real-life transactional data provided by a large European card processing company.
Quality and capability hand out 091123200010 Phpapp01 - jasonhian
The document outlines key concepts in quality management and Six Sigma methodology. It discusses definitions of quality, total quality management (TQM), and Six Sigma. Six Sigma aims to reduce defects through eliminating variation and achieving near zero defect levels. It uses a Define-Measure-Analyze-Improve-Control (DMAIC) methodology. Statistical process control charts and process capability indices are also introduced to measure quality performance. An example of Mumbai's successful lunch delivery system achieving over 5-sigma quality levels is provided.
This document discusses various evaluation metrics for binary and multi-class classification models. It explains that metrics are important for quantifying model performance, tracking progress, and debugging. For binary classifiers, it describes point metrics like accuracy, precision, recall from the confusion matrix. Summary metrics like AUROC and AUPRC are discussed as ways to evaluate models across all thresholds. The tradeoff between precision and recall is illustrated. Class imbalance issues and choosing appropriate metrics are also covered.
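For concreteness, a tiny C sketch of the point metrics named above, computed from confusion-matrix counts; the counts are invented for illustration.

/* Accuracy, precision, and recall from binary confusion-matrix counts. */
#include <stdio.h>

int main(void) {
    double tp = 40, fp = 10, fn = 5, tn = 45;     /* hypothetical counts */
    double accuracy = (tp + tn) / (tp + fp + fn + tn);
    double precision = tp / (tp + fp);
    double recall = tp / (tp + fn);
    printf("accuracy = %.2f, precision = %.2f, recall = %.2f\n", accuracy, precision, recall);
    return 0;
}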
solc-verify: A Modular Verifier for Solidity Smart Contracts - Akos Hajdu
Solc-Verify is a modular verifier for Solidity smart contracts. It allows users to annotate contracts with specification properties like preconditions, postconditions, and invariants in a custom annotation language. The verifier translates the annotated Solidity contract and specifications to Boogie and sends the resulting program to an SMT solver to automatically verify the properties. Solc-Verify aims to provide a practical verification tool that balances soundness, precision, expressiveness, and user-friendliness. It was presented as a way to help find bugs in smart contracts by formally verifying user-specified high-level properties.
Software Verification with Abstraction-Based Methods - Akos Hajdu
This document discusses abstraction-based software verification using Counterexample-Guided Abstraction Refinement (CEGAR). CEGAR iteratively refines an abstract model of the software by analyzing counterexamples generated from model checking the abstract model against a property. The document introduces CEGAR and describes how it uses abstraction, model checking, concretization, and refinement. It also discusses the Theta Verification Framework, a configurable CEGAR framework that can handle different formal models, abstract domains, and refinement strategies.
Similar to Towards Evaluating Size Reduction Techniques for Software Model Checking
A Preliminary Analysis on the Effect of Randomness in a CEGAR Framework - Akos Hajdu
Randomized factors like search strategy and variable naming in a CEGAR verification framework introduce significant variations in output metrics like execution time, refinement iterations, and counterexample length compared to deterministic configurations. Some randomized configurations were able to verify models that deterministic ones could not, showing potential for randomized strategies. Further analysis is needed to better understand how and why randomization influences verification success.
Theta: a Framework for Abstraction Refinement-Based Model Checking - Akos Hajdu
Theta is an open source framework for abstraction refinement-based model checking. It is generic, supporting various formalisms like symbolic transition systems, control flow automata, and timed automata. Theta is also modular, with reusable components that can be combined. It is configurable, allowing different abstraction refinement algorithms and strategies to be used. The goal of Theta is to facilitate the development, evaluation, and combination of abstraction refinement approaches for formal verification.
Exploratory Analysis of the Performance of a Configurable CEGAR Framework - Akos Hajdu
Presentation of our paper at the 24th Minisymposium of the Department of Measurement and Information Systems of the Budapest University of Technology and Economics (Minisy@DMIS 2017). Budapest, Hungary
A Configurable CEGAR Framework with Interpolation-Based Refinements - Akos Hajdu
The document describes a configurable CEGAR (counterexample-guided abstraction refinement) framework that uses different abstraction and refinement strategies. It presents three approaches for the initial abstraction: predicate abstraction, explicit value abstraction, and a combined approach. It then details the model checking, counterexample concretization, and refinement steps. The framework was implemented and evaluated on industrial PLC models, Fischer's mutual exclusion algorithm, and hardware models, demonstrating that the combined approach and Craig interpolation often performed best.
The document describes a new approach for finding optimal trajectories in Petri nets using the CEGAR (counter-example guided abstraction refinement) technique. CEGAR works by analyzing an abstract model of the Petri net and refining the abstraction if a counter-example is found. The new approach assigns costs to transitions to optimize the trajectory and uses a priority queue to search the solution space. An implementation in a Petri net modeling and analysis framework shows the approach can solve trajectory optimization problems like the traveling salesman problem.
Extensions to the CEGAR Approach on Petri Nets - Akos Hajdu
This document summarizes research on extending the CEGAR (counter-example guided abstraction refinement) approach for reachability analysis of Petri nets. The researchers examined the correctness and completeness of the original CEGAR algorithm, improved it by addressing issues and extending the set of decidable problems, and implemented the algorithm in a framework. They evaluated it on several models, showing it can solve problems the original algorithm could not and performs better than alternative saturation-based algorithms in many cases.
New Search Strategies for the Petri Net CEGAR Approach - Akos Hajdu
This document summarizes a talk on new search strategies for the Petri net CEGAR (counter-example guided abstraction refinement) approach. It introduces the CEGAR approach applied to reachability analysis of Petri nets using state equations and abstraction. A new iteration strategy is presented that extends increment constraints by "lending" tokens between T-invariants. Different search strategies like depth-first, breadth-first, and a complex strategy are examined. An evaluation compares the algorithms on models and shows results of the different search strategies.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... - Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx - MAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
ESPP presentation to EU Waste Water Network, 4th June 2024: “EU policies driving nutrient removal and recycling and the revised UWWTD (Urban Waste Water Treatment Directive)”
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... - University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
ESR spectroscopy in liquid food and beverages.pptx - PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed using the spin-trapping technique.
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... - AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study adds to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Immersive Learning That Works: Research Grounding and Paths Forward - Leonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... - Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
This MS Word-generated PowerPoint presentation covers the major details of the micronuclei test: its significance and the assays used to conduct it. The test is used to detect micronuclei formation inside the cells of nearly every multicellular organism. Micronuclei form during chromosomal separation at metaphase.
Towards Evaluating Size Reduction Techniques for Software Model Checking
1. 1
Budapest University of Technology and Economics
Department of Measurement and Information Systems
Towards Evaluating Size Reduction Techniques
for Software Model Checking
Gyula Sallai (1), Ákos Hajdu (1,2), Tamás Tóth (1), Zoltán Micskei (1)
(1) Department of Measurement and Information Systems, Budapest University of Technology and Economics
(2) MTA-BME Lendület Cyber-Physical Systems Research Group, Budapest, Hungary
VPT 2017, Uppsala, Sweden, 29.04.2017.
3. 3
Software model checking
Proving correctness formally
o Problem: state space explosion
[Diagram: the source code is translated into a formal model (CFA) and a formalized property (the focus here is on assertions); the model checker answers Ok or returns a counterexample.]
4. 4
Motivation
Integrated, configurable workflow
o From source code to verification results
o Enhanced by size reduction techniques
• Compiler techniques
• Slicing
o Supported by a verification framework
• Based on abstraction and CEGAR
• Highly configurable
Evaluation
o Impact of size reduction on verification
7. 7
Size reduction techniques
Compiler optimizations
o Constant folding and propagation
o Dead branch elimination
o Function inlining
Constant folding and propagation (before / after):
  int x = 5 * 2;    int x = 10;
  int y = x + 2;    int y = 12;
Function inlining (before / after):
  int add(int x, int y) { return x + y; }
  x = add(y, z);    x = y + z;
Dead branch elimination (before / after):
  x = false;        x = false;
  if (x) {
    ...
  }
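To indicate how such a transformation can be implemented, here is a hypothetical C sketch of constant folding over a tiny expression tree; this is my own example and not the actual implementation used in the presented workflow.

/* Constant folding over a minimal binary expression tree ('+' and '*' only). */
#include <stdio.h>
#include <stdlib.h>

typedef struct Expr { char op; int value; struct Expr *lhs, *rhs; } Expr;  /* op == 0 means constant */

Expr *constant(int v) { Expr *e = malloc(sizeof *e); e->op = 0; e->value = v; e->lhs = e->rhs = NULL; return e; }
Expr *binary(char op, Expr *l, Expr *r) { Expr *e = malloc(sizeof *e); e->op = op; e->value = 0; e->lhs = l; e->rhs = r; return e; }

/* If both operands fold to constants, replace the operation with its computed result. */
Expr *fold(Expr *e) {
    if (!e || e->op == 0) return e;
    e->lhs = fold(e->lhs);
    e->rhs = fold(e->rhs);
    if (e->lhs->op == 0 && e->rhs->op == 0) {
        int result = (e->op == '+') ? e->lhs->value + e->rhs->value
                                    : e->lhs->value * e->rhs->value;
        Expr *c = constant(result);
        free(e->lhs); free(e->rhs); free(e);
        return c;
    }
    return e;
}

int main(void) {
    Expr *e = fold(binary('*', constant(5), constant(2)));   /* 5 * 2 folds to 10 */
    printf("%d\n", e->value);
    free(e);
    return 0;
}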
8. 8
Size reduction techniques
Program slicing
o Slice: a subprogram that produces the same output and assigns the same values to a set of variables
0: int i = 0;
1: int x = 0;
2: while (i < 11) {
3: x = x + i;
4: i = i + 1;
}
5: assert(i != 0);
0: int i = 0;
1: int x = 0;
2: while (i < 11) {
3: x = x + i;
4: i = i + 1;
}
5: assert(i != 0);
Criterion: value of i at statement 5
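For reference, a reconstruction (mine, not taken from the slide) of the backward slice for this criterion: only statements that affect the value of i at statement 5 are kept, so the declaration and update of x are dropped.

/* Backward slice with respect to the value of i at the assertion (illustrative reconstruction). */
#include <assert.h>

int main(void) {
    int i = 0;
    while (i < 11) {
        i = i + 1;
    }
    assert(i != 0);
    return 0;
}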
9. 9
Size reduction techniques
Backward slicing
o Retain all instructions crucial to the criterion
• Data flow and control dependencies
o Accurate slices
Thin slicing
o Retain data flow dependency only
• Replace control dependencies with abstract predicates
o Spurious counterexamples trigger refinement of the slice
Value slicing
o Middle ground between backward and thin
• Retain variables determining the control criteria
10. 10
Size reduction techniques
Original:
  int u = 0;
  int t = 0;
  int x = 0;
  while (t < 1000) {
    int s = nondet();
    int y;
    if (s == 1) {
      y = x * 2;
    } else {
      y = x - 1;
    }
    assert(y != 0);
    x = x + y;
    t = t + 1;
    u = u + t;
  }
  printf("u=%d", u);

Backward:
  int u = 0;
  int t = 0;
  int x = 0;
  while (t < 1000) {
    int s = nondet();
    int y;
    if (s == 1) {
      y = x * 2;
    } else {
      y = x - 1;
    }
    assert(y != 0);
    x = x + y;
    t = t + 1;
    u = u + t;
  }
  printf("u=%d", u);

Thin:
  int u = 0;
  int t = 0;
  int x = 0;
  while (φ1) {
    int s = nondet();
    int y;
    if (φ2) {
      y = x * 2;
    } else {
      y = x - 1;
    }
    assert(y != 0);
    x = x + y;
    t = t + 1;
    u = u + t;
  }
  printf("u=%d", u);

Value:
  int u = 0;
  int t = 0;
  int x = 0;
  while (φ1) {
    int s = nondet();
    int y;
    if (s == 1) {
      y = x * 2;
    } else {
      y = x - 1;
    }
    assert(y != 0);
    x = x + y;
    t = t + 1;
    u = u + t;
  }
  printf("u=%d", u);
11. 11
Verification
CEGAR
o Counterexample-Guided Abstraction Refinement
o Configurable framework
[Diagram: CEGAR loop. The Abstractor takes the model, the property, and an initial precision, and either reports Ok or passes an abstract counterexample to the Refiner; the Refiner either returns a refined precision to the Abstractor or reports a concrete counterexample. Configuration points: abstract domain, exploration strategy, refinement strategy.]
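To make the loop concrete, here is a minimal C sketch of the CEGAR iteration; the types and functions are hypothetical stand-ins for the abstractor and refiner components, not the actual framework API.

/* Skeleton of the CEGAR loop: abstract model checking, counterexample
   concretization, and precision refinement. All types and functions are
   illustrative stubs, not part of any real verifier. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int unused; } Model, Property, Precision, AbstractCex;

/* Abstractor: model check the abstraction; on failure produce an abstract counterexample. */
static bool abstract_check(Model m, Property p, Precision prec, AbstractCex *cex) {
    (void)m; (void)p; (void)prec; (void)cex;
    return true;                       /* stub: pretend the property holds */
}
/* Refiner, step 1: is the abstract counterexample feasible in the concrete program? */
static bool concretize(AbstractCex cex) { (void)cex; return false; }
/* Refiner, step 2: refine the precision to exclude the spurious counterexample. */
static Precision refine(Precision prec, AbstractCex cex) { (void)cex; return prec; }

int main(void) {
    Model m = {0}; Property p = {0}; Precision prec = {0};
    for (;;) {
        AbstractCex cex;
        if (abstract_check(m, p, prec, &cex)) { puts("Ok"); break; }
        if (concretize(cex)) { puts("Counterexample"); break; }
        prec = refine(prec, cex);      /* spurious counterexample: refine and iterate */
    }
    return 0;
}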
13. 13
Objects
Models: SV-COMP examples
o Locks: locking mechanisms
• 100-150 LOC, many smaller slices
o ECA: event-driven systems
• 500-600 LOC, one slice
o SSH-simplified: server-client systems
• 500-600 LOC, one slice
Requirement: reachability of assertion violation
14. 14
Environment
Algorithms
o Slicing: None / Backward / Value / Thin
o Compiler optimizations: True / False
o Domain: Predicate abstraction
o Refinement: Sequence interpolation
o Exploration strategy: BFS / DFS
A configuration
o Slicing + optimizations + exploration strategy
o E.g. BTD = Backward, True, DFS
15. 15
Results
Initial CFA size with different slicing / optimization configurations [chart]:
o Optimizations do not give large reductions
o Backward slicing may yield large reductions
o Thin and value slicing allow even more reductions
16. 16
Results
Effect of slice refinement: initial and final CFA size [chart]:
o For some models no refinement is needed
o For others the final CFA size increases due to refinements
17. 17
Results
Verification time – locks (ms) [chart]:
o Easy with any kind of slicing
o Infeasible or hard without slicing
o BFS fails sometimes
18. 18
Results
Verification time – ECA/SSH (ms) [chart]:
o Diverse results: supports the need for a configurable framework
o Some models are verified by only a single configuration
19. 19
Results
Comparison of verification and optimization time [chart]:
o Optimization time is negligible for larger programs
o Backward slicing is quick; thin and value slicing require more time
21. 21
Conclusions
Workflow for software verification
o Enhanced by size reduction techniques
o Supported by a configurable verification framework
Experimental evaluation
o Different configurations are more suitable for different tasks
Future work
o Extend supported elements of C
o Interprocedural slicing
o LLVM support
[Diagram: workflow recap showing the Parsing, Size reduction, and Verification stages, illustrated with the earlier slicing example.]
hajdua@mit.bme.hu
inf.mit.bme.hu/en/members/hajdua