Database systems administration training 02 (Shahid Riaz)
This document covers database concepts including select operations, projection, arithmetic operators, operator precedence, null values, column aliases, and assignment questions involving queries. It discusses selecting specific and multiple columns, using arithmetic operators like addition and subtraction, enforcing operator precedence with parentheses, null values in expressions, and renaming columns with aliases. It provides examples of queries to display concatenated name and job, filter for a department number, and select name and salary by job title.
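The SQL features listed here can be tried directly. The sketch below uses Python's stdlib sqlite3 with a made-up emp table; the table, names, and values are illustrative, not from the original slides.

```python
import sqlite3

# Toy table standing in for the classic EMP example (illustrative data only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, job TEXT, sal INTEGER, comm INTEGER, deptno INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?, ?, ?)", [
    ("KING", "PRESIDENT", 5000, None, 10),
    ("ALLEN", "SALESMAN", 1600, 300, 30),
    ("SMITH", "CLERK", 800, None, 20),
])

# Column alias and concatenation: display name and job as one column,
# filtered for a department number.
rows = con.execute(
    "SELECT ename || ', ' || job AS employee FROM emp WHERE deptno = 30"
).fetchall()

# Operator precedence: parentheses force the addition before the multiplication.
prec = con.execute(
    "SELECT sal + 100 * 12, (sal + 100) * 12 FROM emp WHERE ename = 'KING'"
).fetchone()

# Null values in expressions: arithmetic with NULL yields NULL.
nulls = con.execute(
    "SELECT sal + comm FROM emp WHERE ename = 'SMITH'"
).fetchone()
```

Note how the two precedence expressions differ: 5000 + 100 * 12 multiplies first, while (5000 + 100) * 12 does not.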
A Benchmark for Interpretability Methods in Deep Neural Networks (Sungchul Kim)
This document proposes two interpretability evaluation frameworks: ROAR (RemOve And Retrain) and KAR (Keep And Retrain). ROAR evaluates feature importance by removing salient features from inputs based on a trained model's saliency map, regenerating training and test data, and retraining the model. KAR keeps salient features and retrains instead of removing. The document introduces common interpretability methods like gradients, integrated gradients, and ensembling methods. It then explains the ROAR and KAR mechanisms and provides example results, concluding the frameworks can validate feature importance estimates and model reliability.
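The removal/keep step at the heart of ROAR and KAR can be sketched in a few lines. This is a minimal illustration assuming per-feature saliency scores are already available; the function names and fill value are hypothetical, and the retraining step the frameworks require is omitted.

```python
def roar_mask(example, saliency, fraction, fill=0.0):
    """ROAR-style removal: replace the most salient `fraction` of features
    with an uninformative fill value before retraining."""
    k = int(len(example) * fraction)
    # Indices of the k most salient features.
    top = sorted(range(len(example)), key=lambda i: saliency[i], reverse=True)[:k]
    masked = list(example)
    for i in top:
        masked[i] = fill
    return masked

def kar_mask(example, saliency, fraction, fill=0.0):
    """KAR-style keeping: retain the top `fraction` of features and fill the rest."""
    k = int(len(example) * fraction)
    top = set(sorted(range(len(example)), key=lambda i: saliency[i], reverse=True)[:k])
    return [x if i in top else fill for i, x in enumerate(example)]
```

In the full frameworks, the masked training and test sets are regenerated and the model is retrained; the resulting accuracy drop (or retention) is what validates the importance estimates.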
Using Multiple Feature Models to specify configuration options for Electrical Transformers (Jaime Chavarriaga)
Jaime Chavarriaga, Carlos Rangel, Carlos Noguera, Rubby Casallas, Viviane Jonckers.
Using Multiple Feature Models to specify configuration options for Electrical Transformers: An Experience Report.
SPLC 2015, pp. 216-224.
http://doi.acm.org/10.1145/2791060.2791091
Testing of Cyber-Physical Systems: Diversity-driven Strategies (Lionel Briand)
Lionel Briand discusses strategies for testing cyber-physical systems using diversity-driven approaches. He outlines challenges in verifying controllers and decision-making components in cyber-physical systems due to large input spaces and expensive model execution. Briand proposes maximizing diversity of test cases to improve fault detection. He describes using diversity of input signals, output signals, and failure patterns to generate test cases. Search algorithms are used to find test cases that maximize diversity or reveal specific failure patterns. The strategies are shown to significantly outperform coverage-based and random testing on Simulink models.
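The diversity objective can be illustrated with a greedy max-min selection over candidate test cases. This is a simplified stand-in under assumed inputs, not the search algorithms Briand describes.

```python
def select_diverse(candidates, k, distance):
    """Greedily pick k test cases that maximize the minimum pairwise
    distance -- a toy version of a diversity-driven selection."""
    selected = [candidates[0]]  # seed with an arbitrary candidate
    while len(selected) < k:
        # Pick the candidate farthest from its nearest already-selected neighbour.
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: min(distance(c, s) for s in selected))
        selected.append(best)
    return selected
```

With signal-valued test inputs, `distance` would compare input or output signals; here any metric over candidates works.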
Intro to LV in 3 Hours for Control and Sim 8_5.pptx (DeepakJangid87)
This document provides an introduction to using LabVIEW for virtual instrumentation, control design, and simulation. It discusses using LabVIEW for applications in signal processing, embedded systems, control systems, and measurements. The topics covered include reviewing the LabVIEW environment, the design process of modeling, control design, simulation, optimization, and deployment. Simulation allows testing controllers and incorporating real-world nonlinearities. Constructing models graphically and textually is demonstrated. PID control and designing a PID controller with the Control Design Toolkit is also summarized. Exercises guide creating and displaying a transfer function model and constructing a PID controller.
Automated and Scalable Solutions for Software Testing: The Essential Role of ... (Lionel Briand)
1) Modeling plays an essential role in enabling automated and scalable software testing solutions across many industrial domains like automotive, aerospace, and healthcare.
2) Models of requirements, system architecture, and environment behavior can be used to guide test generation, derive oracles, and enable early system testing through simulation.
3) Effective test automation solutions combine models with techniques like optimization, constraint solving, and natural language processing to address challenges of scalability, oracle generation, and exploring large test input spaces.
The document discusses dimensionality reduction techniques. It begins by explaining the curse of dimensionality, where adding more features can hurt performance due to the exponential increase in the number of examples needed. It then introduces dimensionality reduction as a solution, where the data can be represented using fewer dimensions/features through feature selection, linear/non-linear transformations, or combinations. Principal component analysis (PCA) and singular value decomposition (SVD) are described as common linear dimensionality reduction methods. The document also discusses nonlinear techniques like kernel PCA and multi-dimensional scaling, as well as uses of dimensionality reduction like in image and natural language processing applications.
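The feature-selection route to dimensionality reduction mentioned above can be sketched in a few lines. This variance ranking is a deliberately simple stand-in; PCA/SVD would instead construct new linear combinations of the columns.

```python
def variance_select(rows, k):
    """Keep the k columns with the largest variance (simplest feature
    selection); rows is a list of equal-length numeric lists."""
    n = len(rows)
    cols = list(zip(*rows))  # transpose to per-column tuples

    def var(col):
        m = sum(col) / n
        return sum((x - m) ** 2 for x in col) / n

    keep = sorted(range(len(cols)), key=lambda j: var(cols[j]), reverse=True)[:k]
    keep.sort()  # preserve the original column order
    return [[row[j] for j in keep] for row in rows]
```

A constant column carries no information for discriminating examples, so it is the first to be dropped.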
This document summarizes a research project on real time pose control of a 6-RSS parallel robot. The project involved obtaining the exact pose of the end-effector through kinematics modeling of the parallel robot, dynamic modeling of the actuators, and designing a controller for real time pose control. Kinematics were modeled using both analytical inverse and forward kinematics methods. Actuator dynamics were modeled linearly and nonlinearly, and parameters were identified using genetic algorithms and multi-objective optimization. Real time pose control was tested in simulation using open-loop and closed-loop path tracking with a PID controller.
The document discusses the STL algorithms in C++. It begins by defining what algorithms and STL algorithms are. It then covers the different classes of STL algorithms including non-modifying sequence operations, mutating sequence operations, sorting operations, general C algorithms, and general numeric operations. Specific algorithms like for_each, transform, all_of, any_of and none_of are discussed in more detail through examples. The document aims to explain what STL algorithms are and how they can be used to operate on sequences and containers in C++.
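The document's examples are C++, but the named algorithms have direct Python counterparts. The rough correspondence below is an illustration with made-up data, not code from the document.

```python
nums = [1, 2, 3, 4]

squares = list(map(lambda x: x * x, nums))   # ~ std::transform
total = sum(squares)                         # ~ std::accumulate
every_pos = all(x > 0 for x in nums)         # ~ std::all_of
any_even = any(x % 2 == 0 for x in nums)     # ~ std::any_of
none_neg = not any(x < 0 for x in nums)      # ~ std::none_of
```

As in the STL, each operation is expressed over a whole sequence rather than as a hand-written loop.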
This document discusses software variability management. It begins by defining software variability as the ability of a software system or artifact to be efficiently extended, changed, customized or configured for a particular context. It then provides examples of variability in iOS and GCC versions. Next, it discusses challenges in managing copies of software with variability and advantages of software product lines. Key aspects of software product line engineering covered include commonalities and variabilities, product derivation, and an example of enterprise resource planning systems. The document concludes by summarizing feature modeling and variability realization techniques including feature models, binding times, and design patterns.
Hint-based configuration of co-simulations (mehmor)
Simulation-based analyses of cyber-physical systems are fundamental in industrial design and testing approaches. The utility of these analyses depends on the correct configuration of the simulation tools, which can be highly complicated. System engineers can normally judge the results and either evaluate multiple simulation algorithms or change the models; however, this is not possible in a co-simulation approach. Co-simulation is a technique for full-system simulation that combines multiple black-box simulators, each responsible for a part of the system. In this paper, we demonstrate the difficulty of correctly configuring a co-simulation scenario using an industrial case study. We propose an approach that tackles this challenge by allowing multiple engineers, specialized in different domains, to encode some of their experience in the form of hints. These hints, together with state-of-the-art best practices, are then used to semi-automatically guide the configuration process of the co-simulation. We report the application of this approach to a use case proposed by our industrial partners and discuss some of the lessons learned.
The document discusses query processing and optimization. It describes the basic concepts including query processing, query optimization, and the phases of query processing. It also explains relational algebra operations like selection, projection, joins, and additional operations. The document then covers topics like query decomposition, analysis, normalization, simplification, and restructuring during query optimization. It discusses cost estimation and algorithms for implementing relational algebra operations and file organization.
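The relational algebra operators named above can be sketched over lists of dicts. This is a toy illustration of selection and projection, not the document's implementation algorithms.

```python
def select_(rows, pred):
    """Relational selection (sigma): keep rows satisfying the predicate."""
    return [r for r in rows if pred(r)]

def project(rows, attrs):
    """Relational projection (pi): keep only the named attributes and
    drop duplicate result tuples, as the relational model requires."""
    seen, out = set(), []
    for r in rows:
        t = tuple(r[a] for a in attrs)
        if t not in seen:
            seen.add(t)
            out.append(dict(zip(attrs, t)))
    return out
```

A query optimizer's job is largely choosing efficient physical algorithms and orderings for compositions of operators like these.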
This document discusses using reduced 3D models in vibrational design processes. It presents tools for model reduction including variable separation, parametric models, and domain decomposition. These tools combine finite element modeling, experimental modal analysis, and reduced order models to efficiently simulate complex systems for design studies while controlling accuracy.
Predictable reactive state management - ngrx (Ilia Idakiev)
This document provides an overview and introduction to Predictable Reactive State Management using NGRX. It begins with an introduction to the speaker and then outlines the schedule which includes topics like functional programming, RxJS, Angular change detection, Redux, and NGRX. It then discusses how functional programming concepts like pure functions, immutable data, and declarative programming relate to Angular and libraries like RxJS and NGRX. Specific NGRX concepts like actions, reducers, and selectors are introduced. Examples are provided for building an NGRX application with a single reducer handling the state updates. Additional resources are listed at the end.
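The reducer idea at the core of Redux/NGRX is language-independent: a pure function from (state, action) to a new state. The sketch below is a toy Python rendering with hypothetical action names, not NGRX's actual API.

```python
def counter_reducer(state, action):
    """A pure reducer: computes a new state without mutating the old one."""
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "add":
        return {**state, "count": state["count"] + action["payload"]}
    return state  # unknown actions leave state untouched

def dispatch(state, actions, reducer):
    """Fold a stream of actions through the reducer, like a tiny store."""
    for a in actions:
        state = reducer(state, a)
    return state
```

Because the reducer never mutates its input, the previous state object survives unchanged, which is what makes change detection and time-travel debugging cheap.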
The document discusses design patterns and provides examples of common patterns like Iterator, Singleton, and Adapter. It describes design patterns as reusable solutions to common programming problems and explains how they help achieve goals like code reuse and facilitating software evolution. Key principles of design patterns discussed are programming to interfaces, composition over inheritance, and delegation. Common patterns are categorized into creational, structural and behavioral groups.
Augmenting Machine Learning with Databricks Labs AutoML Toolkit (Databricks)
Instead of better understanding and optimizing their machine learning models, data scientists spend a majority of their time training and iterating through different models, even in cases where the data is reliable and clean. Important aspects of creating an ML model include (but are not limited to) data preparation, feature engineering, identifying the correct models, training (and continuing to train), and optimizing those models. This process can be (and often is) laborious and time-consuming. In this session, we will explore this process and then show how the AutoML Toolkit (from Databricks Labs) can significantly simplify and optimize machine learning. We will demonstrate all of this on financial loan risk data, with code snippets and notebooks that will be free to download.
The document provides an overview of the OPTIMICA Compiler Toolkit. It discusses the toolkit's capabilities for transient and steady-state time domain simulation and optimization. It also outlines Modelon's model-based development workflow using model libraries, authoring, compilers, solvers, and other technologies. Key features of the toolkit include Modelica and FMI-based computation, dynamic and steady-state simulation, optimization capabilities, and scripting APIs.
This document discusses design patterns and principles. It begins by defining design patterns as repeatable solutions to common design problems. It then covers several design patterns including Singleton, Strategy, Adapter, Template, Factory, Abstract Factory, and Observer patterns. It also discusses low-level principles like Tell Don't Ask and high-level principles like the Single Responsibility Principle. Finally, it provides examples of how to implement some of the patterns and principles in code.
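The Strategy pattern and the composition-over-inheritance principle can be shown in a few lines. The Order/pricing example below is hypothetical, not from the document.

```python
class Order:
    """Strategy pattern: the pricing rule is injected (composition over
    inheritance), so behaviour can change without subclassing Order."""
    def __init__(self, amount, pricing):
        self.amount = amount
        self.pricing = pricing  # a callable strategy

    def total(self):
        # Tell Don't Ask: callers request the total; they never pull out
        # the amount and apply the rule themselves.
        return self.pricing(self.amount)

no_discount = lambda amt: amt
half_off = lambda amt: amt * 0.5
```

Swapping `no_discount` for `half_off` changes the behaviour of an `Order` at construction time, with no new subclasses.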
Apache Calcite: One Frontend to Rule Them All (Michael Mior)
Apache Calcite is an open source framework that allows for a unified query interface over heterogeneous data sources. It provides an ANSI-compliant SQL parser, a logical query optimizer, and acts as a middleware layer that can integrate data from multiple sources. Calcite uses a relational algebra approach and has pluggable adapters that allow it to connect to different backends like MySQL, MongoDB, and streaming data sources. It supports features like SQL queries, views, optimization rules, and works across both batch and streaming data. The project aims to continue adding new capabilities like geospatial queries and improved cost modeling.
This document provides a walkthrough of using machine learning to build a model for predicting customer choice and churn. It discusses the key steps in the machine learning process, including problem formulation, feature engineering, model learning, label preparation, and model deployment. For the case study of building a B2C prediction model, it describes collecting demographic and behavioral features from customer data and using gradient boosting machines for model learning. The goal is to build a model that can accurately predict individual customer choices, such as renewals, over 30- and 120-day windows.
Nafems15 Technical meeting on system modeling (SDTools)
This presentation illustrates the main mechanisms of model reduction used in generating efficient system models that can be used in vibration design. Examples from automotive, aeronautics and train industries are used as illustrations.
The document discusses various techniques for decomposing systems, including:
1. Decomposing algorithms and software systems into smaller subroutines and modules to simplify logic and improve structure. This includes techniques like structured analysis.
2. Decomposing a system vertically by concerns or functionally to create smaller and more focused services and classes.
3. Considering factors like communication style, data persistence, and deployment scenarios when decomposing a monolith application into microservices. Principles like the "Scale Cube" can guide this.
4. Tips for a gradual and careful decomposition include starting with loosely coupled components, focusing on single functions, automating processes, and cross-training developers. Rushing or choosing
IPL: An Integration Property Language for Multi-Model Cyber-Physical Systems (Ivan Ruchkin)
Our talk from the 22nd International Symposium on Formal Methods. Full paper: http://www.cs.cmu.edu/~iruchkin/docs/ruchkin18-ipl.pdf
Abstract: "Design and verification of modern systems requires diverse models, which often come from a variety of disciplines, and it is challenging to manage their heterogeneity -- especially in the case of cyber-physical systems. To check consistency between models, recent approaches map these models to flexible static abstractions, such as architectural views. This model integration approach, however, comes at a cost of reduced expressiveness because complex behaviors of the models are abstracted away. As a result, it may be impossible to automatically verify important behavioral properties across multiple models, leaving systems vulnerable to subtle bugs. This paper introduces the Integration Property Language (IPL) that improves integration expressiveness using modular verification of properties that depend on detailed behavioral semantics while retaining the ability for static system-wide reasoning. We prove that the verification algorithm is sound and analyze its termination conditions. Furthermore, we perform a case study on a mobile robot to demonstrate IPL is practically useful and evaluate its performance."
The document discusses design patterns, including their basic structure and purpose. A design pattern is a generalized solution to a commonly occurring problem in software design that optimizes certain quality of service aspects. The basic structure of a design pattern includes its name, purpose, solution, and consequences. When facing design problems, developers can use pattern hatching to locate relevant patterns by identifying design criteria and alternatives. Developers can also create their own patterns through pattern mining. Patterns are applied to software through pattern instantiation. An example debouncing pattern addresses intermittent contact issues in mechanical devices.
Argumentation in Artificial Intelligence: From Theory to Practice (Practice) (Mauro Vallati)
Part on Practice of the IJCAI 2017 Tutorial titled "Argumentation in Artificial Intelligence: From Theory to Practice", from Federico Cerutti and Mauro Vallati
The Current State of the Art of Regression Testing (John Reese)
The document surveys 159 papers on test suite minimization, regression test selection, and test case prioritization techniques. It finds that the majority of studies used small codebases with under 10,000 lines of code and fewer than 1,000 test cases. Graph-walking is identified as the most predominant regression test selection technique. Prioritization approaches focus on coverage-based and history-based methods. Future work opportunities include integrating regression testing with test data generation, considering other domains beyond white-box testing, and providing more tool support.
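The coverage-based prioritization family mentioned above includes the greedy "additional coverage" heuristic, sketched below with hypothetical test names. It is an illustration of the idea, not a technique attributed to the surveyed papers.

```python
def prioritize(tests):
    """Greedy additional-coverage prioritization: repeatedly schedule the
    test that covers the most still-uncovered elements (ties: first wins).
    `tests` maps a test name to the set of code elements it covers."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        name = max(remaining, key=lambda n: len(remaining[n] - covered))
        order.append(name)
        covered |= remaining.pop(name)
    return order
```

Running the highest-yield tests first tends to expose faults earlier when the test budget is cut short.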
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdf (Baha Majid)
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
The document discusses design patterns, including their basic structure and purpose. A design pattern is a generalized solution to a commonly occurring problem in software design that optimizes certain quality of service aspects. The basic structure of a design pattern includes its name, purpose, solution, and consequences. When facing design problems, developers can use pattern hatching to locate relevant patterns by identifying design criteria and alternatives. Developers can also create their own patterns through pattern mining. Patterns are applied to software through pattern instantiation. An example debouncing pattern addresses intermittent contact issues in mechanical devices.
The document discusses design patterns, including their basic structure and purpose. A design pattern is a generalized solution to a commonly occurring problem in software design that optimizes certain quality of service aspects. The basic structure of a design pattern includes its name, purpose, solution, and consequences. When facing design problems, developers can use pattern hatching to locate relevant patterns by identifying design criteria and alternatives. Developers can also create their own patterns through pattern mining. Patterns are applied to software through pattern instantiation. An example debouncing pattern addresses intermittent contact issues in mechanical devices.
Argumentation in Artificial Intelligence: From Theory to Practice (Practice)Mauro Vallati
Part on Practice of the IJCAI 2017 Tutorial titled "Argumentation in Artificial Intelligence: From Theory to Practice", from Federico Cerutti and Mauro Vallati
The Current State of the Art of Regression TestingJohn Reese
The document surveys 159 papers on test suite minimization, regression test selection, and test case prioritization techniques. It finds that the majority of studies used small codebases with under 10,000 lines of code and fewer than 1,000 test cases. Graph-walking is identified as the most predominant regression test selection technique. Prioritization approaches focus on coverage-based and history-based methods. Future work opportunities include integrating regression testing with test data generation, considering other domains beyond white-box testing, and providing more tool support.
Similar to Implementing Operations to combine Feature Models: The Partial lntersection Case (20)
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdfBaha Majid
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
The Rising Future of CPaaS in the Middle East 2024Yara Milbes
Explore "The Rising Future of CPaaS in the Middle East in 2024" with this comprehensive PPT presentation. Discover how Communication Platforms as a Service (CPaaS) is transforming communication across various sectors in the Middle East.
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
The Key to Digital Success_ A Comprehensive Guide to Continuous Testing Integ...kalichargn70th171
In today's business landscape, digital integration is ubiquitous, demanding swift innovation as a necessity rather than a luxury. In a fiercely competitive market with heightened customer expectations, the timely launch of flawless digital products is crucial for both acquisition and retention—any delay risks ceding market share to competitors.
Enhanced Screen Flows UI/UX using SLDS with Tom KittPeter Caitens
Join us for an engaging session led by Flow Champion, Tom Kitt. This session will dive into a technique of enhancing the user interfaces and user experiences within Screen Flows using the Salesforce Lightning Design System (SLDS). This technique uses Native functionality, with No Apex Code, No Custom Components and No Managed Packages required.
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
3. Context
Configuration Systems …
3
• Software where a user
can select the options
and features for the
intended product
• It can be generated from
Feature Models
and UI specifications
4. Configuration Systems
for complex products…
• A lot of features and
constraints
• Multiple domains and
interactions
• Multiple standards and
regulations
• Multiple product lines
(sharing domains and
standards)
4
6. for complex products
with multiple standards / regulations
Multiple standards
and norms must be
supported
… just for Colombia,
there many national
and proprietary
standards for each
product family.
7. One Configuration System
for Multiple Countries/Standards
Customer
Customer
Requests
Sales Engr
Bid Engr
Bids
Proposals
I want an electrical
transformer with
Power of 15KVA
a Low Voltage of 214V
and a High Voltage of 4160V
To be installed in
Buenos Aires
Ummm !!
Which
configurator
should I use?
9. Feature Models
A compact representation of
configuration options and constraints
r
a b
f2f1 f3 f4 f5
Or
Group
Alternative
Group
Optional
Feature
Mandatory
Feature
15. Operations of Feature Models
• Schobbens et al.
• Intersection
• (Strict) union
• Reduced product
• Acher et al.
• Insert
• Aggregate
• Union merge
• Strict Union merge
• Intersection merge
• Diff merge
• Slice
15
Acher et al.
Schobbens et al.
16. Intersection Merge
If we applied to standards,
the standard becomes mandatory
16
Transformer
X
Power
Y Z
Transformer
(Standard X)
W
Power
X Y
∩ =
Transformer
Power
X Y
18. Conditional Intersection Merge
the standard remains optional
18
Transformer
X
Power
Y Z
Standard X
W
Power
X Y
∩ 𝑐 =
Transformer
PowerStandard X
X Y Z
𝑆𝑡𝑎𝑛𝑑𝑎𝑟𝑑𝑋 ⇒ (𝑋 ∨ 𝑌)
20. Conditional Intersection Merge
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))
20
Include a feature for the standard as
mandatory
Add the configurations with the standards to the existing
configurations in the domain (without the standard)
Enforces the constraints
of the standard
22. Concerns
Note that models are created
“by hand” by groups of company engineers
22
• Hierarchy of the Resulting Model
• Try to keep the structure of the
model for the product line
• Models will be easier to review by
the modelers
• We will gain confidence on the
operations
• Scalability/Performance
26. ❶ Composing Semantic-based
operations
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))
26
Hierarchy
• The hierarchy is
recalculated two times
• It may result very different
than the inputs
Performance
• Other approaches requires
only one (or none)
structure recalculation
27. ❷ Semantic-based approach
27
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))
𝝓 𝒓 = 𝝓 𝟏 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟐 ∖ 𝓕 𝒇𝒎𝟏 ∧ 𝒇 𝒔 ⇒ 𝝓 𝟐 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟏 ∖ 𝓕 𝒇𝒎𝟐
Calculate a CNF formula
for the result
Derive a new
structure based on
the resulting formula
28. ❷ Semantic-based approach
28
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))
𝝓 𝒓 = 𝝓 𝟏 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟐 ∖ 𝓕 𝒇𝒎𝟏 ∧ 𝒇 𝒔 ⇒ 𝝓 𝟐 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟏 ∖ 𝓕 𝒇𝒎𝟐
Calculate a CNF formula
for the result
Derive a new
structure based on
the resulting formula
Hierarchy
• The hierarchy is
recalculated one time
• It may result very different
than the inputs
Performance
• It is faster than using
individual operations
FAMILIAR does not support
directly these operations
29. ❸ Reference-based approach
29
Define a view based
on one of the feature
models
Include all the
operand models
Introduce constraints
that relate features
in the view with
those in the operands
30. ❸ Reference-based approach
30
Define a view based
on one of the feature
models
Include all the
operand models
Introduce constraints
that relate features
in the view with
those in the operands
Hierarchy
• The resulting hierarchy is
different than the inputs
• The features are “repeated”
and may confuse modelers
Performance
• It does not recalculate the
hierarchy
• It is very fast
31. ❹ Hybrid Approach
31
Create a reference
Based solution
Slice the model
- Formula calculation
- Structure derivation
32. ❹ Hybrid Approach
32
Create a reference
Based solution
Slice the model
- Formula calculation
- Structure derivation
Hierarchy
• The hierarchy is
recalculated one time
• It may result very different
than the inputs
Performance
• It uses the slice operation, a
costly operation (existential
quantification and
hierarchy recalculation)
33. ❺ “Adding constraints” approach
33
• Add a feature for the Standard
• Introduce new constraints
𝝓 𝒓 = 𝝓 𝟏 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟐 ∖ 𝓕 𝒇𝒎𝟏 ∧ 𝒇 𝒔 ⇒ 𝝓 𝟐 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟏 ∖ 𝓕 𝒇𝒎𝟐
First Feature Model
Constraints to Add
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))
34. ❺ “Adding constraints” approach
34
• Add a feature for the Standard
• Introduce new constraints
𝝓 𝒓 = 𝝓 𝟏 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟐 ∖ 𝓕 𝒇𝒎𝟏 ∧ 𝒇 𝒔 ⇒ 𝝓 𝟐 ∧ 𝒏𝒐𝒕 𝓕 𝒇𝒎𝟏 ∖ 𝓕 𝒇𝒎𝟐
First Feature Model
Constraints to Add
𝒇𝒎 𝒓 = 𝒇𝒎 𝒅 ∪ ( 𝒇 𝒔 ⊗ ( 𝒇𝒎 𝒅 ∩ 𝒇𝒎 𝒔 ))Hierarchy
• The hierarchy is the same
of the first input model
• The operation is
“asymmetric”
Performance
• It does not recalculate the
hierarchy
36. Conclusions
• A new operation: Conditional Intersection
• Different approaches to implement it
• Composing semantic operations
• Semantic based implementations
• Reference-based implementations
• Hybrid implementation
• “Adding constraints” implementation
• “Adding constraints” has some advantages:
• Uses the hierarchy of one of the models (easier to
understand)
• Does not require the recalculation of the feature
model hierarchy
36
Editor's Notes
Hello
My name is Jaime Chavarriaga.
I will present here part of our work implementing automated operations to combine feature models.
In concrete, I will present different alternatives to implement a new operation we named “conditional intersection merge”
There are complex products such as cars, planes and electrical devices that must comply with different standards and regulations around the world.
Consider for example, your own computers.
The plug you use to connect your computer and obtain electricity may be different from one country to the other. It may have a different form, a different voltage, and a different current.
complex products such as the Cars I mentioned before.
On one hand, these products has a lot of features and constraints. Therefore, the models are hard to build.
On the other hand, these products involve multiple domains and interactions among the models, and multiple standards and regulations. The models may be very large and, again, hard to build and review.
In addition, companies manufacturing these products want to reuse these models among multiple product families.
For instance, use the same standard in multiple families of products.
This is not easy either.
There are complex products such as cars, planes and electrical devices that must comply with different standards and regulations around the world.
Consider for example, your own computers.
The plug you use to connect your computer and obtain electricity may be different from one country to the other. It may have a different form, a different voltage, and a different current.
complex products such as the Cars I mentioned before.
On one hand, these products has a lot of features and constraints. Therefore, the models are hard to build.
On the other hand, these products involve multiple domains and interactions among the models, and multiple standards and regulations. The models may be very large and, again, hard to build and review.
In addition, companies manufacturing these products want to reuse these models among multiple product families.
For instance, use the same standard in multiple families of products.
This is not easy either.
There are complex products such as cars, planes and electrical devices that must comply with different standards and regulations around the world.
Consider for example, your own computers.
The plug you use to connect your computer and obtain electricity may be different from one country to the other. It may have a different form, a different voltage, and a different current.
complex products such as the Cars I mentioned before.
On one hand, these products has a lot of features and constraints. Therefore, the models are hard to build.
On the other hand, these products involve multiple domains and interactions among the models, and multiple standards and regulations. The models may be very large and, again, hard to build and review.
In addition, companies manufacturing these products want to reuse these models among multiple product families.
For instance, use the same standard in multiple families of products.
This is not easy either.
You may have seen a transformer.
Although it looks like simple boxes, transformers are complex products designed and manufactured by multiple engineers and experts.
They must take the power from a circuit, change some of their properties and put it on another circuit.
They can be used to put energy on the distribution network or to get that energy and put it in houses or buildings.
Transformers must consider the specifications of the network where they will be installed.
Properties such as the voltage or the current may vary from one network to the other.
You may find multiple public and private distribution networks in any country.
Nowadays, each distribution network defines which standards and regulations the transformers must comply.
In Colombia, public networks usually follows ICONTEC standards.
USA networks follows IEEE and ANSI standards, but each state may have different regulations.
Europe follows IEC and European Union standards.
In addition, private networks may define their own regulations.
As an example, only in Colombia there are many standards that may apply to medium-size transformers.
There are some specifications that apply if you want to install the transformer in the electrical network of Codensa in a city like Bogotá, or in the private network of the petroleum facilities of Ecopetrol across all the country.
The specifications vary.
There are many other standards defined by ICONTEC.
Consider a company like a Siemens that sells transformers in many countries and supporting a large number of standards.
If you create a configuration system for each standard you will end with hundreds of systems
To support selling of Electrical Transformers we are using Feature-based configurations.
That is, we are creating Configuration Systems based on Feature models.
We use Feature Models to represent the configuration options of the transformers and the constraints in the standards.
In addition, we use automated operations to reason on these models.
Transformers must consider the specifications of the network where they will be installed.
Properties such as the voltage or the current may vary from one network to the other.
You may find multiple public and private distribution networks in any country.
Nowadays, each distribution network defines which standards and regulations the transformers must comply.
In Colombia, public networks usually follows ICONTEC standards.
USA networks follows IEEE and ANSI standards, but each state may have different regulations.
Europe follows IEC and European Union standards.
In addition, private networks may define their own regulations.
As an example, only in Colombia there are many standards that may apply to medium-size transformers.
There are some specifications that apply if you want to install the transformer in the electrical network of Codensa in a city like Bogotá, or in the private network of the petroleum facilities of Ecopetrol across all the country.
The specifications vary.
There are many other standards defined by ICONTEC.
Transformers must consider the specifications of the network where they will be installed.
Properties such as the voltage or the current may vary from one network to the other.
You may find multiple public and private distribution networks in any country.
Nowadays, each distribution network defines which standards and regulations the transformers must comply.
In Colombia, public networks usually follows ICONTEC standards.
USA networks follows IEEE and ANSI standards, but each state may have different regulations.
Europe follows IEC and European Union standards.
In addition, private networks may define their own regulations.
As an example, only in Colombia there are many standards that may apply to medium-size transformers.
There are some specifications that apply if you want to install the transformer in the electrical network of Codensa in a city like Bogotá, or in the private network of the petroleum facilities of Ecopetrol across all the country.
The specifications vary.
There are many other standards defined by ICONTEC.
This is an example of an intersection.
We have two feature models: one representing a set of products that a company may produce and another representing an standard. Which features are mandatory which other are prohibited
Note that there are options that can be built but are not part of the standard.
And options in the standard that cannot be built in the factory
The intersection represents the products that can be built that are compliant to the standard.
Here , we have a problem.
The standard becomes mandatory.
In the final model you cannot configure a product that is not compliant to the standard.
Why this is a problem?
This is an example of an intersection.
We have two feature models: one representing a set of products that a company may produce and another representing an standard. Which features are mandatory which other are prohibited
Note that there are options that can be built but are not part of the standard.
And options in the standard that cannot be built in the factory
The intersection represents the products that can be built that are compliant to the standard.
Here , we have a problem.
The standard becomes mandatory.
In the final model you cannot configure a product that is not compliant to the standard.
Why this is a problem?
There are many operations on feature model that allow us to combine and merge feature models.
For instance, Schobbens and Acher have proposed operations for the union and the intersection.
This is an example of an intersection.
We have two feature models: one representing a set of products that a company may produce and another representing an standard. Which features are mandatory which other are prohibited
Note that there are options that can be built but are not part of the standard.
And options in the standard that cannot be built in the factory
The intersection represents the products that can be built that are compliant to the standard.
Here , we have a problem.
The standard becomes mandatory.
In the final model you cannot configure a product that is not compliant to the standard.
Why this is a problem?
To support selling of Electrical Transformers we are using Feature-based configurations.
That is, we are creating Configuration Systems based on Feature models.
We use Feature Models to represent the configuration options of the transformers and the constraints in the standards.
In addition, we use automated operations to reason on these models.
Mathieu Acher defined different approaches to implement operations on feature models.
There are semantic-based approaches, reference-based approaches and some hybrid approaches
We have defined other approaches to implement our operation: