Companion presentation to a similar paper at SAICSIT 2015 (Southern African Institute for Computer Scientists and Information Technologists Annual Conference 2015).

Unit 3

The document provides information about Unit III of the syllabus, which covers dynamic and implementation UML diagrams. It discusses different types of dynamic diagrams like interaction diagrams (sequence diagram, collaboration diagram), state machine diagrams, and activity diagrams. It also discusses implementation diagrams like package diagrams, component diagrams, and deployment diagrams. Chapters from the third edition textbook related to these diagrams are listed. The document then provides more details on types of UML diagrams including structural diagrams and behavioral diagrams. It focuses on interaction diagrams, describing sequence diagrams and communication diagrams in detail. Examples and notation for drawing interaction diagrams are also explained.

A Novel Cosine Approximation for High-Speed Evaluation of DCT

This article presents a novel cosine approximation for high-speed evaluation of the DCT (Discrete Cosine Transform) using Ramanujan ordered numbers. The proposed method uses Ramanujan ordered numbers to convert the angles of the cosine function to integers. These angles are then evaluated with a fourth-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation increases the overhead in the number of adders by 13.6%. The algorithm avoids floating-point multipliers and requires (N/2) log2 N shifts and (3N/2) log2 N - N + 1 addition operations to evaluate the N-point DCT coefficients, thereby improving the speed of computation of the coefficients.
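The operation counts quoted in the summary can be tabulated directly. The sketch below (plain Python, assuming N is a power of two) simply evaluates the two formulas, (N/2) log2 N shifts and (3N/2) log2 N - N + 1 additions:

```python
import math

def dct_op_counts(N):
    """Operation counts quoted for the Ramanujan-number-based N-point DCT:
    (N/2) * log2(N) shifts and (3N/2) * log2(N) - N + 1 additions."""
    log2N = math.log2(N)
    shifts = (N / 2) * log2N
    adds = (3 * N / 2) * log2N - N + 1
    return int(shifts), int(adds)

for N in (8, 16, 32):
    s, a = dct_op_counts(N)
    print(f"N={N:3d}: {s} shifts, {a} additions")
```

For N = 8 this gives 12 shifts and 29 additions, against the N^2 multiplications of a naive DCT.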

An Interactive Decomposition Algorithm for Two-Level Large Scale Linear Multi...

This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solve two-level large-scale linear multiobjective optimization problems with stochastic parameters in the right-hand side of the constraints, (TL-LSLMOP-SP)rhs, of block angular structure. To obtain a compromise (satisfactory) solution to the (TL-LSLMOP-SP)rhs of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. At each level, the dp-metric is used as the measure of "closeness", and the k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. Membership functions from fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by applying the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, in which the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.

Burr Type III Software Reliability Growth Model

This document presents the Burr Type III software reliability growth model based on non-homogeneous Poisson process (NHPP) using time domain data. The maximum likelihood estimation method is used to estimate unknown parameters in the model from ungrouped failure data. Goodness of fit is also calculated to assess how well the mathematical model fits the data. Parameter estimation is performed on real software failure data sets, and analysis of reliability is presented for the given data sets.
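A minimal sketch of the standard NHPP construction the summary refers to: assuming the Burr Type III CDF F(t) = (1 + t^-c)^-k, the mean value function is m(t) = a * F(t) and software reliability over an interval follows from the NHPP. The parameter values below are hypothetical, not estimates from the paper's data sets:

```python
import math

def mean_value(t, a, c, k):
    """NHPP mean value function m(t) = a * F(t), with F the Burr Type III CDF
    F(t) = (1 + t^(-c))^(-k); a = expected total number of faults."""
    return a * (1.0 + t ** (-c)) ** (-k)

def reliability(x, t, a, c, k):
    """Probability of no failure in (t, t+x] under the NHPP:
    R(x | t) = exp(-(m(t + x) - m(t)))."""
    return math.exp(-(mean_value(t + x, a, c, k) - mean_value(t, a, c, k)))

# hypothetical parameters for illustration only
a, c, k = 100.0, 1.2, 0.8
print(round(mean_value(10.0, a, c, k), 2))
print(round(reliability(1.0, 10.0, a, c, k), 3))
```

In the paper, a, c, and k are obtained by maximum likelihood from the ungrouped failure times rather than fixed by hand as here.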

Interflam 2016 - Design goals - FR demands for tall residential buildings

This document proposes a method for quantifying fire resistance requirements for tall residential buildings based on risk. It defines risk as the product of fire frequency, probability of fire causing structural failure, and consequences of failure for occupants and emergency responders. An equation is presented that calculates the required reliability of the fire resistance system based on apartment number and building height to achieve consistent risk levels. Four demonstration cases applying the equation to buildings of the same height but different apartment layouts show varying required fire resistance durations and reliabilities. The method aims to more rationally assess risk than prescriptive approaches alone.
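The risk product defined above can be sketched as follows; the function names and all numbers are hypothetical, intended only to show the shape of the calculation (risk as frequency x failure probability x consequence, inverted to get a required reliability):

```python
def annual_risk(fire_freq_per_apartment, n_apartments, p_failure_given_fire,
                consequence):
    """Risk as the product defined in the summary: fire frequency scales with
    the number of apartments; `consequence` is a hypothetical severity weight."""
    return fire_freq_per_apartment * n_apartments * p_failure_given_fire * consequence

def required_reliability(target_risk, fire_freq_per_apartment, n_apartments,
                         consequence):
    """Invert the product for the failure probability the fire-resistance
    system must not exceed, expressed as a reliability."""
    p_fail = target_risk / (fire_freq_per_apartment * n_apartments * consequence)
    return 1.0 - min(p_fail, 1.0)

# hypothetical numbers: 120 apartments, 3e-3 fires/apartment/year
r = annual_risk(3e-3, 120, 1e-2, 1.0)
print(r)  # 0.0036
print(required_reliability(1e-4, 3e-3, 120, 1.0))
```

Note how, with building height and apartment count driving the frequency and consequence terms, a fixed risk target yields different required reliabilities per building, which is the point of the four demonstration cases.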

Real interpolation method for transfer function approximation of distributed ...

A distributed parameter system (DPS) is one of the most complex systems in control theory. The transfer function of a DPS may contain rational, nonlinear, and irrational components, which makes studying it difficult in both the time domain and the frequency domain. In this paper, a systematic approach is proposed for linearizing a DPS. The approach is based on the real interpolation method (RIM) and approximates the transfer function of a DPS by a rational-order transfer function. The results of the numerical examples show that the method is simple, computationally efficient, and flexible.

352735327 rsh-qam11-tif-04-doc

This document contains 71 multiple choice questions about regression analysis from the textbook "Quantitative Analysis for Management, 11e". The questions cover topics such as simple and multiple linear regression, assumptions of regression models, measuring model fit, and testing models for significance. Correct answers are provided along with a difficulty rating and topic for each question.

Interior Dual Optimization Software Engineering with Applications in BCS Elec...

This document summarizes an article that describes software engineering for interior dual optimization methods and their applications in electronics superconductivity. The key points are:
1) The software implements interior and dual interior optimization algorithms for solving nonlinear systems of equations, with a focus on programming methods for computational 3D visualization.
2) Applications include optimizing BCS equations for critical temperature prediction in type 1 superconductors platinum and tin, based on atomic mass.
3) Computational results using the software show acceptable errors in optimizing the BCS equations for platinum and tin. Graphical optimizations aid in visualizing optimal parameter values.

Assessing Error Bound For Dominant Point Detection

This document compares the error bounds of two classes of dominant point detection methods: 1) methods based on reducing a distance metric like maximum deviation or integral square error, and 2) methods based on digital straight segments. For distance-based methods, the error bound is determined by the maximum deviation of pixels from the line segments between dominant points. For digital straight segment methods, the error bound depends on control parameters that define blurred or approximate digital straight segments. The document analyzes specific methods in each class and plots the theoretical error bounds to facilitate understanding and parameter selection for dominant point detection methods.
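For the distance-based class, the bound is driven by the maximum deviation of contour pixels from the chords between dominant points, which can be computed as below. This is a generic sketch; the contour and dominant-point indices are illustrative, not taken from the paper:

```python
import math

def point_segment_distance(p, a, b):
    """Clamped perpendicular distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def max_deviation(contour, dominant_idx):
    """Maximum deviation of contour pixels from the polygon formed by the
    dominant points -- the quantity the distance-based error bounds refer to."""
    worst = 0.0
    for i, j in zip(dominant_idx, dominant_idx[1:]):
        for p in contour[i:j + 1]:
            worst = max(worst, point_segment_distance(p, contour[i], contour[j]))
    return worst

contour = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(max_deviation(contour, [0, 2, 4]))  # 1.0
```

A distance-based method accepts a dominant-point set only when this value stays below its threshold, so the threshold itself is the error bound for that class.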

Solving Method of H-Infinity Model Matching Based on the Theory of the Model ...

High-order H-infinity model matching problems are usually solved via H-infinity control theory, which is difficult. In this paper, model reduction theory is used instead to solve the high-order H-infinity model matching problem, and a new solution method based on model reduction is proposed. The simulation results show that the method is broadly applicable and achieves the expected performance.

Master thesis Francesco Serafin

Mathematical models play a fundamental role in many scientific and engineering fields in today's world. They are used, for example, in geotechnics to evaluate hillslope stability, in weather science to predict weather trends and produce weather reports, in structural design to study resistance to stress, and in fluid dynamics to compute fluid and air flows.
Consequently, mathematical models are evolving all the time: more and more new numerical methods are being invented to solve the Partial Differential Equations (PDEs) that describe physical problems with increasing precision, and ever more complex and efficient processor units are being created to reduce the computational time.
Therefore, the code into which the mathematical models are translated has to be "dynamic" in order to be easily updated on the basis of these continuous developments (Formetta et al. (2014) [16]).
On the other hand, completely different physical problems are often described using similar PDEs. For this reason, the numerical methods which provide solutions to different problems can be the same. This suggests the implementation of an IT infrastructure that hosts a standard structure for solving PDEs and that can serve various disciplines with a minimum of hassle.
This work focuses on the application of what is envisioned above, with the main purpose of creating an abstract code for implementing every type of mathematical model described by PDEs.
We work on hydrological topics but we hope to design a structure of general interest. Obviously the final goal of any work of this type is to find a proper numerical solver, and therefore part of the thesis is devoted to the analysis of the problem under scrutiny and the description of the solution found.

Simulation of nonlinear simulated moving bed chromatography using ChromWorks...

Simulation of nonlinear simulated moving bed chromatography using ChromWorks computational software.

2007 santiago marchi_cobem_2007

1) The document analyzes optimum parameters for a geometric multigrid method for solving a two-dimensional thermoelasticity problem and Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, single-grid methods, and other literature to determine if coupling equations impacts multigrid performance.
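A minimal 1-D geometric multigrid V-cycle makes the studied parameters concrete: `levels` is the number of grids and `nu` the number of inner smoothing iterations. This is a generic teaching sketch for the Laplace-type model problem, not the paper's 2-D thermoelasticity solver:

```python
import numpy as np

def smooth(u, f, h, iters):
    """Weighted-Jacobi (omega = 2/3) smoothing sweeps for -u'' = f."""
    u = u.copy()
    for _ in range(iters):
        u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h, levels, nu):
    """One V-cycle; `levels` (number of grids) and `nu` (inner iterations)
    are the kind of parameters whose tuning the paper studies."""
    u = smooth(u, f, h, nu)                        # pre-smoothing
    if levels == 1 or len(u) <= 3:
        return smooth(u, f, h, 200)                # coarsest grid: smooth hard
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                             # restriction by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, levels - 1, nu)
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return smooth(u + e, f, h, nu)                 # post-smoothing

# demo: -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x)
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(25):
    u = v_cycle(u, f, h, levels=4, nu=3)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err)
```

Varying `levels` and `nu` in such a loop while timing it is exactly the kind of parameter sweep the study performs, albeit on a far simpler problem here.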

Multi criteria decision making

This document discusses various multi-criteria decision making (MCDM) methods. It describes the objectives and steps in MCDM methodology. Three MCDM methods are explained in detail: Compromise Programming (CP), the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE), and the Weighted Average Method. An example is provided to illustrate the application of each method.
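The simplest of the three, the Weighted Average Method, can be sketched as follows, assuming min-max normalization per criterion (one common choice); the decision matrix, weights, and criterion names are hypothetical:

```python
import numpy as np

def weighted_average_ranking(scores, weights, benefit):
    """Weighted Average Method: normalize each criterion column to [0, 1],
    then rank alternatives by the weighted sum of normalized scores."""
    s = np.asarray(scores, dtype=float)
    norm = np.empty_like(s)
    for j in range(s.shape[1]):
        col = s[:, j]
        if benefit[j]:                        # benefit criterion: higher is better
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:                                 # cost criterion: lower is better
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    totals = norm @ np.asarray(weights, dtype=float)
    return np.argsort(-totals), totals        # best alternative first

# hypothetical alternatives x criteria matrix: cost, quality, delivery time
scores = [[120, 7, 4],
          [ 90, 6, 6],
          [150, 9, 3]]
order, totals = weighted_average_ranking(scores, [0.4, 0.4, 0.2],
                                         [False, True, False])
print(order)
```

CP and PROMETHEE differ in how they aggregate (distance to an ideal point, and pairwise outranking flows, respectively), but they consume the same kind of weighted criteria matrix.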

recko_paper

The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
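The substitution idea can be sketched for the static-switching case as follows, assuming linear (AR-type) regression components with fixed mixture weights; the coefficients and weights are hypothetical:

```python
import numpy as np

def mixture_predict(x, weights, coefs, steps):
    """Multi-step prediction with a static-switching mixture of linear
    regression components: at each step every component predicts from the
    current regressor, the point prediction is the weight-averaged mix, and
    that estimate is substituted back into the regressor for the next step."""
    x = list(x)                                       # regression vector (lags)
    preds = []
    for _ in range(steps):
        comp = [float(np.dot(c, x)) for c in coefs]   # per-component prediction
        y = float(np.dot(weights, comp))              # weighted mixture prediction
        preds.append(y)
        x = [y] + x[:-1]                              # substitute estimate back
    return preds

# two hypothetical AR(2) components with static weights
coefs   = [np.array([0.8, 0.1]), np.array([0.4, 0.5])]
weights = [0.7, 0.3]
print(mixture_predict([1.0, 0.5], weights, coefs, steps=3))
```

In the dynamic-switching case described above, the weights themselves would also be updated from the previous step's estimates rather than held fixed.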

Modeling and quantification of uncertainties in numerical aerodynamics

Input uncertainties: angle of attack, Mach number, and random shape geometry. Output uncertainties: lift, drag, the lift and drag coefficients, and the full pressure, density, velocity, and turbulence kinetic energy fields.

StrucA final report

This document describes a finite element analysis project involving the development of a finite element code. It summarizes the course content, describes the coding process, presents results from analyzing a plate with a circular hole using different mesh densities, and compares the accuracy of stress predictions across the meshes. Key results include the strategic mesh achieving similar stress prediction accuracy to the densest mesh while using only 1/4 as many elements. The project improved the author's coding and finite element analysis skills.

351 b p.3

The document summarizes four numerical methods commonly used in geomechanics:
1. The Distinct Element Method (DEM) explicitly models discontinuities.
2. The Discontinuous Deformation Analysis Method (DDA) can consider discontinuities explicitly or implicitly.
3. The Bonded Particle Method (BPM) models geomaterials as an assembly of discrete particles.
4. The Artificial Neural Network Method (ANN) is a data-driven modeling approach not classified as continuum or discontinuum.
The document provides a brief overview of the fundamental algorithms of each method and examples of their applications.

How business process mapping saved an IT project.

How do we help a project in jeopardy of delivering a solution that does not meet customer needs? During this session we will describe how we answered that question by applying business process mapping techniques. The project goal was to automate multiple manual processes that had been developed over time to fulfill marketing orders. The customer had successfully implemented these processes using a collection of desktop spreadsheet and email applications and was asking for help to modernize. We will analyze the initial approach used to gather requirements and how changing to a process-centric approach allowed us to better understand which requirements were missed. We will also review how we incorporated elements of the Business Process Model and Notation specification into our overall approach. By using this approach we brought IT and the business together, speaking the same language, and provided a solution that met their needs.

Building Business Applications with DMN and BPMN

Presentation from Denis Gagne from Trisotech and Matteo Mortari from Red Hat at the Digital Transformation Summit of BPM.com

Introduction to LeanLogistics

LeanLogistics provides transportation management services and technology solutions to help companies optimize their supply chains. Their flagship product is an on-demand transportation management system (TMS) that gives users access to LeanLogistics' network of carriers and transportation data. They also offer managed transportation services, freight optimization services, and consulting. LeanLogistics helps clients implement their solutions quickly, achieve visibility across their supply chains, and realize cost savings and other benefits.

bpmNEXt 2016 - Denis Gagne

This document discusses digital transformation and the need for organizations to adapt to rapid technological changes, evolving customer behaviors, and increasing competition. It emphasizes that companies must define a digital strategy to evolve with digital trends and leverage new technologies like mobile, cloud, social media, and analytics. The document promotes a customer-centric and outcome-focused approach to digital transformation and stresses the importance of execution, change management, and having a shared vision. It introduces Trisotech as a company that can help organizations envision the future, align strategies, and implement tools like process, case, and decision management to streamline innovation and digital transformation through visualization and insights.

Open Source Workflowmanagement mit BPMN und CMMN

19 November 2014, camunda at the JUG HH: open-source workflow management with BPMN and CMMN.

Devenir digital (Fr)

Trisotech's presentation at the Montreal workshop on the digital transformation of enterprises.

Mapping supply chains

The document discusses supply chain mapping, including what a supply chain map is, how mapping can help businesses, and how to create a supply chain map. It provides an example of how two companies, Capital Equipment Inc. and Mare Technologies, collaborated to map their supply chain, identify inefficiencies, prioritize improvements, and implement changes that reduced lead times, work-in-process, and costs. The document also covers potential difficulties in supply chain collaboration and provides an activity for mapping the peanut butter supply chain of Ritz Peanut Butter Co.

Integration of BPMN and CMMN

Prof. Dr. Knut Hinkelmann presents an approach to integrating BPMN, CMMN and DMN for modeling both processes and cases. BPMN covers structured processes while CMMN handles cases, but a combined language called BPCMN can model both. BPCMN uses BPMN elements like tasks, gateways and sequence flows as well as CMMN elements like sentries and discretionary tasks. Business logic and rules can be modeled using DMN. Together these three languages provide an integrated way to model all aspects of knowledge processes, including process flow, cases and business decisions.

Lean Logistics Operations Process Map

This document outlines a lean logistics operations process map to streamline processes, reduce inventory, and control supply chain networks. The process map details four phases - Plan, Do, Check, Act - to continuously improve customer satisfaction, reliability, supplier performance, and total logistics costs through strategic planning, tactical execution, performance monitoring, and root cause analysis. Key performance indicators are tracked to stabilize and sustain the lean supply chain long-term.

Integrated BPMN, CMMN and DMN - Combining Processes, Cases and Decisions

The document discusses integrating Business Process Model and Notation (BPMN), Case Management Model and Notation (CMMN), and Decision Model and Notation (DMN). It describes how BPMN is used for modeling processes, CMMN for modeling cases, and DMN for modeling decisions. The document promotes combining these three modeling approaches and claims its Digital Enterprise Suite allows for drawing, simulating, analyzing, synthesizing, and intelligently executing models in an integrated manner using an underlying digital enterprise graph.

A NEW COMPLEXITY METRIC FOR UML SEQUENCE DIAGRAMS

This document proposes new metrics for measuring the complexity of UML sequence diagrams. It begins with background on UML sequence diagrams and discusses existing complexity metrics for object-oriented designs that are not suitable for measuring sequence diagram complexity. It then identifies measurable attributes of sequence diagrams, including lifelines and messages. New base and derived metrics are defined that measure weighted numbers of messages into and out of lifelines, as well as a weighted number of lifelines. The metrics are applied to example sequence diagrams for a hotel order system and a bank login system. Theoretical validation is provided by evaluating the metrics against Weyuker's nine properties of good complexity metrics. The metrics were found to satisfy the properties and to provide a meaningful complexity measure for sequence diagrams.
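The base metrics, weighted counts of messages into and out of each lifeline, can be sketched as below. The message kinds, weights, and the small hotel-order interaction are hypothetical illustrations, not the paper's calibration:

```python
from collections import Counter

def lifeline_message_metrics(messages, weights):
    """Base metrics in the spirit of the paper: weighted number of messages
    into and out of each lifeline, given a per-kind weight table."""
    w_in, w_out = Counter(), Counter()
    for sender, receiver, kind in messages:
        w_out[sender] += weights[kind]
        w_in[receiver] += weights[kind]
    lifelines = set(w_in) | set(w_out)
    return {l: (w_in[l], w_out[l]) for l in lifelines}

# hypothetical hotel-order interaction; weight synchronous > reply messages
weights  = {"sync": 2, "reply": 1}
messages = [("Guest", "Reception", "sync"),
            ("Reception", "Booking", "sync"),
            ("Booking", "Reception", "reply"),
            ("Reception", "Guest", "reply")]
print(lifeline_message_metrics(messages, weights))
```

Derived metrics would then aggregate these per-lifeline values (e.g., sum them with a weighted lifeline count) into a single diagram-level complexity number.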

Dimensional analysis

Dimensional analysis is the analysis of the dimensions of physical quantities. It reduces the number of variables in a fluid phenomenon by combining some of the variables into parameters that have no dimensions.
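The idea can be made concrete with dimension-exponent vectors over the base dimensions [M, L, T]: a product of powers of the variables is dimensionless exactly when the exponents cancel, as the classic Reynolds number shows. The pipe-flow variable set below is a standard textbook example:

```python
import numpy as np

# dimension exponents [M, L, T] for each variable (standard pipe-flow set)
dims = {
    "rho": np.array([1, -3, 0]),   # density            M L^-3
    "v":   np.array([0, 1, -1]),   # velocity           L T^-1
    "D":   np.array([0, 1, 0]),    # pipe diameter      L
    "mu":  np.array([1, -1, -1]),  # dynamic viscosity  M L^-1 T^-1
}

def exponents(combo):
    """Dimension exponents of a product of powers of the variables,
    e.g. {"rho": 1, "mu": -1} means rho / mu."""
    return sum(p * dims[name] for name, p in combo.items())

# Reynolds number rho * v * D / mu: all exponents cancel -> dimensionless
reynolds = {"rho": 1, "v": 1, "D": 1, "mu": -1}
print(exponents(reynolds))   # [0 0 0]
```

Each independent dimensionless group found this way replaces one dimensional variable, which is how the method lowers the variable count.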

Assessing Error Bound For Dominant Point Detection

This document compares the error bounds of two classes of dominant point detection methods: 1) methods based on reducing a distance metric like maximum deviation or integral square error, and 2) methods based on digital straight segments. For distance-based methods, the error bound is determined by the maximum deviation of pixels from the line segments between dominant points. For digital straight segment methods, the error bound depends on control parameters that define blurred or approximate digital straight segments. The document analyzes specific methods in each class and plots the theoretical error bounds to facilitate understanding and parameter selection for dominant point detection methods.

Solving Method of H-Infinity Model Matching Based on the Theory of the Model ...

People used to solve high-order H model matching based on H control theory, it is too
difficult. In this paper, we use model reduction theory to solve high-order H model matching problem, A
new method to solve H model matching problem based on the theory of the model reduction is
proposed． The simulation results show that the method has better applicability and can get the expected
performance．

Master thesis Francesco Serafin

Mathematical models play a fundamental role in many scientific and en- gineering fields in today’s world. They are used for example in geotechnics to evalute the hillslope stability, in weather science to predict weather trends and produce weather reports, in structural design to study the resistance to stress, and in fluid dynamics to compute fluid flows and air flows.
Consequently mathematical models are evolving all the time: more and more new numerical methods are being invented to solve the Partial Dif- ferential Equations (PDE)s that describe physical problems with increasing precision, and more and more complex and efficient processor units are being created to reduce the computational time.
Therefore, the code into which the mathematical models are translated has to be “dynamic” in order to be easily updated on the basis of the con- tinuous developments (Formetta et al. (2014) [16]).
On the other hand, completely different physical problems are often de- scribed using similar PDEs. For this reason, the numerical methods which provide solutions to different problems can be the same. This suggest the implementation of an IT infrastructure that hosts a standard structure for solving PDEs and that can serve various disciplines with the minimum of hassles.
This work is focused on the application of what is envisioned above, with the main purpose of the creation of an abstract code for implementing every type of mathematical model described by PDEs.
We work on hydrological topics but we hope to design a structure of general interest. Obviously the final goal of any work of this type is to find a proper numerical solver, and therefore, part of the thesis is devoted to the analysis of the problem under scrutiny, and the description of the solution found.

Simulation of nonlinear simulated moving bed chromatography using ChromWorks...

Simulation of nonlinear simulated moving bed chromatography
using ChromWorks computational software

2007 santiago marchi_cobem_2007

1) The document analyzes optimum parameters for a geometric multigrid method for solving a two-dimensional thermoelasticity problem and Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, single-grid methods, and other literature to determine if coupling equations impacts multigrid performance.

Multi criteria decision making

This document discusses various multi-criteria decision making (MCDM) methods. It describes the objectives and steps in MCDM methodology. Three MCDM methods are explained in detail: Compromise Programming (CP), Preference Ranking Organisation METHod of Enrichment Evaluation (PROMETHEE), and the Weighted Average Method. An example is provided to illustrate the application of each method.

recko_paper

The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.

Modeling and quantification of uncertainties in numerical aerodynamics

Input uncertainties - angle of attack, Mach number, random shape geometry. Output uncertainty - lift, drag, lift and drag- coefficients, the whole pressure, density, velocity, turbulence kinetic energy fields.

StrucA final report

This document describes a finite element analysis project involving the development of a finite element code. It summarizes the course content, describes the coding process, presents results from analyzing a plate with a circular hole using different mesh densities, and compares the accuracy of stress predictions across the meshes. Key results include the strategic mesh achieving similar stress prediction accuracy to the densest mesh while using only 1/4 as many elements. The project improved the author's coding and finite element analysis skills.

351 b p.3

The document summarizes four numerical methods commonly used in geomechanics:
1. The Distinct Element Method (DEM) explicitly models discontinuities.
2. The Discontinuous Deformation Analysis Method (DDA) can consider discontinuities explicitly or implicitly.
3. The Bonded Particle Method (BPM) models geomaterials as an assembly of discrete particles.
4. The Artificial Neural Network Method (ANN) is a data-driven modeling approach not classified as continuum or discontinuum.
The document provides a brief overview of the fundamental algorithms of each method and examples of their applications.

Assessing Error Bound For Dominant Point Detection

Solving Method of H-Infinity Model Matching Based on the Theory of the Model ...

Master thesis Francesco Serafin

Simulation of nonlinear simulated moving bed chromatography using ChromWorks...

2007 santiago marchi_cobem_2007

How business process mapping saved an IT project.

How do we help a project in jeopardy of delivering a solution that does not meet customer needs? During this session we will describe how we answered that question by applying business process mapping techniques. The project goal was to automate multiple manual processes that had been developed over time to fulfill marketing orders. The customer had successfully implemented these processes using a collection of desktop spreadsheet and email applications and was asking for help to modernize. We will analyze the initial approach used to gather requirements and how changing to a process-centric approach allowed us to better understand which requirements were missed. We will also review how we incorporated elements of the Business Process Model and Notation specification into our overall approach. By using this approach we brought IT and the business together, speaking the same language, and provided a solution that met their needs.

Building Business Applications with DMN and BPMN

Presentation from Denis Gagne from Trisotech and Matteo Mortari from Red Hat at the Digital Transformation Summit of BPM.com

Introduction to LeanLogistics

LeanLogistics provides transportation management services and technology solutions to help companies optimize their supply chains. Their flagship product is an on-demand transportation management system (TMS) that gives users access to LeanLogistics' network of carriers and transportation data. They also offer managed transportation services, freight optimization services, and consulting. LeanLogistics helps clients implement their solutions quickly, achieve visibility across their supply chains, and realize cost savings and other benefits.

bpmNEXt 2016 - Denis Gagne

This document discusses digital transformation and the need for organizations to adapt to rapid technological changes, evolving customer behaviors, and increasing competition. It emphasizes that companies must define a digital strategy to evolve with digital trends and leverage new technologies like mobile, cloud, social media, and analytics. The document promotes a customer-centric and outcome-focused approach to digital transformation and stresses the importance of execution, change management, and having a shared vision. It introduces Trisotech as a company that can help organizations envision the future, align strategies, and implement tools like process, case, and decision management to streamline innovation and digital transformation through visualization and insights.

Open Source Workflowmanagement mit BPMN und CMMN

Camunda at the JUG HH (19 Nov 2014): Open Source Workflowmanagement mit BPMN und CMMN

Devenir digital (Fr)

Trisotech's presentation at the Montreal workshop on the digital transformation of businesses.

Mapping supply chains

The document discusses supply chain mapping, including what a supply chain map is, how mapping can help businesses, and how to create a supply chain map. It provides an example of how two companies, Capital Equipment Inc. and Mare Technologies, collaborated to map their supply chain, identify inefficiencies, prioritize improvements, and implement changes that reduced lead times, work-in-process, and costs. The document also covers potential difficulties in supply chain collaboration and provides an activity for mapping the peanut butter supply chain of Ritz Peanut Butter Co.

Integration of BPMN and CMMN

Prof. Dr. Knut Hinkelmann presents an approach to integrating BPMN, CMMN and DMN for modeling both processes and cases. BPMN covers structured processes while CMMN handles cases, but a combined language called BPCMN can model both. BPCMN uses BPMN elements like tasks, gateways and sequence flows as well as CMMN elements like sentries and discretionary tasks. Business logic and rules can be modeled using DMN. Together these three languages provide an integrated way to model all aspects of knowledge processes, including process flow, cases and business decisions.

Lean Logistics Operations Process Map

This document outlines a lean logistics operations process map to streamline processes, reduce inventory, and control supply chain networks. The process map details four phases - Plan, Do, Check, Act - to continuously improve customer satisfaction, reliability, supplier performance, and total logistics costs through strategic planning, tactical execution, performance monitoring, and root cause analysis. Key performance indicators are tracked to stabilize and sustain the lean supply chain long-term.

Integrated BPMN, CMMN and DMN - Combining Processes, Cases and Decisions

The document discusses integrating business process modeling notation (BPMN), case management modeling notation (CMMN), and decision modeling notation (DMN). It describes how BPMN is used for modeling processes, CMMN for modeling cases, and DMN for modeling decisions. The document promotes combining these three modeling approaches and claims its Digital Enterprise Suite allows for drawing, simulating, analyzing, synthesizing, and intelligently executing models in an integrated manner using an underlying digital enterprise graph.

A NEW COMPLEXITY METRIC FOR UML SEQUENCE DIAGRAMS

This document proposes new metrics for measuring the complexity of UML sequence diagrams. It begins with background on UML sequence diagrams and discusses existing complexity metrics for object-oriented designs that are not suitable for measuring sequence diagram complexity. It then identifies measurable attributes of sequence diagrams, including lifelines and messages. New base and derived metrics are defined that measure weighted numbers of messages into and out of lifelines, as well as a weighted number of lifelines. The metrics are applied to example sequence diagrams for a hotel order system and a bank login system. Theoretical validation is provided by evaluating the metrics against Weyuker's nine properties of good complexity metrics. The metrics were found to satisfy the properties and to provide a meaningful complexity measure for sequence diagrams.
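
The counting idea behind such base metrics can be illustrated roughly like this. The weights (2 for calls, 1 for replies) and the hotel-style interaction are made up for illustration and are not the paper's exact definitions.

```python
# Rough illustration: weighted message counts into and out of each
# lifeline, where a message is a (sender, receiver, weight) triple.

from collections import defaultdict

def lifeline_message_weights(messages):
    out_w, in_w = defaultdict(float), defaultdict(float)
    for sender, receiver, w in messages:
        out_w[sender] += w      # weighted messages out of the sender
        in_w[receiver] += w     # weighted messages into the receiver
    lifelines = set(out_w) | set(in_w)
    return {l: (out_w[l], in_w[l]) for l in lifelines}

# Hypothetical interaction: synchronous calls weigh 2, replies weigh 1.
msgs = [("Guest", "Desk", 2), ("Desk", "Booking", 2),
        ("Booking", "Desk", 1), ("Desk", "Guest", 1)]
print(lifeline_message_weights(msgs))
```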

Dimensional analysis

Dimensional analysis is the analysis of the dimensions of physical quantities. It reduces the number of variables in a fluid phenomenon by combining some of the variables into parameters which have no dimensions.

Operation's research models

This document provides an overview of various operations research (OR) models, including: linear programming, network flow programming, integer programming, nonlinear programming, dynamic programming, stochastic programming, combinatorial optimization, stochastic processes, discrete time Markov chains, continuous time Markov chains, queuing, and simulation. It describes the basic components and applications of each model type at a high level.
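
As a tiny concrete instance of the linear programming model type listed above (a toy problem invented here, not one from the document): maximize 3x + 2y subject to x + y ≤ 4, x ≤ 3, x ≥ 0, y ≥ 0. For a two-variable LP the optimum lies at a vertex of the feasible region, so it suffices to evaluate the objective at each corner point.

```python
# Corner points of the feasible region for the toy LP above.
corners = [(0, 0), (3, 0), (3, 1), (0, 4)]
objective = lambda p: 3 * p[0] + 2 * p[1]
best = max(corners, key=objective)   # optimum is at a vertex
print(best, objective(best))
```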

A Validation of Object-Oriented Design Metrics as Quality Indicators

The document summarizes a research paper that empirically validated several object-oriented design metrics proposed by Chidamber and Kemerer as indicators of fault-prone classes. The study analyzed 6 metrics on 180 classes from a system. Univariate analysis found 5 metrics to be significantly correlated with fault probability. Multivariate analysis using these 5 metrics achieved better prediction of faulty classes than models using traditional code metrics. The research validated that these OO design metrics can help identify fault-prone classes early in the development lifecycle.

A practical approach for model based slicing

This document presents a methodology for model-based slicing of UML sequence diagrams to extract submodels. The methodology involves:
1. Generating a sequence diagram from requirements and converting it to XML.
2. Parsing the XML with a DOM parser to extract message information.
3. Slicing the message information based on a slicing criterion, such as a variable, to extract relevant messages.
4. Converting the sliced messages back into a simplified sequence diagram fragment focused on the slicing criterion.
The methodology aims to address the difficulty of visualizing and testing large, complex software models by extracting a relevant submodel based on a slicing criterion, making the model easier to understand and test.
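
The parse-and-slice steps can be sketched as follows. The XML schema, message names, and criterion here are invented for illustration; the paper's actual export format will differ.

```python
# Parse messages from a hypothetical XML export of a sequence diagram
# and keep only the messages that mention the slicing-criterion variable.

import xml.etree.ElementTree as ET

XML = """<diagram>
  <message from="UI" to="Cart" name="addItem(item)"/>
  <message from="Cart" to="Stock" name="reserve(item)"/>
  <message from="UI" to="Auth" name="login(user)"/>
</diagram>"""

def slice_messages(xml_text, criterion):
    root = ET.fromstring(xml_text)
    return [m.attrib["name"] for m in root.iter("message")
            if criterion in m.attrib["name"]]

print(slice_messages(XML, "item"))
```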

Specifying quantities in software models

The correct representation of numerical values and their units is an essential requirement for the design and development of any engineering application that deals with real-world physical systems. Although solutions exist for several programming languages and simulation frameworks, this problem is not fully solved in the case of software models. This talk discusses how both measurement uncertainty and units can be effectively incorporated into software models, becoming part of their basic type systems and hence ensuring statically type- and unit-safe assignments and operations of those model elements representing physical quantities.
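
One way such unit-safety can look in code (a toy sketch under my own assumptions, not the talk's actual type system): a quantity carries its dimension exponents, addition rejects mismatched units, and division combines them.

```python
# Toy unit-safe quantity; dims = (length, time) exponents.

class Quantity:
    def __init__(self, value, dims):
        self.value, self.dims = value, dims

    def __add__(self, other):
        # Statically different units -> runtime rejection in this toy.
        if self.dims != other.dims:
            raise TypeError("unit mismatch: %s vs %s" % (self.dims, other.dims))
        return Quantity(self.value + other.value, self.dims)

    def __truediv__(self, other):
        # Dividing quantities subtracts dimension exponents.
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

d = Quantity(6.0, (1, 0))   # 6 metres
t = Quantity(2.0, (0, 1))   # 2 seconds
v = d / t                   # velocity: dims (1, -1)
print(v.value, v.dims)
```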

COMPLEXITY METRICS FOR STATECHART DIAGRAMS

Model-Driven Development and the Model-Driven Architecture paradigm have recently emphasized the importance of good models. In the object-oriented paradigm, Statechart diagrams are among the key artefacts. Statechart diagrams have inherent complexity which increases every time the diagrams are modified, and this complexity poses problems for comprehending the diagrams. Statechart diagrams provide a foundation for analysing the dynamic behaviour of systems, and therefore their quality should be maintained. The aim of this study is to develop and validate metrics for measuring the complexity of UML Statechart diagrams. The study used design science, which involved the definition of metrics, the development of a metrics tool, and theoretical and empirical validation of the metrics. For measuring the cognitive complexity of statechart diagrams, the study proposes three metrics. The defined metrics were used to calculate the complexity of two sample statechart diagrams and found relevant. Theoretical validation of the metrics against Weyuker's nine properties showed they are mathematically sound. Empirical validation indicated that all three metrics are good measures of the cognitive complexity of statecharts.

A Comparison of Traditional Simulation and MSAL (6-3-2015)

This document compares traditional simulation approaches to the Model-Simulation-Analysis-Looping (MSAL) approach. It provides background information on system modeling and simulation basics, including conceptual models, simulation programs, sensitivity analysis, Monte Carlo methods, and simulation optimization. It then discusses risk and uncertainty, modeling systems of systems, and the current state of modeling and simulation in systems engineering. Finally, it introduces the MSAL approach, which uses graphs, analytics, and repeated simulation loops to address the increased complexity and uncertainty in systems of systems compared to traditional approaches. The MSAL approach aims to provide benefits like improved handling of uncertainty and complexity.
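
The Monte Carlo idea mentioned above, in miniature: propagate an input uncertainty through a model by repeated sampling. The model and distribution here are invented for illustration.

```python
import random

def model(x):                      # hypothetical system response
    return 3.0 * x + 2.0

random.seed(0)                     # reproducible sampling
# Propagate a Gaussian input uncertainty (mean 10, sd 1) through the model.
samples = [model(random.gauss(10.0, 1.0)) for _ in range(10000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))              # analytically, E[output] = 3*10 + 2 = 32
```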

International Journal of Computational Engineering Research(IJCER)

International Journal of Computational Engineering Research (IJCER) is an international, monthly, English-language online journal. It publishes original research work that contributes significantly to scientific knowledge in engineering and technology.

A SYSTEMC/SIMULINK CO-SIMULATION ENVIRONMENT OF THE JPEG ALGORITHM

In the past decades, the functionality of embedded systems and the time-to-market pressure have both been continuously increasing. Simulating an entire system, including both hardware and software, from early design stages is one of the effective approaches to improving design productivity, and a large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems; however, in order to validate the function of the entire system, it has to be simulated together with the application software and hardware. Indeed, traditional methods of verification have proven insufficient for complex digital systems: register-transfer-level test-benches have become too complex to manage and too slow to execute. New verification methods and techniques have emerged over the past few years; high-level test-benches, assertion-based verification, formal methods, and hardware verification languages are just a few examples of the intense research activity driving the verification domain.

dimensional_analysis.pptx

The document discusses dimensional analysis and Buckingham Pi theorem. It begins by defining dimensions, units, and fundamental vs. derived dimensions. It then discusses dimensional homogeneity and uses examples to show how dimensional analysis can be used to identify non-dimensional parameters and reduce the number of variables in equations. The Buckingham Pi theorem is introduced as a method to systematically create dimensionless pi terms from physical variables. Steps of the theorem and examples applying it are provided. Overall, the document provides an overview of dimensional analysis and Buckingham Pi theorem as tools for understanding relationships between physical quantities and reducing complexity in experimental modeling.
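
A standard textbook illustration of the Buckingham Pi procedure, not taken from the slides themselves, is the drag on a sphere:

```latex
% Drag force on a sphere: F = f(\rho, V, D, \mu).
% n = 5 variables, k = 3 fundamental dimensions (M, L, T),
% so the Pi theorem gives n - k = 2 dimensionless groups:
\Pi_1 = \frac{F}{\rho V^2 D^2}, \qquad
\Pi_2 = \frac{\rho V D}{\mu} \quad (\text{the Reynolds number}),
\qquad \text{hence} \quad \Pi_1 = \phi(\Pi_2).
```

Five dimensional variables thus collapse to a single relationship between two dimensionless groups, which is exactly the variable reduction the theorem promises.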

IEEE

The document discusses using a support vector machine (SVM) classifier to classify emails as spam or not spam. It explores how changing the parameter C in the SVM algorithm affects the accuracy on training and test datasets. The author finds that very low or very high C values lead to underfitting or overfitting, respectively, while an intermediate C value of 0.01 achieved the best balance of high accuracy on both training and test data. Graphs are presented showing how differences in accuracy between training and test datasets change with different C values.
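
The role of C can be illustrated on a toy 1-D soft-margin objective. This is a hand-rolled sketch with invented data and two fixed candidate separators, not the SVM solver the document used: "wide" is a small-|w| (wide-margin) separator, while "tight" fits the borderline point (0.2, +1) at the cost of a narrow margin.

```python
# Soft-margin objective: 0.5*w^2 + C * sum of hinge losses.

def svm_objective(w, b, C, data):
    margin_term = 0.5 * w * w
    hinge = sum(max(0.0, 1.0 - y * (w * x + b)) for x, y in data)
    return margin_term + C * hinge

data = [(-2, -1), (-1, -1), (1, 1), (2, 1), (0.2, 1)]
wide, tight = (0.5, 0.0), (4.0, 0.0)   # candidate (w, b) pairs

def preferred(C):
    # The objective picks the candidate with the lower total cost.
    return min((wide, tight), key=lambda p: svm_objective(p[0], p[1], C, data))

for C in (0.01, 10.0):
    print(C, "->", "wide margin" if preferred(C) == wide else "tight margin")
```

Small C tolerates margin violations (underfitting tendency); large C penalizes them heavily (overfitting tendency), mirroring the accuracy trade-off the document describes.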

Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi...

A fundamental problem in machine learning is identifying the most representative subset of features from which to construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality reduction methods were: locality-preserving projection (LPP), locally linear embedding (LLE), isometric mapping (ISOMAP), and spectral regression (SR). High classification rates were achieved; in some combinations the classification rate was 100%, and in most cases it was about 95%. It was also found that the classification rate increases with the size of the reduced space, and that the optimal space dimension is 60. The results were validated by measuring validation indices such as the Xie-Beni index, the Dunn index, and the alternative Dunn index; these indices confirm that the optimal reduced space dimension is d=60.

I017144954

The document presents the Burr Type III software reliability growth model based on non-homogeneous Poisson process (NHPP) with time domain data. It describes the background and formulation of the Burr Type III and NHPP models. Parameter estimation for the Burr Type III model is performed using maximum likelihood estimation on ungrouped time domain failure data. Goodness of fit is analyzed to assess how well the model fits real software failure data sets.

Building a new CTL model checker using Web Services

Florin Stoica, Laura Stoica, "Building a new CTL model checker using Web Services", Proceedings of the 21st International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), Split-Primosten, Croatia, 18-20 September 2013, pp. 285-289.
DOI=10.1109/SoftCOM.2013.6671858 http://dx.doi.org/10.1109/SoftCOM.2013.6671858

Qt unit i

The document discusses different types of mathematical models, including deterministic and probabilistic models, with examples of each. It also discusses building, verifying, and refining mathematical models, and covers optimization models and their components, including objective functions and constraints. Finally, it discusses specific types of optimization models such as linear programming, network flow programming, and integer programming.

07 18sep 7983 10108-1-ed an edge edit ari

Edge detection is an important and classical problem in the medical field and in computer vision. The Caliber Fuzzy C-Means (CFCM) clustering algorithm for edge detection depends on the selection of initial cluster-centre values; it organizes a collection of pixels into clusters such that pixels within a cluster are more similar to each other than to pixels outside it. Using the CFCM technique, the BSDS image is first clustered, and the clustered image is then given as input to the basic Canny edge-detection algorithm. Applying new parameters with fewer operations to CFCM proves fruitful: in the results, the CFCM clustering function commonly divides the image into four clusters. The proposed method is a robust modification of the fuzzy c-means and Canny algorithms; it converges faster than the other edge-detection algorithms considered and produces better edge detection than traditional image edge-detection techniques.

Stereo matching algorithm using census transform and segment tree for depth e...

This article proposes a stereo-matching algorithm for the correspondence process used in applications such as augmented reality, autonomous vehicle navigation, and surface reconstruction. The proposed framework is developed as a series of functions; its final result is a disparity map, which encodes the depth estimate. The framework's input is a stereo pair of left and right images. The algorithm has four steps: matching-cost computation using the census transform, cost aggregation using a segment tree, optimization using the winner-takes-all (WTA) strategy, and a post-processing stage using a weighted median filter. In experiments on the standard Middlebury benchmark, the disparity maps show low average errors of 9.68% for the nonocc attribute and 18.9% for the all attribute. On average, the method performs very competitively with other available methods in the benchmark.
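
The census-transform step mentioned above can be sketched as follows: each pixel is replaced by a bit string recording comparisons with its 3x3 neighbours. This is only the first step of the pipeline, with a made-up image; the full method also aggregates costs, optimizes, and filters.

```python
# Census transform of the centre pixel over a 3x3 neighbourhood:
# each neighbour contributes a 1-bit "is darker than centre?" flag.

def census_3x3(img, r, c):
    centre = img[r][c]
    bits = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue                      # skip the centre pixel
            bits = (bits << 1) | (1 if img[r + dr][c + dc] < centre else 0)
    return bits

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(bin(census_3x3(img, 1, 1)))
```

Matching costs between left and right pixels are then Hamming distances between such bit strings.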

OOAD - Ch.09 - Software Project Estimation.pptx

Using UML and OCL Models to realize High-Level Digital Twins

Digital twins constitute virtual representations of physically existing systems. However, their inherent complexity makes them difficult to develop and prove correct. In this paper, we explore the use of UML and OCL, complemented with an executable language, SOIL, to build and test digital twins at a high level of abstraction. We also show how to realize the bidirectional connection between the UML models of the digital twin in the USE tool with the physical twin, using an architectural framework centered on a data lake. We have built a prototype of the framework to demonstrate our ideas, and validated it by developing a digital twin of a Lego Mindstorms car. The results allow us to show some interesting advantages of using high-level UML models to specify virtual twins, such as simulation, property checking, and some other types of tests.

Measuring method complexity of the case management modeling and notation (CMMN)

Compares the modeling notations CMMN, BPMN, EPC, and UML Activity Diagrams using the meta-model-based method complexity approach introduced by Rossi and Brinkkemper.

2010 04-29 mm (carson, california - csu-dh) petri-nets introduction

The document is a presentation by Mike Marin from IBM on Petri nets and their use in business process modeling. It introduces Petri nets as directed bipartite graphs that can model discrete systems and have been used as the theoretical foundation for workflow and business process management systems. It then provides an overview of Petri nets, including their history, applications, definitions, properties, analysis methods, and how they relate to business process modeling.
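
The token-game semantics that make Petri nets suitable for process modeling can be sketched in a few lines. The place and transition names are made up: a transition is enabled when every input place holds a token, and firing moves tokens from input to output places.

```python
# Minimal Petri-net firing sketch: a transition is (input places, output
# places); a marking maps each place to its token count.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1              # consume a token from each input place
    for p in outputs:
        m[p] = m.get(p, 0) + 1 # produce a token in each output place
    return m

t_approve = (["submitted"], ["approved"])
m0 = {"submitted": 1}
print(enabled(m0, t_approve), fire(m0, t_approve))
```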

2009 11-04 mm (carson, california - csu-dh) bpm introduction

“Business Process Management – An Introduction”. Introductory presentation given by Mike Marin to Computer Science students at California State University Dominguez Hills in 2009.

2007 11-09 mm (costa rica - incae cit omg - spanish) modelando con bpmn y xpdl

Spanish version of "Business Process Modeling with BPMN & XPDL". Introduction to business process modeling presented by Mike Marin at the INCAE (Costa Rica) during a Club de Investigaciones Tecnológicas (CIT) and OMG event.

2007 11-09 mm (costa rica - incae cit omg) modeling with bpmn and xpdl

"Business Process Modeling with BPMN & XPDL". Introduction to business process modeling presented by Mike Marin at the INCAE (Costa Rica) during a Club de Investigaciones Tecnológicas (CIT) and OMG event.

2006 mm,ks,jb (miami, florida bpm summit) xpdl tutorial

"XPDL 2.0 and BPMN Tutorial". XPDL tutorial presented by Mike Marin, Keith Swenson, and Justin Brunt during the Business Process Management Summit (February 1, 2006 – Miami, Florida).

2005 10-11 mm (seoul, korea - bpm korea forum) xpdl2 tutorial

"XPDL 2.0 Tutorial". Introductory tutorial on the emerging XPDL 2.0 standard, presented by Mike Marin in 2005 during the joint BPM Korea Forum and WfMC technical meeting in Seoul, Korea.

2001 09 ma,ma b2 b process integration tutorial

"XML-based standards for B2B Process Integration". Tutorial about WfMC standards in the area of workflow and B2B, presented by Martin Ader and Mike Marin.

2000 09 dh,mm,mts,mz m (xml world 2000) wf-xml tutorial

“XML-based standards for B2B Process Integration”. Tutorial about WfMC XML standards in the area of workflow and B2B, presented at XML World 2000 by David Hollingsworth, Mike Marin, Marc-Thomas Schmidt, and Michael zur Muehlen.

1998 08-28 mm (costa rica, una - spanish) - workflow-documents

Presentation to students in a master's program in computer science at the Universidad Nacional de Costa Rica.

Sharlene Leurig - Enabling Onsite Water Use with Net Zero Water

Presented at the June 6-7 Texas Alliance of Groundwater Districts Business Meeting (Texas Alliance of Groundwater Districts).

Bob Reedy - Nitrate in Texas Groundwater.pdf

Presented at the June 6-7 Texas Alliance of Groundwater Districts Business Meeting.

Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...

University of Maribor

Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/

aziz sancar nobel prize winner: from mardin to nobel

aziz sancar nobel prize winner

3D Hybrid PIC simulation of the plasma expansion (ISSS-14)

3D Particle-In-Cell (PIC) algorithm; plasma expansion in the dipole magnetic field.

Nucleic Acid-its structural and functional complexity.

This presentation gives a brief overview of the structural and functional attributes of nucleotides and the structure and function of genetic materials, along with the impact of UV rays and pH upon them.

NuGOweek 2024 Ghent programme overview flyer

NuGOweek 2024 Ghent programme overview flyer

Randomised Optimisation Algorithms in DAPHNE

Slides from talk:
Aleš Zamuda: Randomised Optimisation Algorithms in DAPHNE.
Austrian-Slovenian HPC Meeting 2024 – ASHPC24, Seeblickhotel Grundlsee in Austria, 10–13 June 2024
https://ashpc.eu/

Micronuclei test.M.sc.zoology.fisheries.

This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test detects micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosomal separation at metaphase.

Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...

University of Maribor

Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/

Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...

Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.

Equivariant neural networks and representation theory

Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
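
A toy numerical check of what "equivariant" means, invented here and not from the talk: a layer that applies the same affine map to every element plus a mean term commutes with permutations of its input.

```python
# Permutation-equivariant toy layer: per-element scale plus a term
# depending only on the (permutation-invariant) mean of the input.

def layer(xs, a=2.0, b=0.5):
    m = sum(xs) / len(xs)
    return [a * x + b * m for x in xs]

xs = [1.0, 3.0, 2.0]
perm = [2, 0, 1]                       # a permutation of indices
lhs = layer([xs[i] for i in perm])     # permute, then apply the layer
rhs = [layer(xs)[i] for i in perm]     # apply the layer, then permute
print(lhs == rhs)                      # equivariance: both orders agree
```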

Deep Software Variability and Frictionless Reproducibility

University of Rennes, INSA Rennes, Inria/IRISA, CNRS

The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Shallowest Oil Discovery of Turkiye.pptx

The Petroleum System of the Çukurova Field - the Shallowest Oil Discovery of Türkiye, Adana

Cytokines and their role in immune regulation.pptx

This presentation covers the content and information on "Cytokines " and their role in immune regulation .

molar-distalization in orthodontics-seminar.pptx

orthodontic topic

ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx

Abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.

THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...

THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...Abdul Wali Khan University Mardan,kP,Pakistan

hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skillsThe binding of cosmological structures by massless topological defects

Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.

Sharlene Leurig - Enabling Onsite Water Use with Net Zero Water

Sharlene Leurig - Enabling Onsite Water Use with Net Zero Water

Bob Reedy - Nitrate in Texas Groundwater.pdf

Bob Reedy - Nitrate in Texas Groundwater.pdf

Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...

Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...

aziz sancar nobel prize winner: from mardin to nobel

aziz sancar nobel prize winner: from mardin to nobel

3D Hybrid PIC simulation of the plasma expansion (ISSS-14)

3D Hybrid PIC simulation of the plasma expansion (ISSS-14)

Oedema_types_causes_pathophysiology.pptx

Oedema_types_causes_pathophysiology.pptx

Nucleic Acid-its structural and functional complexity.

Nucleic Acid-its structural and functional complexity.

NuGOweek 2024 Ghent programme overview flyer

NuGOweek 2024 Ghent programme overview flyer

Randomised Optimisation Algorithms in DAPHNE

Randomised Optimisation Algorithms in DAPHNE

Micronuclei test.M.sc.zoology.fisheries.

Micronuclei test.M.sc.zoology.fisheries.

Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...

Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...

Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...

Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...

Equivariant neural networks and representation theory

Equivariant neural networks and representation theory

Deep Software Variability and Frictionless Reproducibility

Deep Software Variability and Frictionless Reproducibility

Shallowest Oil Discovery of Turkiye.pptx

Shallowest Oil Discovery of Turkiye.pptx

Cytokines and their role in immune regulation.pptx

Cytokines and their role in immune regulation.pptx

molar-distalization in orthodontics-seminar.pptx

molar-distalization in orthodontics-seminar.pptx

ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx

ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx

THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...

THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...

The binding of cosmological structures by massless topological defects

The binding of cosmological structures by massless topological defects

- 1. Metrics for the Case Management Modeling and Notation (CMMN) Specification. Mike A. Marin, Hugo Lotriet, John A. Van Der Poll, University of South Africa. September 30, 2015. SAICSIT 2015, Wallenberg Research Centre, Stellenbosch, South Africa. Mike A. Marin, Hugo Lotriet, John A. Van Der Poll: Complexity metrics for CMMN.
- 2. Outline: Motivation; Case Management Modeling and Notation; Methodology; CMMN Model; Modeling elements and annotators; Size Metric; Length Metric; Complexity Metric; Findings; Future Work.
- 3. Motivation. Business Process Management (BPM) is widely used by businesses to automate business processes in the enterprise. Extensive research has been conducted on several aspects of the technology, including complexity metrics. Commonly used notations are procedural and graph based: nodes represent activities, arcs represent routes. Business Process Model and Notation (BPMN) is the main BPM standard, created by the Object Management Group (OMG); BPMN version 1.0 was released in May 2004. The Case Management Modeling and Notation (CMMN) is a new process notation. Goals: understand modeling complexity for CMMN; identify and validate complexity metrics for CMMN.
- 4. Differences between traditional BPM and CMMN. Traditional BPM vs. CMMN: procedural vs. declarative; control flow vs. event based; process centric vs. data centric; the engine controls the process flow vs. knowledge workers control the flow; arcs describe the sequence vs. no predefined sequence. (BPMN and CMMN examples shown on the slide.)
- 5. Case Management Modeling and Notation (CMMN). CMMN is a new process modeling notation; version 1.0 was released in May 2014, created by the Object Management Group (OMG). Notation compatible with BPMN: diamonds represent guards (pre-conditions), rounded rectangles represent tasks. Declarative notation based on business artifacts with guard-stage-milestone. (CMMN example shown on the slide.)
- 6. Methodology. Extensive literature review on software complexity metrics, in particular for workflow and BPM. Formalized the definition of CMMN in order to define metrics and validate them. Identified three metrics: Size (CS), Length (CL), Complexity (CC). Used the theoretical and empirical validation framework for software product measures defined by Briand et al. to validate the three metrics. Used Weyuker's nine properties for evaluating software complexity measures to further validate the Complexity metric (CC).
- 7. CMMN Model. Definition (Model): A CMMN model C is defined as a tuple C = ⟨E, U, V, A⟩, where E is a set of modeling elements; U is a binary relationship in which two elements x and y in E are related if and only if they are contained in the same scope; V is a binary relationship in which two elements x and y in E are related if and only if an event from one (x) triggers the other (y); and A is a set of annotators used to indicate characteristics of elements in E.
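The four-part tuple above can be sketched as a small data structure. A minimal sketch in Python; the encoding (pairs for the relationships, strings for elements) and all names are illustrative assumptions, since the paper only defines the four sets abstractly:

```python
from dataclasses import dataclass

# Minimal sketch of a CMMN model C = <E, U, V, A>.
# Field names and the pair encoding are assumptions for illustration.
@dataclass(frozen=True)
class CmmnModel:
    elements: frozenset    # E: modeling elements
    same_scope: frozenset  # U: pairs (x, y) contained in the same scope
    triggers: frozenset    # V: pairs (x, y) where an event from x triggers y
    annotators: frozenset  # A: annotators marking characteristics of elements

# A toy model: a case plan containing one stage with one task.
model = CmmnModel(
    elements=frozenset({"case_plan", "stage1", "task_a"}),
    same_scope=frozenset({("stage1", "task_a")}),
    triggers=frozenset(),
    annotators=frozenset(),
)
print(len(model.elements))  # 3
```

Representing U and V as sets of pairs mirrors the paper's treatment of them as binary relationships over E.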
- 8. Modeling elements E and annotators A (shown graphically on the slide).
- 9. Size Metric. Definition (Size): The size of a model C, denoted by CS(C), is defined as the cardinality of E: CS(C) = |E|. Example: CS(C) = 8.
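The size metric reduces to a set cardinality. A minimal sketch, assuming the model's element set E is available as a Python set; the element names below are invented to reproduce an eight-element model like the slide's example:

```python
# CS(C) = |E|: the size of a model is the cardinality of its element set.
def cmmn_size(elements):
    return len(elements)

# Hypothetical eight-element model, matching the slide's example CS(C) = 8.
example_elements = {"case_plan", "stage1", "stage2", "task_a",
                    "task_b", "task_c", "milestone", "event_listener"}
print(cmmn_size(example_elements))  # 8
print(cmmn_size(set()))             # 0 (null-value property)
```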
- 10. Size metric characteristics. Size CS(C) complies with the Briand et al. framework properties for size metrics. Non-negativity: the size of a model S = ⟨E, R⟩ is non-negative. Null value: the size of a model S = ⟨E, R⟩ is zero if E is empty. Module additivity: the size of a model S = ⟨E, R⟩ is equal to the sum of the sizes of two of its modules m1 = ⟨Em1, Rm1⟩ and m2 = ⟨Em2, Rm2⟩ such that any element of S is in either m1 or in m2.
- 11. Length Metric. Definition: The length of a model C, denoted by CL(C), is defined as the maximum nesting depth of the model. The length CL(C) can be calculated by an algorithm shown on the slide (not reproduced here). Example: CL(C) = 3.
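Since the slide's algorithm is not reproduced in this text, here is a hedged sketch of computing maximum nesting depth. It assumes the containment structure is encoded as a parent-to-children mapping, which is an illustrative encoding rather than the paper's formalization via the scope relationship U:

```python
# CL(C) = maximum nesting depth of the model.
# The containment structure is encoded as a children mapping; this
# encoding is an assumption made for illustration, not the paper's.
def nesting_depth(children, node):
    kids = children.get(node, [])
    if not kids:
        return 1
    return 1 + max(nesting_depth(children, k) for k in kids)

def cmmn_length(children, roots):
    if not roots:
        return 0  # an empty model has length zero (null-value property)
    return max(nesting_depth(children, r) for r in roots)

# A case plan containing a stage that contains two tasks: depth 3,
# matching the slide's example CL(C) = 3.
children = {"case_plan": ["stage1"], "stage1": ["task_a", "task_b"]}
print(cmmn_length(children, ["case_plan"]))  # 3
```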
- 12. Length metric characteristics. Length CL(C) complies with the Briand et al. framework properties for length metrics. Non-negativity: the length of a model S = ⟨E, R⟩ is non-negative. Null value: the length of a model S = ⟨E, R⟩ is zero if E is empty. Non-increasing monotonicity for connected components: adding relationships between elements of a module m does not increase the length of the model S = ⟨E, R⟩. Non-decreasing monotonicity for non-connected components: adding relationships between the elements of two modules m1 and m2 does not decrease the length of the model S = ⟨E, R⟩. Disjoint modules: the length of a model S = ⟨E, R⟩ made up of two disjoint modules m1 and m2 is equal to the maximum of the lengths of the modules m1 and m2.
- 13. Complexity Metric. Definition: The complexity of a model C, denoted by CC(C), is defined as CC(∅) = 0, otherwise CC(C) = Σ_{i ∈ E ∪ A} W_i, where the weight W_i is given by a table of weights. Example: CC(C) = 11.
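The complexity metric is a weighted sum over the model's elements and annotators. A minimal sketch; the weight table below is a placeholder, since the paper's actual weights are given in a table not reproduced on the slide, so this example does not match the slide's CC(C) = 11:

```python
# CC(C) = sum of W_i over all i in E ∪ A; CC(∅) = 0.
# PLACEHOLDER_WEIGHTS is invented for illustration; the paper
# assigns its own weights per element and annotator kind.
PLACEHOLDER_WEIGHTS = {"task": 1, "stage": 2, "milestone": 1, "sentry": 2}

def cmmn_complexity(kinds, weights=PLACEHOLDER_WEIGHTS):
    return sum(weights[k] for k in kinds)

print(cmmn_complexity(["task", "task", "stage", "milestone"]))  # 5
print(cmmn_complexity([]))                                      # 0
```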
- 14. Complexity metric characteristics. Complexity CC(C) complies with the Briand et al. framework properties for complexity metrics. Non-negativity: the complexity of a model S = ⟨E, R⟩ must be non-negative. Null value: the complexity of a model S = ⟨E, R⟩ is zero if R is empty. Symmetry: the complexity of a model S = ⟨E, R⟩ does not depend on the convention chosen to represent the relationships between its elements. Module monotonicity: the complexity of a model S = ⟨E, R⟩ is no less than the sum of the complexities of any two of its modules with no relationships in common. Disjoint module additivity: the complexity of a model S = ⟨E, R⟩ composed of two disjoint modules m1 and m2 is equal to the sum of the complexities of the two modules.
- 15. Properties 1/2. Complexity CC(C) complies with Weyuker's complexity properties. Non-coarseness: a metric should not rank all models as equally complex. Granularity: a metric should rank only a finite number of models with the same complexity. Non-uniqueness: a metric should allow some models to have the same complexity. Design details are important: two distinct but equivalent models that compute the same function need not have the same complexity. Monotonicity: the complexity of two models joined together is greater than or equal to the complexity of either model considered separately.
- 16. Properties 2/2. Complexity CC(C) complies with Weyuker's complexity properties. Nonequivalence of interaction: when two models with the same complexity are each joined to a third model, the resulting complexities may differ. Permutation: complexity should be responsive to the order of statements. Renaming: complexity should not be affected by renaming. Interaction may increase complexity: (∃P)(∃Q)(|P| + |Q| < |P; Q|).
- 17. Findings. Main findings: The formalization of the CMMN model was sufficient to define and validate the metrics. The three proposed metrics comply with the formal framework for software measurements defined by Briand et al. The complexity metric also complies with the properties described by Weyuker. Both Briand et al. and Weyuker assume that software systems are built using a procedural style, based on directed acyclic graphs, which may not be totally applicable to CMMN.
- 18. Future work. Work is required to understand the applicability of Briand et al. and Weyuker to declarative systems. Work is required to conduct the empirical validation for the proposed metrics. CMMN claims an approach based on business artifacts, therefore further work is required to compare the formal CMMN model described in the paper with a formalization of business artifacts. The weights given for the complexity metric CC(C) were assigned based on the intuition of the authors, and further empirical work is needed to fine-tune the weights. Empirical work is needed to understand the influence of CMMN non-visual entities on the complexity metric.
- 19. Thanks. Mike A. Marin, Hugo Lotriet, John A. Van Der Poll.