This document proposes using the Capella modeling tool and ARCADIA framework to model and optimize a distributed avionics system. Specifically, it develops a simplified model of a Distributed Integrated Modular Avionics (DIMA) system in Capella, extracts parameters to specify an optimization problem, and evaluates different cost functions to optimize task allocation and hardware placement for the DIMA architecture. The goal is to demonstrate how model-based systems engineering tools can help automate and improve the design of complex avionics systems.
High Dimensionality Structures Selection for Efficient Economic Big data usin... (IRJET Journal)
This document proposes a new framework for efficient analysis of high-dimensional economic big data using feature selection and k-means clustering algorithms. It introduces challenges in analyzing large volumes of economic data with high dimensionality. The framework combines methods for economic feature selection and model construction to identify patterns for economic development. It uses novel data preprocessing, distributed feature identification to select important indicators, and new econometric models to capture hidden patterns for economic analysis. The results on economic data sets demonstrate superior performance of the proposed methods.
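The pipeline described above (feature selection, then k-means clustering) can be sketched in plain Python. The variance-based selector and the toy interface below are illustrative assumptions; the paper's actual selection and econometric methods are not reproduced here.

```python
import random

def top_variance_features(rows, k):
    """Keep the k columns with highest variance (a simple filter-style selector)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    var = [sum((r[j] - means[j]) ** 2 for r in rows) / n for j in range(d)]
    keep = sorted(range(d), key=lambda j: -var[j])[:k]
    return sorted(keep)

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over lists of floats."""
    rng = random.Random(seed)
    centers = [p[:] for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers
```

Selecting the two most variable indicator columns and then clustering the projected rows groups the low- and high-value economies without ever touching the constant column.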
IRJET- Cloud Cost Analyzer and Optimizer (IRJET Journal)
This document proposes a system to monitor virtual machines (VMs or EC2 instances) on clouds such as Amazon or Google and provide solutions to reduce infrastructure costs from the customer's perspective. The system would monitor EC2 VM usage, performance metrics, and the customer's current cloud cost plan. It aims to optimize resource usage and save costs by proposing reductions to resources or cost plans. The system is designed to build a test bed using an Amazon account to connect to a user's resources and fetch performance data such as RAM and CPU usage. It would then calculate pricing for storage, CPU usage, requests, and other metrics to estimate overall setup costs and find opportunities for cost optimization.
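The kind of cost roll-up and downsizing suggestion the analyzer would perform can be sketched as follows. The rates and the 30% utilization threshold are made-up placeholders, not real AWS prices or a recommendation from the paper.

```python
# Illustrative per-unit rates (invented placeholders, not real cloud prices).
RATES = {"cpu_hour": 0.05, "gb_storage_month": 0.023, "per_million_requests": 0.40}

def estimate_monthly_cost(cpu_hours, storage_gb, requests):
    """Combine fetched usage metrics into a single monthly cost estimate (USD)."""
    cost = (cpu_hours * RATES["cpu_hour"]
            + storage_gb * RATES["gb_storage_month"]
            + requests / 1_000_000 * RATES["per_million_requests"])
    return round(cost, 2)

def suggest_downsize(avg_cpu_util, threshold=0.3):
    """Flag an instance as oversized when its average CPU utilization is low."""
    return avg_cpu_util < threshold
```

A month of one busy instance (720 CPU hours, 100 GB storage, 2 million requests) rolls up to a single figure the optimizer can compare against the customer's current plan.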
Energy-Efficient Task Scheduling in Cloud Environment (IRJET Journal)
1. The document discusses developing an energy-efficient task scheduling approach for cloud data centers using deep reinforcement learning.
2. It aims to minimize computational costs and cooling costs by optimizing task assignment to servers based on factors like temperature, CPU, and memory.
3. The proposed approach uses a greedy algorithm to schedule tasks to servers maintaining the lowest temperature, thus reducing energy consumption and improving data center performance.
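The greedy policy in point 3 can be sketched directly: each incoming task goes to the server currently reporting the lowest temperature, and running a task warms that server. The server names, initial temperatures, and the fixed heating increment are illustrative assumptions, not the paper's thermal model.

```python
def schedule_greedy(tasks, servers, heat_per_task=2.0):
    """Assign each task to the coolest server; running a task warms it.

    servers: mapping of server name -> current temperature (deg C).
    Returns (task -> server assignment, final temperatures).
    """
    assignment = {}
    temps = dict(servers)
    for task in tasks:
        coolest = min(temps, key=temps.get)  # server with lowest temperature
        assignment[task] = coolest
        temps[coolest] += heat_per_task      # naive linear heating model
    return assignment, temps
```

Because each placement raises the chosen server's temperature, load naturally spreads away from hot spots, which is the energy-saving intuition behind the approach.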
RESILIENT INTERFACE DESIGN FOR SAFETY-CRITICAL EMBEDDED AUTOMOTIVE SOFTWARE (csandit)
The replacement of the former, purely mechanical, functionality with mechatronics-based solutions, the introduction of new propulsion technologies, and the connection of cars to their environment are just a few reasons for the continuously increasing electrical and/or electronic system (E/E system) complexity in modern passenger cars. Smart methodologies and techniques have been introduced in system development to cope with these new challenges. A topic that is often neglected is the definition of the interface between the hardware and software subsystems. However, during the development of safety-critical E/E systems, according to the automotive functional safety standard ISO 26262, an unambiguous definition of the hardware-software interface (HSI) has become vital. This paper presents a domain-specific modelling approach for mechatronic systems with an integrated hardware-software interface definition feature. The newly developed model-based domain-specific language is tailored to the needs of mechatronic system engineers and supports the system’s architectural design including the interface definition, with a special focus on safety-criticality.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... (IRJET Journal)
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
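The particle-swarm core underlying HMSO can be sketched as a minimal single-swarm optimizer over a cost function (for instance, a response-time proxy over an allocation vector). The multi-level, multi-swarm layering the paper adds is not reproduced here, and the inertia and acceleration coefficients below are conventional PSO defaults, not the paper's tuning.

```python
import random

def pso_min(cost, dim, bounds, particles=20, iters=60, seed=1):
    """Minimal single-swarm PSO minimizing cost over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]              # each particle's best position
    gbest = min(pbest, key=cost)[:]         # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - x[d])
                            + 1.5 * r2 * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))  # clamp to bounds
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
                if cost(x) < cost(gbest):
                    gbest = x[:]
    return gbest
```

On a smooth proxy cost the swarm converges quickly; a multi-swarm variant would run several such swarms and exchange their best positions.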
ABC Architecture A New Approach To Build Reusable And Adaptable Business Tie... (Joshua Gorinson)
1) The document proposes a new ABC Architecture for building reusable and adaptable business tier components based on static business interfaces.
2) The architecture uses Call Level Interfaces as a low-level API to communicate with databases, and implements type-safe interfaces to communicate with client applications based on SQL statement schemas.
3) ABC components can dynamically accept, remove, and update SQL statements at runtime from authorized entities, and execute the statements on behalf of client applications through the interfaces.
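The runtime statement management in point 3 can be sketched as a component holding named SQL statements that authorized entities may accept, update, or remove, and that clients invoke by name. Authorization and execution are stubbed, and the class and method names are illustrative, not the paper's interfaces.

```python
class BusinessTierComponent:
    """Toy sketch: manages named SQL statements on behalf of clients."""

    def __init__(self, authorized):
        self._authorized = set(authorized)
        self._statements = {}  # statement name -> SQL text

    def _check(self, entity):
        if entity not in self._authorized:
            raise PermissionError(f"{entity} may not manage statements")

    def accept(self, entity, name, sql):
        """Accept or update a statement at runtime (authorized entities only)."""
        self._check(entity)
        self._statements[name] = sql

    def remove(self, entity, name):
        self._check(entity)
        del self._statements[name]

    def execute(self, name, params):
        # A real component would run this through a Call Level Interface
        # (e.g. JDBC); here we just return what would be executed.
        return f"EXEC {self._statements[name]} WITH {params}"
```

Clients never handle SQL directly; they call `execute` with a statement name and parameters, which is the decoupling the architecture aims for.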
This document summarizes a research paper that presents a methodology for cycle-accurate simulation of energy dissipation in embedded systems. The methodology tightly couples component models to enable accurate estimates of performance and energy consumption within 5% of hardware measurements. The simulator can be used to explore hardware design alternatives and estimate the impact of software changes. It also includes a profiler to relate energy consumption to source code, allowing quick identification and evaluation of energy-efficient software optimizations. The tools were tested on an industrial application called the SmartBadge, reducing energy consumption by 77% for an MP3 audio decoder example.
PHP modernization approach generating KDM models from PHP legacy code (journalBEEI)
With the rise of new web technologies such as Web 2.0, jQuery, and Bootstrap, modernizing legacy web systems to benefit from the advantages of these technologies is increasingly relevant. Migrating a system from one environment to another is a time- and effort-consuming process, as it involves a complete rewrite of the application adapted to the target platform. To carry out this migration in an automated and standardized way, many approaches have tried to define standardized engineering processes. Architecture Driven Modernization (ADM) defines an approach to standardize and automate the reengineering process. We defined an ADM approach to represent PHP web applications as models at the highest level of abstraction, using software artifacts as the entry point. This paper describes the extraction process, which permits discovery and understanding of the legacy system and generates models that represent the system in an abstract way.
Matlab Based High Level Synthesis Engine for Area And Power Efficient Arithme... (ijceronline)
Embedded systems used in real-time applications require low power, small area, and high computation speed. For digital signal processing (DSP), image processing, and communication applications, data are often received at a continuously high rate. Embedded processors have to cope with this high data rate and process the incoming data based on specific application requirements. Even though there are many different application domains, they all require arithmetic operations that quickly compute the desired values using a large range of operation, reconfigurable behavior, low power, and high precision. The type of necessary arithmetic operations may vary greatly among different applications. The RTL-based design and verification of one or more of these functions may be time-consuming. Some High Level Synthesis tools reduce this design and verification time but may not be optimal or suitable for low-power applications. The developed MATLAB-based Arithmetic Engine improves design time and shortens the verification process; the key point is to use a unified design that combines some of the basic operations with more complex operations to reduce area and power consumption. The results indicate that using the Arithmetic Engine, from a simple design to more complex systems, can improve design time by reducing verification time by up to 62%. The MATLAB-based Arithmetic Engine generates structural RTL code and a testbench, and gives designers more control. The MATLAB-based design and verification engine uses optimized algorithms for better accuracy at a better throughput.
Computer Aided Process Planning Using Neutral File Step for Rotational Parts (RSIS International)
The present investigation concerns computer-aided process planning for rotational parts using a neutral STEP-format file. CAE systems involved in every stage of the product life cycle mainly use the product data produced by CAD systems and the integrated manufacturing data produced by CAPP and CAM systems. As the degree of automation and CAD/CAM integration increases, including high-level information with the product data and ensuring its seamless flow in the CAD-CAM-CNC chain becomes a necessity. The objective of this work is to develop a Computer Aided Process Planning system for rotational parts using the ISO 10303 STEP AP224 data exchange file, enabling the inclusion of high-level information about the product besides geometry. The developed system aims to incorporate small and medium-sized manufacturing enterprises into the e-manufacturing chain by adopting NC-code based CNC machine tools without any modification of the controllers.
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
This document proposes an approach to evaluate software performance using the blackboard technique at the software architecture level. It begins by describing blackboard technique, performance modeling in UML, and timed colored Petri nets. It then outlines an algorithm to convert a UML model of a software architecture using blackboard technique into an executable timed colored Petri net model. This would allow evaluating non-functional requirements like response time at the architecture level before implementation. As a case study, it applies the method to a hotel reservation system modeled with UML diagrams and implemented using the blackboard technique. The performance is then evaluated by analyzing the resulting timed colored Petri net model.
This document is a project report submitted by three students (Amit Kumar, Ankit Singh, and Sushant Bhadkamkar) for their Bachelor of Engineering degree in Computer Science. The report describes their work on a parallel computing cluster called Parallex. Parallex aims to create a high-performance computing system without requiring modifications to operating system kernels. It allows different operating systems and processor architectures to work together in parallel without using existing parallel libraries. The students implemented new distribution algorithms and parallel algorithms for Parallex to make administration and usage simple while maintaining efficiency.
Hybrid Task Scheduling Approach using Gravitational and ACO Search Algorithm (IRJET Journal)
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
Integrating profiling into MDE compilers (ijseajournal)
Scientific computation requires ever more performance from its algorithms. New massively parallel architectures suit these algorithms well, as they are known for offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and take advantage of architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored, aiming for better performance in the regenerated code. Hence, this work keeps model and code coherent while harnessing the power of parallel architectures. To illustrate and clarify key points of this approach, we provide an experimental example in the GPU context. The example uses a transformation chain from UML-MARTE models to OpenCL code.
A framework for ERP systems in SME based on cloud computing technology (ijccsa)
This document proposes a framework for implementing ERP systems for SMEs using cloud computing technology. It begins with an introduction discussing issues with current ERP systems and how cloud computing could address them. It then reviews background literature on ERP systems and cloud computing. The objectives of the research are outlined as comparing ERP before and after moving to cloud, proposing a generic cloud-based ERP framework for SMEs, and testing the framework. A case study of a company called Awal is discussed for evaluating the proposed framework.
The document presents a Petri net model for hardware/software codesign. Petri nets are used as an intermediate model to allow for formal qualitative and quantitative analysis in order to perform hardware/software partitioning. Quantitative metrics like load balance, communication cost, and mutual exclusion degree are computed from the Petri net model to guide the initial allocation and partitioning process. The approach also estimates hardware area and considers multiple software components in the partitioning method.
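The quantitative metrics named above can be illustrated on a simpler structure than the paper's Petri net: given a task graph with weighted edges and a candidate hardware/software partition, communication cost sums the weights of boundary-crossing edges, and load balance compares the work assigned to each side. The graph encoding below is an assumption for illustration, not the paper's formulation.

```python
def communication_cost(edges, partition):
    """Sum the weights of edges that cross the HW/SW boundary.

    edges: iterable of (u, v, weight); partition: node -> "HW" or "SW".
    """
    return sum(w for u, v, w in edges if partition[u] != partition[v])

def load_balance(node_cost, partition):
    """|HW load - SW load| / total load; 0.0 means perfectly balanced."""
    hw = sum(c for n, c in node_cost.items() if partition[n] == "HW")
    sw = sum(c for n, c in node_cost.items() if partition[n] == "SW")
    total = hw + sw
    return abs(hw - sw) / total if total else 0.0
```

A partitioning heuristic would evaluate these metrics for candidate allocations and prefer low communication cost at comparable balance.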
#SiriusCon 2015: Talk by Christophe Boudjennah "Experimenting the Open Source..." (Obeo)
Capella is a Model Based Systems Engineering (MBSE) solution using Sirius for its diagrams rendering.
It was initially developed in-house by Thales and has been open sourced (in Polarsys) within the context of the CLARITY project. This was the very first step of CLARITY, which aims at developing and structuring an international ecosystem around Capella. The CLARITY project now investigates customization capabilities for Capella and aims at complementing the ecosystem with a community that brings together major actors of the entire engineering value chain (industrials, integrators, technology providers and consultants, academia) for open innovation in MBSE within Capella.
In this context, Areva and Airbus Defence & Space have already carried out many experiments and are helping the ecosystem mature by providing feedback to the community. In this talk, you will get an overview of what those two industrial companies have realized so far.
[About Christophe Boudjennah:
Christophe is a senior system/software architect and project manager. His experience has led him to work in various domains such as defense, IT, and the automotive industry. Most of his career has been focused on systems engineering for complex embedded systems, whether from the "methods and tools provider" point of view or from the operational one. He now works for Obeo, dealing with various open source and systems engineering related topics. One of his current main responsibilities is to coordinate CLARITY, a large R&D project whose purpose is to open-source Capella (an industrial workbench for systems engineering).]
DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUD (ijccsa)
The advent of container orchestration and cloud computing, along with the associated security and compliance complexities, makes it challenging for enterprises to develop robust, secure, manageable, and extendable architectures applicable to both public and private clouds. The main challenges stem from the fact that on-premises private cloud and third-party public cloud services often have seemingly different and sometimes conflicting requirements for tenant provisioning, service deployment, security, and compliance, which can lead to rather different architectures that still share many commonalities but evolve independently. Understanding and bridging the functionality gaps between such architectures is highly desirable in terms of common approaches, API/SPI, as well as maintainability and extendibility. The authors discuss and propose common architectural approaches to dynamic tenant provisioning and service orchestration in public, private, and hybrid clouds, focusing on the deployment, security, compliance, scalability, and extendibility of stateful Kubernetes runtimes.
IRJET- Machine Learning Techniques for Code Optimization (IRJET Journal)
This document summarizes research on using machine learning techniques for code optimization. It discusses how machine learning can help address two main compiler optimization problems: optimization selection and phase ordering. It provides an overview of supervised and unsupervised machine learning approaches that have been used, including linear models, decision trees, clustering, and evolutionary algorithms. Key papers applying these techniques to problems like optimization selection, phase ordering, and code compression are summarized. The document concludes that machine learning is increasingly being applied to compiler optimization problems to develop intelligent heuristics with minimal human input.
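The "optimization selection" problem above can be illustrated with a toy supervised approach: represent each previously compiled program as a feature vector, remember the flag set that worked best for it, and pick flags for a new program by nearest neighbour. The features and flag strings below are invented for illustration and do not come from the surveyed papers.

```python
def select_flags(train, features):
    """Pick the optimization flags of the nearest training program.

    train: list of (feature_vector, flag_string) pairs from past compilations.
    features: feature vector of the program being compiled.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    _, flags = min(train, key=lambda item: dist(item[0], features))
    return flags
```

Real systems replace the nearest-neighbour lookup with linear models, decision trees, or evolutionary search, but the framing (program features in, optimization decision out) is the same.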
A Reconfigurable Component-Based Problem Solving Environment (Sheila Sinclair)
This technical report describes a reconfigurable component-based problem solving environment called DISCWorld. The key features discussed are:
1) DISCWorld uses a data flow model represented as directed acyclic graphs (DAGs) of operators to integrate distributed computing components across networks.
2) It supports both long running simulations and parameter search applications by allowing complex processing requests to be composed graphically or through scripting and executed on heterogeneous platforms.
3) Operators can be simple "pure Java" implementations or wrappers to fast platform-specific implementations, and some operators may represent sub-graphs that can be reconfigured to run across multiple servers for faster execution.
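The DAG-of-operators execution model in point 1 can be sketched as nodes evaluated in dependency order, each operator consuming its input nodes' results. Here operators are plain Python callables; the node names and the single-process evaluator are illustrative stand-ins for DISCWorld's distributed execution.

```python
def run_dag(nodes, deps, ops):
    """Evaluate a DAG of operators in dependency order.

    nodes: all node names; deps: node -> list of input nodes;
    ops: node -> callable taking the input nodes' results.
    Assumes the graph is acyclic (a cycle would loop forever).
    """
    done, results = set(), {}
    while len(done) < len(nodes):
        for n in nodes:
            if n not in done and all(d in done for d in deps.get(n, [])):
                results[n] = ops[n](*[results[d] for d in deps.get(n, [])])
                done.add(n)
    return results
```

In the real system an operator might be a wrapper around a fast native implementation or a sub-graph scheduled onto another server; the data-flow contract stays the same.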
Model-Driven Architecture for Cloud Applications Development, A survey (Editor IJCATR)
Model Driven Architecture and cloud computing are among the most important paradigms in software service engineering nowadays. As cloud computing continues to gain adoption, its dynamic usage introduces more issues and challenges for many systems. The Model Driven Architecture (MDA) approach to development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible, and agile.
This paper aims to survey and analyze the research issues and challenges that have been emerging in cloud computing applications, with a focus on using Model Driven Architecture (MDA) development. We discuss the open research issues and highlight future research problems.
This document provides a review of simulation techniques for parallel and distributed computing. It discusses several key topics:
1) It defines parallel computing, distributed computing, and parallel and distributed computing systems. Various classification schemes for parallel and distributed systems are also described.
2) It examines several modeling techniques for parallel and distributed systems including system modeling, network modeling, performance modeling, and mathematical modeling. It provides details on parallel discrete event simulation.
3) It reviews several simulation software tools used for modeling parallel and distributed systems including SimOS, SimJava, and MicroGrid.
4) It concludes with a focused discussion on cloud computing as the latest development in parallel and distributed computing.
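The parallel discrete event simulation mentioned in point 2 builds on a sequential core: a time-ordered event queue whose handlers may schedule further events. The sketch below shows only that sequential core (parallel DES additionally partitions the queue across processes and synchronizes timestamps); the event kinds are illustrative.

```python
import heapq

def simulate(events, handlers):
    """Tiny discrete event core.

    events: initial list of (time, kind, data) tuples.
    handlers: kind -> function(time, data) returning new events to schedule.
    Returns the (time, kind) log in simulated-time order.
    """
    queue = list(events)
    heapq.heapify(queue)           # min-heap ordered by event time
    log = []
    while queue:
        time, kind, data = heapq.heappop(queue)
        log.append((time, kind))
        for new in handlers.get(kind, lambda t, d: [])(time, data):
            heapq.heappush(queue, new)
    return log
```

An arrival that schedules its own departure five time units later produces a two-event trace, which is the basic pattern queueing-style system models are built from.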
An Adjacent Analysis of the Parallel Programming Model Perspective: A SurveyIRJET Journal
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
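The BSP cost model mentioned above is concrete enough to compute: a superstep costs its maximum local work plus g times its maximum message volume plus the barrier latency l, and a program costs the sum over supersteps. The data-structure encoding below is an illustrative choice.

```python
def bsp_cost(supersteps, g, l):
    """Cost of a BSP program under the standard model: sum over supersteps of
    max local work w, plus g * max messages h, plus barrier latency l.

    supersteps: list of (w_per_processor, h_per_processor) pairs of sequences.
    g: per-message communication cost; l: barrier synchronization cost.
    """
    return sum(max(w) + g * max(h) + l for w, h in supersteps)
```

Because each superstep is charged for its slowest processor, the model directly rewards balanced work and communication, which is the comparison the survey draws against PRAM's free communication.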
This document summarizes a research paper that presents a methodology for cycle-accurate simulation of energy dissipation in embedded systems. The methodology tightly couples component models to enable accurate estimates of performance and energy consumption within 5% of hardware measurements. The simulator can be used to explore hardware design alternatives and estimate the impact of software changes. It also includes a profiler to relate energy consumption to source code, allowing quick identification and evaluation of energy-efficient software optimizations. The tools were tested on an industrial application called the SmartBadge, reducing energy consumption by 77% for an MP3 audio decoder example.
PHP modernization approach generating KDM models from PHP legacy codejournalBEEI
With the rise of new web technologies such as web 2.0, Jquery, Bootstrap. Modernizing legacy web systems to benefit from the advantages of the new technologies is more and more relevant. The migration of a system from an environment to another is a time and effort consuming process, it involves a complete rewrite of the application adapted to the target platform. To realize this migration in an automated and standardized way, many approaches have tried to define standardized engineering processes. Architecture Driven Modernization (ADM) defines an approach to standardize and automate the reengineering process. We defined an ADM approach to represent PHP web applications in the highest level of abstraction models. To do this, we have used software artifacts as a entry point . This paper describes the extraction process, which permits discovering and understanding of the legacy system. And generate models to represent the system in an abstract way.
Matlab Based High Level Synthesis Engine for Area And Power Efficient Arithme...ijceronline
Embedded systems used in real-time applications require low power, less area and a high computation speed. For digital signal processing (DSP), image processing and communication applications, data are often received at a continuously high rate. Embedded processors have to cope with this high data rate and process the incoming data based on specific application requirements. Even though there are many different application domains, they all require arithmetic operations that quickly compute the desired values using a larger range of operation, reconfigurable behavior, low power and high precision. The type of necessary arithmetic operations may vary greatly among different applications. The RTL-based design and verification of one or more of these functions may be time-consuming. Some High Level Synthesis tools reduce this design and verification time but may not be optimal or suitable for low power applications. The developed MATLAB-based Arithmetic Engine improves design time and reduces the verification process, but the key point is to use a unified design that combines some of the basic operations with more complex operations to reduce area and power consumption. The results indicate that using the Arithmetic Engine from a simple design to more complex systems can improve design time by reducing the verification time by up to 62%. The MATLAB-based Arithmetic Engine generates structural RTL code, a testbench, and gives the designers more control. The MATLAB-based design and verification engine uses optimized algorithms for better accuracy at a better throughput.
Computer Aided Process Planning Using Neutral File Step for Rotational PartsRSIS International
This investigation addresses computer-aided process planning for rotational parts using a neutral file in the STEP format. CAE systems involved in every stage of the product life cycle mainly use the product data produced by CAD systems and the integrated manufacturing data produced by CAPP and CAM systems. As the degree of automation and CAD/CAM integration increases, the inclusion of high-level information with the product data and its seamless flow in the CAD-CAM-CNC chain becomes a necessity. The objective of this work is to develop a Computer Aided Process Planning system for rotational parts using the ISO 10303 standard STEP AP224 data exchange file, enabling the inclusion of high-level information about the product besides geometry. The developed system aims to incorporate small and medium-sized manufacturing enterprises into the e-manufacturing chain by adopting NC-code based CNC machine tools without any modification of the controllers.
Performance Evaluation using Blackboard Technique in Software ArchitectureEditor IJCATR
This document proposes an approach to evaluate software performance using the blackboard technique at the software architecture level. It begins by describing blackboard technique, performance modeling in UML, and timed colored Petri nets. It then outlines an algorithm to convert a UML model of a software architecture using blackboard technique into an executable timed colored Petri net model. This would allow evaluating non-functional requirements like response time at the architecture level before implementation. As a case study, it applies the method to a hotel reservation system modeled with UML diagrams and implemented using the blackboard technique. The performance is then evaluated by analyzing the resulting timed colored Petri net model.
This document is a project report submitted by three students (Amit Kumar, Ankit Singh, and Sushant Bhadkamkar) for their Bachelor of Engineering degree in Computer Science. The report describes their work on a parallel computing cluster called Parallex. Parallex aims to create a high-performance computing system without requiring modifications to operating system kernels. It allows different operating systems and processor architectures to work together in parallel without using existing parallel libraries. The students implemented new distribution algorithms and parallel algorithms for Parallex to make administration and usage simple while maintaining efficiency.
Hybrid Task Scheduling Approach using Gravitational and ACO Search AlgorithmIRJET Journal
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
Integrating profiling into mde compilersijseajournal
Scientific computation requires ever more performance from its algorithms. New massively parallel architectures suit these algorithms well and are known for offering high performance and power efficiency. Unfortunately, since parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored aiming at better performance in the re-generated code. Hence, this work keeps coherence between model and code while harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
A framework for ERP systems in sme based On cloud computing technologyijccsa
This document proposes a framework for implementing ERP systems for SMEs using cloud computing technology. It begins with an introduction discussing issues with current ERP systems and how cloud computing could address them. It then reviews background literature on ERP systems and cloud computing. The objectives of the research are outlined as comparing ERP before and after moving to cloud, proposing a generic cloud-based ERP framework for SMEs, and testing the framework. A case study of a company called Awal is discussed for evaluating the proposed framework.
The document presents a Petri net model for hardware/software codesign. Petri nets are used as an intermediate model to allow for formal qualitative and quantitative analysis in order to perform hardware/software partitioning. Quantitative metrics like load balance, communication cost, and mutual exclusion degree are computed from the Petri net model to guide the initial allocation and partitioning process. The approach also estimates hardware area and considers multiple software components in the partitioning method.
#SiriusCon 2015: Talk by Christophe Boudjennah "Experimenting the Open Source...Obeo
Capella is a Model Based Systems Engineering (MBSE) solution using Sirius for its diagrams rendering.
It was initially developed in-house by Thales and has been open-sourced (in PolarSys) within the context of the CLARITY project. This was the very first step of CLARITY, which aims at developing and structuring an international ecosystem around Capella. The CLARITY project now investigates customization capabilities for Capella and aims at complementing the ecosystem with a community that brings together major actors of the entire engineering value chain (industrials, integrators, technology providers and consultants, academia) for open innovation in MBSE within Capella.
In this context, Areva and Airbus Defence & Space have already carried out many experiments and are helping the ecosystem mature by providing feedback to the community. In this talk, you will get an overview of what these two industrial companies have achieved so far.
[About Christophe Boudjennah:
Christophe is a senior system/software architect and project manager. His experience has led him to work in various domains such as defense, IT and the automotive industry. Most of his career has been focused on Systems Engineering for complex embedded systems, whether from the "methods and tools provider" point of view or from the operational one. He is now working for Obeo, dealing with various open-source and systems-engineering-related topics. One of his current main responsibilities is to be the project coordinator of CLARITY, a large R&D project whose purpose is to open-source Capella (an industrial workbench for systems engineering).]
DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUDijccsa
The advent of container orchestration and cloud computing, as well as the associated security and compliance complexities, makes it challenging for enterprises to develop robust, secure, manageable and extendable architectures applicable to both public and private clouds. The main challenges stem from the fact that on-premises, private cloud and third-party, public cloud services often have seemingly different and sometimes conflicting requirements for tenant provisioning, service deployment, security and compliance, which can lead to rather different architectures that still have a lot of commonalities but evolve independently. Understanding and bridging the functionality gaps between such architectures is highly desirable in terms of common approaches, API/SPI, maintainability and extendibility. The authors discuss and propose common architectural approaches to dynamic tenant provisioning and service orchestration in public, private and hybrid clouds, focusing on deployment, security, compliance, scalability and extendibility of stateful Kubernetes runtimes.
IRJET- Machine Learning Techniques for Code OptimizationIRJET Journal
This document summarizes research on using machine learning techniques for code optimization. It discusses how machine learning can help address two main compiler optimization problems: optimization selection and phase ordering. It provides an overview of supervised and unsupervised machine learning approaches that have been used, including linear models, decision trees, clustering, and evolutionary algorithms. Key papers applying these techniques to problems like optimization selection, phase ordering, and code compression are summarized. The document concludes that machine learning is increasingly being applied to compiler optimization problems to develop intelligent heuristics with minimal human input.
A Reconfigurable Component-Based Problem Solving EnvironmentSheila Sinclair
This technical report describes a reconfigurable component-based problem solving environment called DISCWorld. The key features discussed are:
1) DISCWorld uses a data flow model represented as directed acyclic graphs (DAGs) of operators to integrate distributed computing components across networks.
2) It supports both long running simulations and parameter search applications by allowing complex processing requests to be composed graphically or through scripting and executed on heterogeneous platforms.
3) Operators can be simple "pure Java" implementations or wrappers to fast platform-specific implementations, and some operators may represent sub-graphs that can be reconfigured to run across multiple servers for faster execution.
Model-Driven Architecture for Cloud Applications Development, A survey Editor IJCATR
Model Driven Architecture and cloud computing are among the most important paradigms in software service engineering nowadays. As cloud computing continues to gain adoption, its dynamic usage introduces more issues and challenges for many systems. The Model Driven Architecture (MDA) approach to development and maintenance becomes an evident choice for ensuring software solutions that are robust, flexible and agile.
This paper aims to survey and analyze the research issues and challenges that have been emerging in cloud computing applications, with a focus on using Model Driven Architecture (MDA) development. We discuss the open research issues and highlight future research problems.
This document provides a review of simulation techniques for parallel and distributed computing. It discusses several key topics:
1) It defines parallel computing, distributed computing, and parallel and distributed computing systems. Various classification schemes for parallel and distributed systems are also described.
2) It examines several modeling techniques for parallel and distributed systems including system modeling, network modeling, performance modeling, and mathematical modeling. It provides details on parallel discrete event simulation.
3) It reviews several simulation software tools used for modeling parallel and distributed systems including SimOS, SimJava, and MicroGrid.
4) It concludes with a focused discussion on cloud computing as the latest development in parallel and distributed computing.
Genetic algorithms have been used to cope with the complexity of DIMA design [7]. These methods are computationally efficient for calculating the Pareto front. They are bio-inspired algorithms that simulate the natural selection, mutation and crossover processes.
To compose the DIMA allocation problem, many cost functions are found in the literature. The performance measures used by the optimization heuristics include mass, ship-set costs, operational interruption costs and initial provisioning costs. There are also metrics related to end-to-end delay and resource consumption [3-8].
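To make the optimization framing concrete, the sketch below filters a set of hypothetical candidate allocations down to their Pareto front under two of the metrics mentioned above (mass and end-to-end delay). The candidate values and metric pairing are invented for illustration; they are not taken from the cited works.

```python
# Illustrative sketch: Pareto filtering of candidate DIMA allocations.
# The metric tuples (mass in kg, worst-case end-to-end delay in ms)
# are hypothetical, not taken from the cited literature.

def dominates(a, b):
    """True if solution a is no worse than b on every metric and
    strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

candidates = [(120.0, 4.0), (95.0, 7.5), (95.0, 9.0), (140.0, 3.5), (110.0, 5.0)]
front = pareto_front(candidates)
print(sorted(front))  # the non-dominated mass/delay trade-offs
```

A genetic algorithm would repeat such a dominance-based selection over successive mutated and crossed-over populations; the dominance test itself is the same.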
III. AVIONICS SYSTEMS
Traditional avionics systems are designed and built using federated architectures. In this approach, each function is carried out by a line replaceable unit (LRU) in the avionics platform. Information sharing in a federated architecture is implemented using dedicated interfaces. Figure 1 shows an example of a federated architecture: each blue rectangle represents an LRU and each arrow represents a communication interface between the LRUs. These interfaces are usually implemented as a serial communication bus or a discrete line.
Figure 1 - Federated Architecture
Adding a new function to a federated architecture is usually done by integrating a new LRU. This process includes deploying dedicated wiring to provide the required communication for each interface.
Aerospace manufacturers have used the federated architecture approach for several decades, and it is still largely used by legacy systems. New, stringent market requirements push the aeronautical industry to deliver solutions under constantly increasing weight, power and cost constraints. Moreover, customers demand product customization and integrated solutions delivered on a challenging time-to-market schedule.
The Distributed Integrated Modular Avionics (DIMA) architecture has been developed to provide a better approach to these severe constraints. Figure 2 shows a simplified view of a DIMA architecture.
Figure 2. Integrated Modular Avionics
Each LRU in a DIMA architecture can fulfill more than one function, within different partitions with different design assurance levels. The different systems and functions share a common communication bus. These shared resources allow a reduction in the weight and power of the avionics systems: the number of LRUs and the quantity of wiring can be reduced. Another advantage of the DIMA architecture is the optimization of spare computational time, which cannot be shared in a federated architecture. DIMA also provides the capability to deploy highly integrated system functions based on shared data. The main benefit of DIMA is better resource management in comparison to the federated architecture approach.
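The spare-time argument can be illustrated with a back-of-the-envelope calculation. In a federated architecture a new function must fit on a single LRU, while a DIMA platform can, in principle, draw on the pooled spare capacity of all modules. The utilization figures below are invented, and the sketch assumes the added workload can be partitioned across modules.

```python
# Illustrative only: invented utilization figures showing why pooled
# spare capacity can host a function that no single federated LRU can.

lru_loads = [0.80, 0.85, 0.90]   # per-LRU CPU utilization (fraction of capacity)
new_function = 0.30              # load of a function to be added

# Federated: the new function must fit entirely on one existing LRU.
fits_federated = any(load + new_function <= 1.0 for load in lru_loads)

# DIMA (assumed partitionable workload): the pooled spare time is usable.
pooled_spare = sum(1.0 - load for load in lru_loads)
fits_dima = new_function <= pooled_spare

print(fits_federated, fits_dima)
```

Here no single LRU has 30% headroom, yet the platform as a whole has about 45% spare capacity, which is exactly the margin a DIMA integration can exploit.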
Traditionally, the aerospace industry describes system functions as textual requirements, and these functions are usually allocated to systems manually. With the increasing complexity of modern aircraft, the design space grows exponentially. This scenario leads to a high cost for exploring different solutions, and the development team generally focuses on finding any feasible solution.
New processes and tools should be developed to manage the complexity of the high integration level of DIMA architectures. There are several approaches that solve the function and physical allocation problem using optimization techniques. These methods are based on a system function model breakdown and a physical model definition.
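As a minimal sketch of these two inputs, the snippet below pairs a hypothetical functional breakdown (functions with CPU loads) with a hypothetical physical model (modules with capacities) and enumerates the feasible function-to-module allocations by brute force. All names and numbers are invented, and a real method would add many more constraints (design assurance levels, bandwidth, segregation).

```python
from itertools import product

# Hypothetical inputs, not taken from the paper's model:
functions = {"nav": 0.4, "display": 0.3, "comm": 0.5}   # function -> CPU load
modules = {"lru1": 1.0, "lru2": 0.8}                    # module -> capacity

def feasible(assignment):
    """Check that no module's capacity is exceeded by its assigned functions."""
    used = {m: 0.0 for m in modules}
    for func, mod in assignment.items():
        used[mod] += functions[func]
    return all(used[m] <= modules[m] for m in modules)

# Exhaustive search over every function-to-module assignment.
names = list(functions)
solutions = [dict(zip(names, combo))
             for combo in product(modules, repeat=len(names))
             if feasible(dict(zip(names, combo)))]
print(len(solutions), "feasible allocations out of", len(modules) ** len(names))
```

Exhaustive enumeration only works at toy scale; the exponential growth of this search space is precisely why the heuristics discussed earlier (e.g. genetic algorithms) are used in practice.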
In this paper we propose a method to help DIMA architects define the necessary models to build an optimized architecture.
IV. MBSE, ARCADIA AND CAPELLA
Thales has developed the ARCADIA MBSE method for architectural design. The method focuses on complex architecture definition, functional analysis and early evaluation [1][2]. Figure 3 describes the ARCADIA engineering phases. The first layer is dedicated to operational analysis (OA), analyzing customer needs and goals, expected missions and activities, beyond system requirements. The outputs are an operational architecture describing and structuring this need in terms of actors/users, their operational capabilities and activities, and operational scenarios. The second design level is the system analysis (SA), which focuses on the system perimeter to define how it can satisfy the operational needs. The outputs of this stage are a functional analysis describing the need, the interactions with users and external systems, and system requirements.
Figure 3. ARCADIA General flow
The logical architecture (LA) layer describes the solution
that is the architectural design. The physical architecture (PA)
refers to the selection of physical components to compose the system of interest. The Arcadia method promotes the use of
multi-viewpoints enabling evaluation of the architecture
according to different stakeholders. A viewpoint is a set of
specific constraints, figures of merit and analysis rules defined
by specialists. Multi-viewpoints are used by architects to
orchestrate trade-offs between different technology domains to
achieve a common feasible solution.
The logical architecture intends to identify the system
building blocks, their functional contents, relationships and
properties, excluding implementation or technical and
technological issues. The resulting component breakdown and
interfaces are the best compromise between functional
allocation and integration of all major non-functional
constraints and design drivers. The physical architecture layer
makes the logical architecture evolve according to
implementation, technical and technological constraints and
choices. It introduces rationalization, architectural patterns,
new technical functions and components.
The ARCADIA framework also provides a domain
specific language (DSL) similar to UML/SysML and NAF
standards. The DSL ensures the communication between the
different stakeholders. Moreover, the DSL is suited to process
large models and it helps in the automatic transition to the
following model level.
The Capella software is an ARCADIA dedicated modeling
workbench. It provides a guided and iterative experience through the ARCADIA process. For each model change, Capella automatically propagates the information to all model elements, keeping all instances synchronized. The tool also
helps during the transition between the different modeling
phases providing an automatic and incremental transition.
The collaboration between the different specialties is
achieved by constructing Capella viewpoints. A viewpoint is
the formal specification of a system constraint and it is
propagated to different model levels with automatic
traceability. Using a viewpoint allows the system designer to
perform an impact analysis of a specific constraint. Cost, mass, power and safety are examples of constraints that can be analyzed using viewpoints.
V. CAPELLA MODELLING
The project we are developing is intended to achieve a
seamless integration of the different tools used along the
design chain. This synthesis process starts at requirement analysis and goes all the way to finding solutions that are optimal for the specific project. This flow needs interactions of specific tools dedicated to each step of the process [9][10]. In this paper we develop a simplified
system model to demonstrate the concepts of integrating
optimization tools with model based systems engineering.
The first step is to develop a DIMA model using the
ARCADIA process and the Capella tool. The modeling
process starts with an operational analysis of the DIMA system in order to identify the operational actors and their operational activities. The simplified operational model is shown in Figure
4.
Figure 4. DIMA Operational Architecture model
The operational architecture model describes the user
needs and activities. The operational activity ‘Define route’ expresses the pilot’s need for requesting a new direction to follow. This need is expressed to the operational entity Aircraft using the interaction ‘Requested route’. Once the route is defined, the
pilot needs to monitor the route followed by the aircraft.
Once the operational analysis is finished, Capella can automatically export the developed model to the next step, i.e. the system analysis phase. Figure 5 shows the system
architecture model where it is defined what the system needs
to do in order to comply with operational activities. This
model defines the main functions performed by the system and
its interfaces.
Figure 5. System Architecture – High Level functions
The system functions are detailed during the system
analysis phase. Figure 6 shows the system function breakdown
diagram for the considered example. In this step, the system functions are arranged hierarchically according to their level of detail.
Figure 6. DIMA System analysis
Capella also has the capability to describe the functional data flow for this detailed model.
Figure 7. System Function Breakdown
The diagram shown in Figure 8 provides a small sample of
the detailed functional dataflow diagram. It is important to
notice that the Capella tool propagates all changes to the
different diagrams of the same development phase. If a new function is added to the system data flow diagram, Capella automatically adds this function to the function breakdown model.
Figure 8. Functional Dataflow
Figure 9 shows a system analysis scenario for the DIMA model. This scenario describes the simple use case in which the pilot sets a waypoint and the system reacts to this action. The diagram shows all the performed functions and their interface messages, allowing a better understanding of the event chain triggered by a user action.
Figure 9. System analysis scenario
VI. CAPELLA MODELLING BASED OPTIMIZATION
Long-established systems engineering processes are
strongly based on textual requirements databases. Despite the
developments in MBSE, the adoption of this new
methodology by industrial projects remains a big challenge.
Figure 10. Textual Bridge – Linking MBSE and MBD
Figure 10 shows the traditional integration of MBSE and
model based design (MBD). In this process the artifacts
generated by MBSE are manually translated to textual
requirements modules. This formal requirements database forms the bridge between the high-level models designed by MBSE and the detailed models created by the MBD
methodology.
After translation, these requirements are also linked to the
upper-level model. This manual approach is feasible for small requirements databases. For complex systems with thousands of requirements, the manual translation and traceability steps are very expensive tasks, demanding a significant number of man-hours to keep the information synchronized.
Once the requirements are translated and traced, a snapshot
of the database is created in order to freeze the requirements.
This important change management step creates a discrepancy
between the MBSE and the requirements database in the
course of time. This can lead to unnecessary work due to the mismatch between these two levels.
A specialist then builds the design model from the baselined requirements, manually taking into account the functional requirements and all known problem constraints. The development process fundamentally
consists of finding any feasible solution.
Usually the design model is validated before system assembly starts. The validation process basically consists of comparing the simulation results against the textual system constraints. The validation is usually done manually, but it can be automated depending on the structure of the textual requirements. This step is also error-prone due to the manual actions and to mismatches between different baselines.
In this paper we propose a new method for bridging the
gap between MBSE and MBD. Figure 11 shows the proposed design process.
Figure 11. General system design process
A. Capella Model
Following the ARCADIA process, the logical architecture stage identifies the logical entities and their relations. At this level, it is possible to group the logically related functions and to decide how the logical functions will be realized. The function constraints are defined based on simulation, previous experience and stakeholder requirements. The following step is to allocate the modelled functions to a physical architecture.
The Capella logical architecture model is then parsed in order to extract the desired functions and their constraints.
Figure 11 shows that the Capella model is automatically
exported to two different modules: System Complexity and
Design Space Exploration Engine.
The main objective of automatically parsing the Capella model is to avoid manually translating the high-level artifacts into textual requirements. This can reduce both the errors and the man-hours needed to accomplish this task.
B. System Complexity
The system complexity order can be estimated based on
the total number of constraints, number of functions to be
allocated and number of resources available. In this paper we
do not provide a formal definition of system complexity; however, the notion of a system complexity order is used to choose the optimization model and the corresponding solution algorithm.
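A minimal sketch of such an estimate, using a simple heuristic of our own (the paper deliberately leaves the formal definition open):

```python
def complexity_order(n_functions, n_devices, n_constraints):
    """Heuristic complexity order: the exponent of the binary decision
    space (N tasks times M devices gives 2**(N*M) candidate allocations)
    plus the number of constraints."""
    return n_functions * n_devices + n_constraints
```

A threshold on this value could, for instance, switch the engine from an exact solver to a heuristic one.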
C. Design Space Exploration Engine
In the traditional design process, an expert manually
constructs a feasible solution based on functional requirements
and system constraints. Manually finding a feasible solution demands increasing resources due to the rise of system complexity. Besides, even for low-complexity systems, few design alternatives are usually evaluated.
In order to automate this step, the design problem is
modelled as an optimization problem. In this paper, we
modelled the functional allocation problem but the
methodology can be extended for different design problems.
The optimization model approach includes the functional
requirements and the system constraints exported from
Capella. Then the design problem is solved using an
optimization algorithm.
Ultimately, the main objective of this step is to find the solutions that compose the Pareto front. Moreover, we aim
to expose the existing trade-offs between design variables. In
the context of multi-objective optimization, we are also
interested in computationally efficient algorithms to explore
the design space. The choice of the optimization model and
corresponding solution algorithm is based on the system
complexity order.
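As an illustration, the non-dominated set of a small sample of objective values can be filtered with a few lines. This is a sketch for a minimization problem; the paper does not prescribe a specific Pareto-filtering algorithm.

```python
def pareto_front(points):
    """Return the non-dominated points of a minimization problem.

    q dominates p when q is no worse in every objective and differs
    from p (componentwise q <= p plus q != p implies a strict
    improvement in at least one objective).
    """
    def dominates(q, p):
        return q != p and all(a <= b for a, b in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, with mass/cost pairs, only the trade-off points on the front survive the filter.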
D. Parametric Design Model
The automation of the design process is based on the
ability to explore different solutions for a defined design
pattern. The parametric model is built to link with the
optimization algorithm. This approach allows the designer to
choose the desired fidelity level based on the system maturity
or available resources.
The avionics function allocation model is described in
section VII. This model is used to explore the design space and
find the Pareto front for this multi-objective optimization
problem.
E. Simulation
This step is the evaluation of the parametric design model.
Based on the model formulation, the functional requirements
and the constraints are evaluated in this stage. The generated
results are consolidated in the system validation step.
F. System Validation
In this step the simulation results are compared to the
functional requirements and system constraints exported from
Capella. This stage is also responsible for evaluating the existing trade-offs between the design variables and their impact on the system requirements and constraints.
In this phase, the optimization results are consolidated and
the architect can choose either a solution in the Pareto front set
or update the specification model and run a new optimization
cycle. The iterative nature of this process allows the system
designer to cope with the lack of information in the early
stages of the development. As the system maturity increases, new simulation models can be integrated into this optimization process, enhancing the accuracy of the solution and narrowing the design space.
VII. OPTIMIZATION MODEL
In this paper we investigate function realization on a DIMA architecture. In this context, each logical function is transformed into a software task running on DIMA hardware [3][4][6]. The set of functions can therefore be expressed as a task set, where the functions are converted into $N$ software tasks:
$T = (\tau_1, \tau_2, \ldots, \tau_N)$
These tasks have to be allocated to DIMA hardware
complying with all the necessary constraints and resources.
During the function analysis, an estimate of processing time is
done for each function. This value can be estimated from
previous experience or can be obtained from a detailed
function model simulation. For each function a value of
required processing time is specified. The function interfaces
also require resources related to communication bandwidth for
input and output messages. Based on the function exchanges
elaborated during system analysis, it is possible to calculate
the amount bandwidth required for each task. The resource set
required for each task is composed by all exigencies for the
task to execute correctly. For a generic model, the task has
demanded resources.
= ( , , … , )
The DIMA architecture has the RDC that provides the
computing power and required interfaces to execute software
tasks. Each device has its own available resources that are
consumed for each assigned task. The available resource set
for a single data concentrator describes the maximum
available resources. The Capella function model contains the required processing time and the exchange interfaces. In terms of DIMA resources, this information is translated into CPU time, input bandwidth and output bandwidth. A device $d_j$ has $K$ available resources:
$C_j = (c_{j,1}, c_{j,2}, \ldots, c_{j,K})$
The software mapping solution requires that each task
shall be allocated only once. Moreover, the resources
demanded by all allocated tasks in a single device cannot
exceed its capacity. This problem can be stated as a binary integer program with $P$ objective functions:
$\min\; f_1(x), f_2(x), \ldots, f_P(x)$
$A_{eq}\, x = b_{eq}$
$A\, x \le b$
The solution vector is composed of binary variables $x_{i,j}$, each expressing whether task $\tau_i$ is allocated to device $d_j$. For the general case with $N$ tasks and $M$ devices:
$x = (x_{1,1}, \ldots, x_{1,M}, x_{2,1}, \ldots, x_{2,M}, \ldots, x_{N,M})$
The equality constraints express that each task is assigned exactly once. To capture this requirement, we set the following equation for each task $\tau_i$:
$\sum_{j=1}^{M} x_{i,j} = 1$
During the allocation process it is imperative to comply with the maximum available resources of each device. This is captured in the model by the inequality constraints: for every resource $k$, the demand of all tasks allocated to device $d_j$ shall be less than or equal to the available resources.
$\sum_{i=1}^{N} r_{i,k} \cdot x_{i,j} \le c_{j,k}$
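These two constraint families can be verified directly on a candidate allocation. Below is a minimal sketch with hypothetical numbers ($K = 1$, CPU time only, expressed as fractions of one device):

```python
def allocation_is_feasible(x, R, C):
    """Check both constraint families of the binary program.

    x[i][j] = 1 when task i runs on device j, R[i][k] is the demand of
    task i for resource k, and C[j][k] is the capacity of device j.
    """
    N, M, K = len(R), len(C), len(C[0])
    # Equality constraints: each task is allocated exactly once.
    if any(sum(x[i]) != 1 for i in range(N)):
        return False
    # Inequality constraints: per-device, per-resource capacity limits.
    return all(
        sum(R[i][k] * x[i][j] for i in range(N)) <= C[j][k]
        for j in range(M) for k in range(K)
    )
```

A solver only needs to search the binary vectors for which this check holds while minimizing the chosen cost functions.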
The cost function can be defined by different figures of
merit found in the literature. The total mass of the shipset is
used as the cost function for our optimization problem. The total weight is considered here because it may have a relevant impact on aircraft performance. There are several other cost functions related to operational cost and maintenance issues.
VIII. CASE STUDY
A Capella logical architecture model containing 500 functions was created for validation purposes. This model is used to explore the software mapping problem. For the sake of simplicity, the system model takes into consideration only the CPU time, i.e. the amount of processing time demanded by the considered task. Each function was assigned a random execution time drawn from a uniform distribution. The optimization model was built using these functions exported from Capella, with the following constraints:
1. Maximum device CPU allocation shall be less than
80%;
2. Each function shall be allocated only once;
3. The initial platform processing capacity shall be 20% greater than the total function execution time.
The first constraint provides a provision for system growth. During the design phase, the system maturity level is usually low, so this margin guarantees that new functions can still be added to the system. The second constraint enforces a single allocation for each function; it means that redundancy management, for example, shall be performed at the Capella design level. The third constraint is related to the quantity of available devices in the platform. The number of devices considered in the initial optimization problem is calculated using the following equation:
$M = \lceil 1.2 \cdot \sum_{i=1}^{N} t_i \rceil$
where $t_i$ is the normalized execution time of task $\tau_i$.
This formulation enables the system architect to find an optimal solution based on a set of defined cost functions. In this case study, we use a single unitary cost function to find a valid architecture that minimizes the number of switches in the platform. The optimization model evaluated in this case study is the following:
$\min\; \mathbf{1}^{T} x$
$\sum_{j=1}^{M} x_{i,j} = 1, \quad \forall i \in \{1, \ldots, N\}$
$\sum_{i=1}^{N} t_i \, x_{i,j} \le 0.8, \quad \forall j \in \{1, \ldots, M\}$
$M = \lceil 1.2 \cdot \sum_{i=1}^{N} t_i \rceil$
$x_{i,j} \in \{0, 1\}, \quad i = 1, \ldots, N, \; j = 1, \ldots, M$
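On a toy instance, this allocation model can be exercised with a greedy first-fit sketch. This is a heuristic of our own for illustration, not the exact GLPK solve reported below, and the task times are hypothetical normalized values:

```python
import math

def first_fit(times, cap=0.8, margin=1.2):
    """Greedy first-fit sketch of the case-study allocation: each task
    goes to the first device whose CPU load stays within the 80% cap,
    with the device count M sized 20% above the total execution time."""
    m = math.ceil(margin * sum(times))   # initial device count, M
    load = [0.0] * m                     # CPU load per device
    assign = [None] * len(times)         # device index per task
    for i, t in enumerate(times):
        j = next(j for j in range(m) if load[j] + t <= cap)
        assign[i] = j
        load[j] += t
    return assign, load
```

Four tasks of 0.25 normalized CPU time each, for example, yield two devices loaded at 75% and 25%, respecting the 80% growth margin.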
This problem was solved for $N = 500$ using the GNU Linear Programming Kit (GLPK) included in Octave 4.0.3 compiled for a 32-bit architecture. Figure 12 shows the execution time in seconds as a function of the number of tasks, and Figure 13 shows the memory allocation for GLPK.
Figure 12. GLPK execution time
Figure 13. GLPK memory allocation
From these results, we find that the execution time increases exponentially with the number of tasks. The same behavior is verified for the memory allocation. In this case study we considered, for the sake of simplicity, a single constraint and a single cost function. For systems larger than 500 tasks, the algorithm did not find a solution due to memory allocation limitations.
IX. FUTURE WORK
In future work we encourage the development of more detailed DIMA models, the construction of new figures of merit and the development of new viewpoints integrated into Capella.
Another interesting field of research includes the
formalization of system complexity. This is an important milestone for choosing the optimization algorithm used by the search engine.
For future developments we intend to achieve a seamless
integration of the different tools used along the design chain.
X. CONCLUSION
In this paper we presented a design method linking model
based systems engineering to architectural synthesis using
optimization techniques. This link is traditionally implemented through a manually written textual requirements database. The proposed solution aims to automatically extract functional requirements and system constraints from the Capella model. This information is then used by a simulation engine in order to explore the design space and find the Pareto-front solutions.
This solution discovery process is automated using an
optimization algorithm. This approach allows the architect to
evaluate a large number of feasible solutions. Besides, the
method exposes the existing trade-offs between the design
variables. The proposed method also eliminates the manual
requirements translation. This approach can empower the
system architect with the necessary framework to cope with
the increasing complexity of modern systems.
We also described the ARCADIA framework with its associated tool Capella. A simple DIMA model was developed in order to demonstrate the modeling concept, and a binary programming model was constructed to automate the synthesis process. In this paper we simulated systems with a single constraint and a single objective. From the results, we can conclude that GLPK can be used for exploring the design space of low-complexity systems. Adding more constraints and new objectives for high-complexity systems will demand the evaluation of different solution algorithms.
XI. ACKNOWLEDGMENT
The research that led to this article was funded by the
Brazilian National Research Council (CNPq) under grant
204962/2014-5. The authors wish to thank all those who
supported the efforts of Capella development.
REFERENCES
[1] J-L. Voirin, S. Bonnet, V. Normand and D. Exertier, “From initial investigations up to large-scale rollout of an MBSE method and its supporting workbench: The THALES experience”, in Proc. of the 25th Annual INCOSE International Symposium, Seattle, USA, July 13-16, 2015.
[2] S. Bonnet, J-L. Voirin, V. Normand and D. Exertier, “Implementing the MBSE Cultural Change: Organization, Coaching and Lessons Learned”, in Proc. of the 25th Annual INCOSE International Symposium, Seattle, USA, July 13-16, 2015.
[3] B. Annighöfer, E. Kleemann and F. Thielecke, "Automated selection,
sizing, and mapping of Integrated Modular Avionics Modules," 2013
IEEE/AIAA 32nd Digital Avionics Systems Conference (DASC), East
Syracuse, NY, 2013, pp. 2E2-1-2E2-15.
[4] C. Zhang and J. Xiao, "Modeling and optimization in Distributed
Integrated Modular Avionics," 2013 IEEE/AIAA 32nd Digital Avionics
Systems Conference (DASC), East Syracuse, NY, 2013, pp. 2E1-1-2E1-
12.
[5] P. Heise, F. Geyer and R. Obermaisser, "Deterministic OpenFlow:
Performance evaluation of SDN hardware for avionic networks,"
Network and Service Management (CNSM), 2015 11th International
Conference on, Barcelona, 2015, pp. 372-377.
[6] X. Zheng, N. Huang, Y. Zhang and X. Li, "Performability optimization
design of virtual links in AFDX networks," 2016 Annual Reliability and
Maintainability Symposium (RAMS), Tucson, AZ, 2016, pp. 1-6.
[7] X. Li, N. Huang and F. Zhao, "A genetic algorithm based configuration
optimization method for AFDX," Reliability, Maintainability and Safety
(ICRMS), 2014 International Conference on, Guangzhou, 2014, pp. 440-
444.
[8] A. Amari, A. Mifdaoui, F. Frances and J. Lacan, "Worst-case timing
analysis of AeroRing A Full Duplex Ethernet ring for safety-critical
avionics," 2016 IEEE World Conference on Factory Communication
Systems (WFCS), Aveiro, Portugal, 2016, pp. 1-8.
[9] O. Hammami, “SYNSYS-ME: Seamless System Engineering to Mechanical Flow Through Multiobjective Optimization and Requirements Analysis”, IEEE SysCon, Mar. 31-Apr. 3, 2014, Ottawa, Canada.
[10] M. Chen and O. Hammami, “A System Engineering Conception of Multi-objective Optimization for Multi-physics System”, in Multiphysics Modelling and Simulation for Systems Design and Monitoring, Applied Condition Monitoring, Volume 2, Springer-Verlag, 2015, pp. 299-306.