The tool Embedded Architect provides a way to evaluate software performance early in the design process for embedded systems. It allows specifying an "evaluation scenario" based on representative control flows extracted from software source code. This scenario serves as the context for estimating execution time on different candidate architectures. The tool automates much of the scenario specification process and generates performance estimates based on architecture mappings and component parameters.
This document describes a method developed by Fraunhofer to reconstruct the as-built architecture of medical device software. The method discovers both static and runtime architectural structures from source code to help the FDA analyze software for safety issues during regulatory reviews when only documents and test results are submitted, not source code. It addresses FDA needs like identifying unsafe code constructs and assessing testability and safety. Sample outputs show tasks, communication, and module dependencies. The method formalizes the reconstruction using graph relations to aid static analysis tool usage and safety assurance cases.
[2015/2016] AADL (Architecture Analysis and Design Language), Ivano Malavolta
This document introduces the Architecture Analysis and Design Language (AADL) and uses a radar system as an example to demonstrate AADL modeling concepts. It breaks down the radar system into hardware and software components, showing how to model processes, threads, devices, and connections between them. It also models the deployment of software processes onto hardware processors and memories. The example illustrates key AADL concepts like components, features, connections, bindings, and properties.
This document provides an overview of the Architecture Analysis and Design Language (AADL). AADL is a text-based language used to model embedded systems. It separates system architecture into software and hardware components, and separates component types from their implementations. AADL models can be used to generate code, perform analysis, and capture system topology, properties, and modes. Key features include different component types, ports, connectors, bindings, flows, and annexes for extensions.
The document presents the "4+1" view model for describing software architectures. It consists of five views: the logical view, process view, physical view, development view, and use case scenarios. Each view addresses different stakeholder concerns and can be described using its own notation. The logical view describes the object-oriented decomposition. The process view addresses concurrency and distribution. The physical view maps software to hardware. The development view describes module organization. Together these views provide a comprehensive architecture description that addresses multiple stakeholder needs.
The document discusses the purpose and process of software design. It describes software design as where customer requirements, business needs, and technical considerations come together to formulate a product or system. The design model provides detail about data structures, architecture, interfaces, and components. It can be assessed for quality before implementation. The document outlines the tasks in software design including examining data models, selecting an architecture, partitioning models into subsystems, and designing classes, components, interfaces, data structures, and algorithms. It also discusses the phases, methods, strategies, and importance of quality in software design.
This document discusses P-code compilers and the P-code machine. It explains that P-code compilers translate source code into an intermediate P-code format that can be executed by a P-code interpreter on any machine. The P-code machine is a virtual machine that executes P-code instructions. It has a stack-based architecture and uses registers to manage memory areas like the stack, heap, and constants. Stack frames contain information like static and dynamic links to support calling procedures.
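The stack-based execution model described above can be illustrated with a minimal sketch. This is not the actual P-code instruction set; the opcode names (`LIT`, `ADD`, `MUL`, `RET`) are invented for the example, which only shows the general shape of a stack-machine interpreter.

```python
def run(program):
    """Execute a list of (opcode, operand) pairs; return the stack top."""
    stack = []
    for op, arg in program:
        if op == "LIT":      # push a literal constant
            stack.append(arg)
        elif op == "ADD":    # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":    # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":    # return the value on top of the stack
            return stack.pop()
    raise RuntimeError("program ended without RET")

# (2 + 3) * 4 compiled to stack code:
result = run([("LIT", 2), ("LIT", 3), ("ADD", None),
              ("LIT", 4), ("MUL", None), ("RET", None)])
```

A real P-code machine adds registers, stack frames with static and dynamic links, and separate memory areas for the heap and constants, but the dispatch loop has this basic form.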
Concept lattices: a representation space to structure software variability, Ra'Fat Al-Msie'deen
This document discusses using concept lattices to structure software variability in software product lines. It introduces formal concept analysis and how concept lattices can represent the common and variable features of software variants. The document provides examples of how concept lattices and AOC-posets can be constructed from product configurations and used to identify features. It also discusses related work on using concept analysis to reverse engineer feature models from software variants.
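The core of formal concept analysis is a product-by-feature context and the two derivation operators between product sets and feature sets; a formal concept is a pair closed under both. The sketch below illustrates this on an invented three-variant context (the product and feature names are made up for the example).

```python
# Formal context: which software variant owns which features.
context = {
    "P1": {"Base", "Search"},
    "P2": {"Base", "Search", "Sort"},
    "P3": {"Base", "Sort"},
}

def intent(products):
    """Features shared by every product in the given set."""
    sets = [context[p] for p in products]
    return set.intersection(*sets) if sets else set()

def extent(features):
    """Products that own every feature in the given set."""
    return {p for p, fs in context.items() if features <= fs}

# The concept ({P1, P2, P3}, {Base}) captures the commonality of all
# variants; smaller extents correspond to variable features.
common = intent(list(context))
```

Building all such closed pairs and ordering them by extent inclusion yields the concept lattice (or the AOC-poset, which keeps only concepts introducing a product or a feature).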
Software reverse engineering is an active threat against software programs. One of the popular techniques used to make software reverse engineering harder is obfuscation. Among the various control flow obfuscation methods proposed in the last decade there is a lack of inter-functional control flow obfuscation techniques. In this paper we propose an inter-functional control flow obfuscation that manipulates return instructions. In our proposed method each function is split into different units, with each unit ending with a return instruction. The linear order in which functions appear in the program is obscured by shuffling these units, thereby creating an inter-functional control flow obfuscation. Experimental results show that the algorithm performs well against automated reverse engineering attacks.
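The unit-shuffling idea can be sketched at a high level: each function is split into units, the units of all functions are laid out in shuffled order, and a dispatch table preserves the original semantics. This is a Python illustration of the concept only, not the binary-level return-instruction technique from the paper; all names are invented.

```python
import random

# Each unit is one fragment of a function ("ending in a return").
units = {
    "f_a": lambda x: x + 1,
    "f_b": lambda x: x * 2,   # f(x) = (x + 1) * 2
    "g_a": lambda x: x - 3,   # g(x) = x - 3
}
# Dispatch table: the semantic order of units for each function.
order = {"f": ["f_a", "f_b"], "g": ["g_a"]}

# The textual layout of units is shuffled, obscuring the linear order
# in which functions appear, without changing behaviour.
layout = list(units)
random.shuffle(layout)

def call(name, x):
    """Run a function by chaining its units in their original order."""
    for u in order[name]:
        x = units[u](x)
    return x
```

An attacker reading `layout` no longer sees function bodies as contiguous runs; only the dispatch information recovers the control flow.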
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
Upgrading to SystemVerilog for FPGA Designs, Srinivasan Venkataramanan, CVC, FPGA Central
This document discusses upgrading FPGA designs to SystemVerilog. It presents an agenda that covers SystemVerilog constructs for RTL design, interfaces, assertions, and success stories. It then discusses the SystemVerilog-FPGA ecosystem. The presenter has over 13 years of experience in VLSI design and verification and has authored books on verification topics including SystemVerilog assertions. SystemVerilog is a superset of Verilog-2001 and offers enhanced constructs for modeling logic, interfaces, testbenches and connecting to C/C++.
The document describes the analysis-synthesis model of compilation which has two parts: analysis breaks down the source program into pieces and creates an intermediate representation, and synthesis constructs the target program from the intermediate representation. During analysis, the operations of the source program are determined and recorded in a syntax tree where each node represents an operation and children are the arguments.
Formal Specification in Software Engineering (SE9), koolkampus
This document discusses formal specification techniques for software. It describes algebraic techniques for specifying interfaces as abstract data types and model-based techniques for specifying system behavior. Algebraic specifications define operations and their relationships, while model-based specifications represent the system state using mathematical constructs like sets and sequences. Formal specification finds errors earlier and reduces rework, though it requires more upfront effort. The document also provides an example of formally specifying an air traffic control system and insulin pump.
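The algebraic style mentioned above specifies an abstract data type by axioms relating its operations rather than by an implementation. As a hedged illustration (the stack ADT is a standard textbook example, not taken from this document), the axioms can be stated and checked against one concrete representation:

```python
# One concrete model of the stack ADT, using immutable tuples.
def empty():        return ()
def push(s, x):     return s + (x,)
def pop(s):         return s[:-1]
def top(s):         return s[-1]
def is_empty(s):    return s == ()

# Axioms of the algebraic specification, checked on the model:
s = push(push(empty(), 1), 2)
assert top(push(s, 9)) == 9       # top(push(s, x)) = x
assert pop(push(s, 9)) == s       # pop(push(s, x)) = s
assert is_empty(empty())          # is_empty(empty()) = true
assert not is_empty(push(s, 9))   # is_empty(push(s, x)) = false
```

A model-based specification would instead describe the system state directly, e.g. as a sequence, with pre- and post-conditions on each operation.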
This document discusses different techniques for extracting architecture from code, including clustered-based and pattern-based techniques. Clustered-based techniques like FOCUS and ROMANTIC group components using hierarchical clustering. Pattern-based techniques use a top-down approach with human guidance to identify architectural patterns in code. The document also covers motivation for architecture extraction, including handling legacy code and system evolution.
The document discusses the three phases of analysis in compiling a source program:
1) Linear analysis involves grouping characters into tokens with collective meanings like identifiers and operators.
2) Hierarchical analysis groups tokens into nested structures with collective meanings like expressions, represented by parse trees.
3) Semantic analysis checks that program components fit together meaningfully through type checking and ensuring operators have permitted operand types.
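The linear-analysis phase in step 1 can be sketched as a small lexer that groups characters into tokens. The token classes (`NUM`, `ID`, `OP`) and the operator set are illustrative choices, not a full language definition.

```python
import re

# One alternation per token class; leading whitespace is skipped.
TOKEN_RE = re.compile(
    r"\s*(?:(?P<NUM>\d+)|(?P<ID>[A-Za-z_]\w*)|(?P<OP>[+\-*/=]))"
)

def tokenize(source):
    """Group characters into (kind, text) tokens."""
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN_RE.match(source, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}")
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens
```

Hierarchical analysis would then parse this token stream into nested structures (a parse tree), and semantic analysis would check, for instance, that both operands of `*` have arithmetic types.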
The document discusses the major phases of a compiler:
1. Syntax analysis parses the source code and produces an abstract syntax tree.
2. Contextual analysis checks the program for errors like type checking and scope and annotates the abstract syntax tree.
3. Code generation transforms the decorated abstract syntax tree into object code.
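The final step above, turning a syntax tree into object code, can be sketched as a tree walk that emits stack-machine instructions. The tuple-based node shapes and the opcode names are assumptions for the example.

```python
def gen(node):
    """Emit postfix stack code for an expression AST."""
    kind = node[0]
    if kind == "num":          # leaf: push the constant
        return [("PUSH", node[1])]
    if kind in ("+", "*"):     # binary op: children first, then the op
        op = "ADD" if kind == "+" else "MUL"
        return gen(node[1]) + gen(node[2]) + [(op, None)]
    raise ValueError(f"unknown node kind: {kind}")

# AST for 1 + 2 * 3, as a parser would build it (precedence resolved):
ast = ("+", ("num", 1), ("*", ("num", 2), ("num", 3)))
code = gen(ast)
```

In a real compiler the tree would already be decorated by contextual analysis (types, scopes), and `gen` would consult those annotations, e.g. to choose between integer and floating-point instructions.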
This document provides an overview of compiler design and its various phases. It discusses the different phases of a compiler including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It also describes the structure of a compiler and how it translates a source program through various representations and analyses until generating target code. Key concepts covered include context-free grammars, parsing techniques, symbol tables, intermediate representations, and code optimization.
The document discusses the key phases and components of a compiler. It describes a compiler as a program that translates a program written in a source language into an equivalent program in a target language. The main phases covered are lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, code generation, and symbol table management. Lexical analysis involves breaking the source code into tokens, while syntax and semantic analysis ensure grammatical correctness and type checking. The output of these phases can undergo code optimizations before final code generation in the target language.
This document presents a model for evaluating the availability of automotive software architectures. The model is implemented as a reasoning framework in the ArchE architecture expert system. The model analyzes how effective watchdog mechanisms are in improving system availability when failures occur. A watchdog is a separate processor that monitors the main CPU and triggers a reset if the CPU fails. The model allows architects to quantitatively analyze how well their design meets availability requirements and identifies improvements to better handle failures.
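The quantitative effect of a watchdog can be illustrated with the standard steady-state availability formula, availability = MTTF / (MTTF + MTTR): the watchdog improves availability not by preventing failures but by shrinking the repair time, since an automatic reset is far faster than a manual restart. The numbers below are invented for illustration, not taken from the document.

```python
def availability(mttf_h, mttr_h):
    """Steady-state availability from mean time to failure / to repair (hours)."""
    return mttf_h / (mttf_h + mttr_h)

# Same failure rate; only the recovery mechanism differs.
without_watchdog = availability(mttf_h=1000.0, mttr_h=2.0)     # manual restart
with_watchdog    = availability(mttf_h=1000.0, mttr_h=0.001)   # ~3.6 s reset
```

A reasoning framework like the one described can evaluate such expressions over candidate designs and flag where the availability requirement is not yet met.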
The document discusses component diagrams in UML. It explains that a component diagram shows the structure and relationships between software components, including their dependencies. Components can consist of classes, packages, interfaces, and more. The diagram shows the static design and implementation view of a system by depicting components, interfaces, ports, and how they interconnect.
Performance prediction for software architectures, Mr. Chanuwan
The document proposes an approach called APPEAR for predicting software performance in component-based systems. APPEAR uses both structural and statistical modeling techniques. It consists of two main parts: (1) calibrating a statistical regression model by measuring performance of existing applications, and (2) using the calibrated model to predict performance of new applications. Both parts are based on a model that describes relevant execution properties in terms of a "signature". The method supports flexible choice of parts modeled structurally versus statistically. It is being validated on two industrial case studies.
This document proposes an approach called APPEAR (Analysis and Prediction of Performance for Evolving Architectures) for predicting software performance in component-based systems. APPEAR uses both structural and statistical modeling techniques. Structural modeling reasons about component properties, while statistical modeling abstracts irrelevant execution details. APPEAR consists of two parts: 1) calibrating a statistical regression model by measuring existing applications, and 2) using the calibrated model to predict new application performance. Both parts are based on a signature model describing relevant execution properties. APPEAR supports choosing parts for structural vs. statistical modeling to balance accuracy and effort. It is being validated on two industrial case studies.
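The statistical half of such an approach can be sketched as fitting a regression model from measured executions and then predicting a new application from its signature. Here the signature is reduced to a single invented feature (say, a count of service calls) and the measurements are made up; APPEAR's actual signature model is richer.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (one signature feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Calibration: measured (signature value, execution time in ms) pairs
# from existing applications.
a, b = fit_line([10, 20, 30, 40], [105, 202, 305, 398])

# Prediction: estimate a new application from its signature alone.
predicted_ms = a + b * 25
```

The structural half of the method decides which execution properties enter the signature at all; the regression then absorbs everything deemed irrelevant to model explicitly.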
Architectural design steps, Representing the system in context, Archetypes, instantiations of system, Refine architecture into components, Refine components structure, ADL, Fundamentals of Software Engineering
Software engineering is the application of engineering principles to the design, development, and maintenance of software. It was introduced to address the problems of low-quality software projects.
The document discusses digital design methodology at different levels of abstraction: gate level, register level, and processor level. It describes how a digital design can be represented behaviorally, structurally, and through physical implementation. The key levels in computer design are the processor level (architecture), register transfer level, and gate level. Lower levels have more detail but faster speeds, while higher levels have less detail but allow more complex functions. Computer-aided design tools like simulators and synthesizers help address complexity through iterative design processes.
This document discusses semantic interoperability and reasoning techniques for heterogeneous IoT devices and data in smart buildings. It describes using ontologies and semantic annotations to model building components, properties, and their relationships. Semantic matching of component inputs and outputs can then enable automatic configuration of monitoring and control systems based on the available devices. Reasoning over the semantic knowledge graph allows reconfiguration when devices are added, removed or properties change over time.
This document provides an introduction to software architecture concepts. It defines key terms like software architecture, architectural styles, patterns, elements and stakeholders.
It describes software architecture as the set of principal design decisions about a system. The main elements are components, connectors and configuration. Architectural styles and patterns provide general and specific design decisions to organize systems. Models are used to capture architectural designs. Architecture influences various software development processes. Stakeholders in architecture include architects, developers, testers, managers and users.
This document provides an overview of computer architecture and organization.
The document defines a system and describes the hardware and software components of a computer system. It then discusses different levels of design in computer architecture - the gate level, register level, and processor level. Finally, it provides examples of common components used at the register level like multiplexers, decoders, registers, and counters.
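Two of the register-level components listed above are easy to model behaviourally. The sketch below uses Python ints (0/1) for bit values; the widths and interfaces are illustrative, not from the document.

```python
def mux(select, inputs):
    """n-way multiplexer: route the selected input to the output."""
    return inputs[select]

def decoder(code, width):
    """Binary decoder: assert exactly one of 2**width output lines."""
    return [1 if i == code else 0 for i in range(2 ** width)]
```

A register would add state (a stored value updated on a clock edge), and a counter would add an increment on each tick; HDL descriptions at this level look much the same, just with explicit clocking.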
The document describes a proposed system for evaluating the performance of embedded software. The system consists of four main components: 1) a code analyzer that inserts additional code into the source code and compiles it, 2) testing agents that execute performance tests on the target system, 3) a data analyzer that translates raw test results into standardized APIs, and 4) a report viewer that offers graphical reports of the test results using those APIs. The proposed system aims to help developers easily and intuitively analyze an embedded software's performance and resource utilization without requiring additional hardware.
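What a code analyzer of this kind inserts can be sketched as a software-only probe: wrap each function of interest so entry/exit timestamps land in a raw trace that a data analyzer would later translate. The probe format and names are assumptions for illustration, not the system's actual instrumentation.

```python
import functools
import time

trace = []  # raw results for the "data analyzer" stage

def probe(fn):
    """Record (function name, elapsed seconds) for every call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            trace.append((fn.__name__, time.perf_counter() - t0))
    return wrapper

@probe
def control_loop():
    return sum(range(1000))
```

Because the probes are compiled into the software itself, timing data comes off the target with no extra measurement hardware, at the cost of a small instrumentation overhead.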
A system for performance evaluation of embedded software, Mr. Chanuwan
The document proposes a performance evaluation system for embedded software. The system consists of a code analyzer, testing agents, data analyzer, and report viewer. The code analyzer inserts additional code into the source code and compiles it. Testing agents execute performance tests on the target system. The data analyzer translates raw test results into APIs. The report viewer provides graphical reports using the APIs. The system aims to help developers easily and intuitively analyze software performance without additional hardware.
The document discusses software architecture and the Rational Unified Process (RUP). It provides definitions of key terms:
1) Software architecture is a structural plan that describes system elements, how they fit together, and how they work together to fulfill requirements. It guides implementation and sets expectations.
2) The RUP is an iterative process focusing on requirements and design. Each iteration captures requirements, performs analysis/design, implements, and tests to evolve the system. Risks are assessed and prioritized to guide iterations.
3) The RUP uses multiple "views" including use cases, components, and interactions to represent a system from different stakeholder perspectives.
A framework to performance analysis of software architectural stylesijfcstjournal
An executable system architecture plays a significant role in the successful production of large and distributed systems. Assessing the effect of different architecture design decisions can decrease the time and cost of software production, especially when these decisions concern the non-functional properties of the system. Performance is a non-functional property that relates to the timing behaviour of a system. In this paper we propose an approach for modelling and analysing performance at the architecture level. To do this, we follow a general process that needs two formal notations for specifying the architecture and performance models of a system. We show how Stochastic Process Algebra (SPA), in the form of the PEPA language, can be used for performance modelling and analysis of software architectures modelled using Graph Transformation Systems (GTS). To enable performance analysis of the architecture model, an equivalent PEPA model is constructed by transformation. The transformed performance model of the architecture has been analysed with the PEPA toolkit for properties such as throughput, sensitivity, response time and utilisation rate. The analysis results are explained with regard to a realistic case study.
The document discusses development view and physical view models in software architecture. It provides details on:
1) The development view uses modules, packages, and namespaces to organize classes and support multiple developers. Package and component diagrams are used.
2) The physical view maps software components to hardware and describes deployment. It shows how components in logical, process, and development views map to nodes in the runtime environment.
3) Component diagrams classify groups of classes into components to support reusability and interchangeability. Deployment diagrams depict the physical configuration of a software system deployed across hardware servers and networks.
Software Architecture: views and viewpointsHenry Muccini
This is an introductory lecture to Software Architecture Views and Viewpoints, part of the Advanced Software Engineering course, at the University of L'Aquila, Italy (www.di.univaq.it/muccini/SE+/2012)
The following presentation covers the basics of Software Architecture and related topics. Most of the information is given in short phrases; refer to the Wikipedia article on software architecture for more detail.
This is meant to be a brief slideshow only.
Similar to Embedded Architect: a tool for early performance evaluation of embedded software (20)
This document discusses using multithreading in Java with LEJOS to control a line following robot. It presents the original single-threaded code and then improves it using two threads - one for line following and one for obstacle detection. The main program starts each thread and uses a DataExchange class to allow the threads to communicate and coordinate the robot's behavior. When no obstacles are detected, the line following thread controls the motors to follow a black line. When an obstacle is detected within 25cm, the obstacle detection thread stops the line following behavior.
High level programming of embedded hard real-time devicesMr. Chanuwan
This document describes a new Java virtual machine called Fiji VM that targets real-time embedded systems. The Fiji VM ahead-of-time compiles Java bytecode to C code to run on embedded hardware with minimal overhead. Evaluation shows the Fiji VM can achieve throughput close to C for a collision detection benchmark on a LEON3 processor, while still meeting hard real-time deadlines through the use of a concurrent, real-time garbage collector.
Runtime performance evaluation of embedded softwareMr. Chanuwan
The document describes a system for measuring runtime performance of embedded software. The system makes minor changes to the real-time kernel to enable measuring execution times of processes and programs without significantly impacting performance. It was used to evaluate a real-time garbage collector and provided valuable debugging information. The system assigns unique IDs to processes and interrupts and calls an output function on certain events to log identification numbers for external analysis of timing and scheduling.
Performance testing based on time complexity analysis for embedded softwareMr. Chanuwan
The document discusses performance testing of embedded software based on time complexity analysis. It presents a method to:
1) Statically analyze software modules and compute their time complexity based on architecture design.
2) Collect runtime data from modules during testing to compare actual vs expected time complexity and detect abnormalities.
3) Experiments on an embedded human-machine interface project showed the method can find inconsistencies between design and implementation.
High-Performance Timing Simulation of Embedded SoftwareMr. Chanuwan
This paper presents a hybrid approach for high-performance timing simulation of embedded software. The approach combines static execution time analysis and back-annotation of timing information into a SystemC model. In the first step, static timing analysis determines the cycle count for each basic block using processor models. This timing information is then back-annotated into SystemC. Dynamic effects like branches and caches are handled by instrumentation code. The generated code can be simulated with 90% of the speed of untimed software. This hybrid approach aims to achieve fast and accurate timing simulation of embedded software.
High performance operating system controlled memory compressionMr. Chanuwan
This document describes a new software-based online memory compression algorithm that can increase available memory for applications by 150% without modifying applications or hardware. The algorithm, called pattern-based partial match (PBPM), explores frequent patterns within memory words and exploits similarities between words to achieve high compression ratios comparable to Lempel-Ziv compression but with half the runtime. An adaptive memory management technique is also presented to pre-allocate memory for compressed data, further increasing available memory by up to 13%. Experimental results found these techniques effective at doubling the usable memory for a wide range of applications on an embedded portable device.
Performance and memory profiling for embedded system designMr. Chanuwan
1. Memtrace is a profiling tool that provides fast and accurate performance and memory access analysis of embedded systems.
2. It analyzes memory accesses and timing of applications running on an instruction set simulator without needing instrumentation code.
3. Memtrace generates detailed profiling results for each function and variable that can be used to optimize software, design hardware accelerators, and schedule system tasks.
Application scenarios in streaming oriented embedded-system designMr. Chanuwan
This document introduces the concept of application scenarios for streaming-oriented embedded system design. It defines application scenarios as sets of similar operation modes grouped by their resource usage. The document outlines a three-step methodology for incorporating application scenarios into the design process: 1) discovering scenarios by identifying and clustering similar operation modes, 2) deriving predictors to determine the active scenario, and 3) exploiting scenarios to optimize design aspects like energy efficiency. It also discusses different ways to classify and discover scenarios, and provides examples of how previous works have used scenarios to optimize memory usage, voltage scaling, and multi-task scheduling.
Software performance simulation strategies for high-level embedded system designMr. Chanuwan
This document discusses software performance simulation strategies for high-level embedded system design. It introduces instruction set simulators (ISSs), which are accurate but slow. It also discusses binary-level simulation (BLS) and source-level simulation (SLS), which are faster approaches based on native execution. The document proposes a new approach called intermediate source code instrumentation based simulation (iSciSim) that aims to achieve high accuracy, speed, and low complexity for system-level design space exploration.
Object and method exploration for embedded systemsMr. Chanuwan
This document discusses methodologies for exploring object-oriented software design for embedded systems. It introduces a two-part methodology: 1) Method exploration optimizes method implementations by automatically selecting software/hardware components to match application requirements. 2) Object exploration explores object organization by transforming dynamic allocations to static to reduce overhead while maintaining low memory costs. Experimental results on an MP3 player show this design space exploration can provide solutions optimized for performance, energy use, and memory footprint.
Model-Based Performance Prediction in Software Development: A SurveyMr. Chanuwan
This document provides a survey of model-based approaches for predicting software performance early in the development lifecycle. It reviews approaches that use queueing networks, stochastic Petri nets, and other models. The approaches are evaluated based on how integrated the software and performance models are, how early performance analysis can be done in the lifecycle, and the level of automation support. The survey finds that while progress has been made, fully integrated solutions spanning the entire lifecycle are still needed. Promising future work includes approaches with more semantic integration of models and higher degrees of automation.
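As a flavor of what the queueing-model approaches compute: even the simplest single-server queue (M/M/1), with arrival rate $\lambda$ and service rate $\mu$ where $\lambda < \mu$, gives closed-form performance predictions for utilization $\rho$, mean number of jobs in the system $N$, and mean response time $R$:

```latex
\rho = \frac{\lambda}{\mu}, \qquad
N = \frac{\rho}{1 - \rho}, \qquad
R = \frac{1}{\mu - \lambda}
```

Queueing networks and stochastic Petri nets generalize these quantities from a single server to whole software architectures, which is what the surveyed approaches automate.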
2. A component is hardware or software whose specification is fixed, though possibly parameterized. A component represents some portion of an architecture, ranging from a single subsystem to a frozen portion of the configuration consisting of several subsystems and connections.

2.2 Design Space Exploration

Design space exploration (DSE) research can be classified as addressing aspects of representation, estimation, and exploration algorithms. As shown in Figure 1, a DSE framework generates candidate architectures using a library of components. Each candidate is evaluated based on a set of metrics that may guide future architecture generation. Our Embedded Architect tool supports an evaluator in an existing framework by providing a rapid estimate of a performance metric.

[Figure 1. DSE flow: a library of components feeds a generator that produces a candidate architecture; an evaluator scores each candidate against metrics that guide further generation.]

2.3 Scenario Specification

Intuitively, an evaluation scenario corresponds to a walk through the collection of CFGs that expresses the functional architecture. The flow to specify a scenario is shown in Figure 2, which begins with a set of CFGs to analyze and the start/end nodes of the scenario. To define a scenario, the designer needs to constrain all control operation predicates. An initial pruning cycle analyzes the control dependences between the start and end nodes and extracts predicate constraints. This begins an automatic cycle in which constraints are applied, nodes are pruned from the CFG, and data flow facts are propagated. If the data flow fact propagation results in new constraints on any control operations, then subsequent automatic cycles continue until no new constraints are found. If there are any control operations left to constrain, a manual constraint cycle begins in which the designer provides a constraint. Using this constraint, more automatic constraint cycles are attempted, continuing until all predicates are constrained.

[Figure 2. Flow to specify evaluation scenario: initial pruning due to end nodes; find initial constraints; constrain the CFG; prune nodes and propagate facts; when automatic cycles stall, the designer supplies a constraint.]

3. Implementation of Embedded Architect

An overview of the tool is shown in Figure 3. To provide for a comparison of performance estimates to code executing on real hardware, the starting point is the gcc compiler, as it can produce production-quality code for a variety of processor architectures. The compiler was instrumented to export a CFG, which is imported into Embedded Architect. The dashed line represents Embedded Architect, and the shaded portion indicates the focus of the research prototype.

[Figure 3. Embedded Architect implementation: gcc compiles the source and exports a CFG into the tool; internal representations (CFG, PDG), a correlator, and a trace generator produce cycle estimates that are reported through a user interface, with both automatic and manual constraint paths.]

4. Future Work

Embedded Architect analyzes a candidate architecture and eventually leads to a performance estimate. In addition to comparing the fidelity of the estimates, the efficacy of the scenario specification is demonstrated based on the number of control operation nodes eliminated due to pruning (more is better), the number of constraints found automatically from propagation of data flow facts (more is better), and the number of manual constraints supplied by the designer (fewer is better).