This document discusses the challenges with current embedded systems design methodologies and proposes a new approach. It notes that embedded systems are becoming more complex, software-based implementations are more common, and verification is more difficult. It proposes a new methodology with three principles: separation of concerns, a theoretical framework, and platform-based design. This new methodology aims to address issues like design time/cost, verification, constraints, and reuse through tools that span different abstraction levels. It also discusses current problems with embedded software development.
Robert Mars has over 20 years of experience as a software architect and technical leader in diverse industries such as networking, communications, geosciences, and biotechnology. He has a proven track record of successfully leading teams to strategically develop innovative software solutions, interface with stakeholders, and achieve significant cost savings and business results for employers such as Cisco Systems, Genomica, and Continuum Resources.
Performance Optimization: Incorporating Database and Code Optimization Into ... – Michael Findling
Embarcadero offers database and code optimization tools for application developers, database developers, and database administrators to prevent, find, and fix problems that impact performance throughout the system development lifecycle.
Reengineering including reverse & forward engineering – Muhammad Chaudhry
The document discusses forward engineering and reverse engineering. Reverse engineering involves extracting design information from source code through activities like data and processing analysis. This helps understand the internal structure and functionality. Forward engineering applies modern principles to redevelop existing applications, extending their capabilities. It may involve migrating applications to new architectures like client-server or object-oriented models. Reverse engineering provides critical information for both maintenance and reengineering existing software.
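As a small illustration of extracting design information from source code, here is a sketch using Python's standard `ast` module (the sample source and helper names are invented for this example): the functions a module defines, and the calls each one makes, can be recovered automatically, which is a first step toward the data and processing analysis described above.

```python
import ast

# Invented sample source to analyze; in practice this would be read from a file.
SOURCE = """
def load(path):
    return open(path).read()

def report(path):
    data = load(path)
    print(len(data))
"""


def extract_functions(source: str) -> dict:
    """Map each function name to the names of the functions it calls."""
    tree = ast.parse(source)
    result = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
            result[node.name] = calls
    return result


print(extract_functions(SOURCE))
```

Running this reveals, for instance, that `report` depends on `load`, recovering a fragment of the call structure without any design documentation.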
Software performance simulation strategies for high-level embedded system design – Mr. Chanuwan
This document discusses software performance simulation strategies for high-level embedded system design. It introduces instruction set simulators (ISSs), which are accurate but slow. It also discusses binary-level simulation (BLS) and source-level simulation (SLS), which are faster approaches based on native execution. The document proposes a new approach called intermediate source code instrumentation based simulation (iSciSim) that aims to achieve high accuracy, speed, and low complexity for system-level design space exploration.
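To give a rough sense of the source-level idea (a toy sketch only, not the actual iSciSim tool or its interfaces): the application code is annotated with estimated execution costs, so running it natively on the host also advances a virtual target clock. All cycle numbers below are invented for illustration.

```python
class VirtualClock:
    """Accumulates estimated target-CPU cycles during native execution."""

    def __init__(self) -> None:
        self.cycles = 0

    def advance(self, cycles: int) -> None:
        self.cycles += cycles


clock = VirtualClock()


def fir_filter(samples, coeffs):
    """Application code, instrumented with per-block cycle estimates."""
    out = []
    for i in range(len(coeffs) - 1, len(samples)):
        clock.advance(8)          # assumed cost of one outer-loop iteration
        acc = 0
        for j, c in enumerate(coeffs):
            clock.advance(3)      # assumed cost of one multiply-accumulate
            acc += c * samples[i - j]
        out.append(acc)
    return out


result = fir_filter([1, 2, 3, 4], [0.5, 0.5])
print(result, clock.cycles)
```

The program runs at native speed, yet `clock.cycles` provides a timing estimate for the target, which is the trade-off that makes source-level approaches faster than instruction set simulation.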
Prateek Bhatnagar is a senior system engineer with over 2.7 years of experience in mainframe technologies such as COBOL, JCL, REXX, CICS and XML. He has worked on projects for clients including Southern California Edison and Royal Bank of Scotland. His responsibilities have included requirement gathering, design, coding, testing, implementation and providing support. He is proficient in developing batch jobs, debugging, and rapidly learning new technologies. Prateek holds a B.Tech in Mechanical Engineering and certifications in mainframe development and other technical areas.
The document discusses software design and key concepts related to software design including:
1) Software design is the process of planning the architecture, components, interfaces, and other characteristics of a software system.
2) Good software design aims for high cohesion and loose coupling between modules. It involves conceptual design, technical design, and refinement of the design.
3) Modularity, coupling, and cohesion are important design principles. Modularity enhances manageability while loose coupling and high cohesion are design goals.
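These principles can be made concrete with a minimal Python sketch (the class and function names are invented for illustration): the tax component has high cohesion, with a single responsibility, and the caller depends only on an interface, keeping the coupling loose.

```python
from typing import Protocol


class TaxPolicy(Protocol):
    """Interface: callers depend on this, not on any concrete class."""

    def tax_for(self, amount: float) -> float: ...


class FlatTax:
    """Cohesive module: its only responsibility is computing tax."""

    def __init__(self, rate: float) -> None:
        self.rate = rate

    def tax_for(self, amount: float) -> float:
        return amount * self.rate


def total_due(amount: float, policy: TaxPolicy) -> float:
    """Loosely coupled: any TaxPolicy implementation can be swapped in."""
    return amount + policy.tax_for(amount)


print(total_due(100.0, FlatTax(0.2)))  # 120.0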
This document provides an overview of software and software engineering. It defines software, discusses why software is important to modern economies, and outlines some key characteristics of software such as its non-physical nature and tendency to deteriorate over time rather than wear out. The document also introduces common software applications, categories, and costs. Finally, it discusses the importance of software engineering in developing reliable, high-quality software economically.
Model based engineering tutorial (thomas consulting, 4_sep13-1) – seymourmedia
This document discusses model-based system engineering (MBSE) and its application to the design of electro-optical sensors. It describes how MBSE differs from traditional engineering approaches by enabling integrated modeling across disciplines. Examples are given that show how MBSE allows designs to be evaluated earlier and problems to be identified and resolved more quickly. Both advantages and disadvantages of the MBSE approach are outlined. The author's goal of promoting wider adoption of MBSE methods through consulting is also mentioned.
This document discusses integrated project execution (IPE) and the AMEC Paragon solution. IPE aims to enable collaboration across parties, applications, and locations involved in engineering, procurement, and construction projects. The AMEC solution utilizes various software like Aveva, FileNet, PDMS to integrate key project processes like engineering design, procurement, document management. This integrated approach allows for improved quality, reduced timescales, minimized risk through increased visibility and consistency across projects.
This document provides an overview of forward engineering. It defines forward engineering as recreating an existing application using software engineering principles while integrating new requirements. When reengineering mainframe apps for client-server architectures, functionality moves to clients, new GUIs are implemented, database functions move to servers, and communication/security requirements are established. Object-oriented forward engineering first reverse engineers a system to collect data and create models, then extends functionality by defining use cases, data models, and classes. The key difference between forward and reverse engineering is that forward engineering constructs a system for a specific purpose while reverse engineering deconstructs a system to understand or extend it.
Software engineering aims to build software systems that are delivered on time, on budget, with acceptable performance and correct operation. It is concerned with developing theories, methods and tools for professional software development. Software costs, which often dominate overall system costs, are greater to maintain than initially develop. The key attributes of software products like maintainability, dependability and usability depend on the product and environment, with some attributes like efficiency dominating for safety-critical systems. Several process models exist for software development, including waterfall, evolutionary, formal transformation and reuse-based approaches, but no single model is appropriate for all projects and hybrid approaches are often used.
Eight deadly defects in systems engineering and how to fix them – Joseph Kasser
Any organization seeking to adopt or improve systems engineering needs to be aware that research into the nature of systems engineering has identified a number of defects in the current systems engineering paradigm. This paper discusses eight of these defects and ways to fix or compensate for them.
The document discusses the benefits of using a model-based systems engineering (MBSE) approach to develop the architecture for Exploration Flight Test 1 (EFT-1), an unmanned test flight of the Orion spacecraft. Key benefits included providing an integrated view of the technical and programmatic aspects of the complex, distributed EFT-1 network system and managing technical baselines, risks, costs, and other program activities. The MBSE approach reduced documentation needs and enforced systems engineering rigor. Lessons learned included avoiding confusing terminology like "MBSE", taking time for stakeholder understanding and planning, and requiring discipline experts to use the modeling tool for systems engineering.
Reengineering involves improving existing software or business processes by making them more efficient, effective and adaptable to current business needs. It is an iterative process that involves reverse engineering the existing system, redesigning problematic areas, and forward engineering changes by implementing a redesigned prototype and refining it based on feedback. The goal is to create a system with improved functionality, performance, maintainability and alignment with current business goals and technologies.
This document discusses component-based software development (CBSD). It defines a software component and lists its criteria. CBSD allows for higher reuse, simplifies testing and maintenance, and leads to better quality and shorter development cycles compared to traditional software engineering. Example applications of CBSD include COM and CORBA. Content management systems (CMS) are also discussed as they are fundamentally based on CBSD principles, treating pages, images, and other elements as components. WebMatrix and OrchardCMS are presented as tools that facilitate CBSD.
Syncsort and Db2 – How to Get More from Your Db2 Systems – Precisely
In our just-released 2018 State of the Mainframe survey report, over 50% of respondents identified improving Db2 SLA performance as their #1 mainframe database priority for the coming year, with migrating from IMS to Db2 the second highest.
View this on-demand webcast with IDUG to hear key mainframe optimization opportunities and use cases spanning Db2 workload optimization, transparent data migration from IMS to Db2, and capacity management. In this session you’ll learn:
• How to save up to 50% of Db2 CPU time with workload-centric optimization
• The best approach to migrate IMS data to Db2 and standardize application change & enhancement in SQL
• How transaction capacity management ensures resources are right-sized to meet business requirements
Component-based software engineering (CBSE) involves building software using pre-existing software components rather than developing everything from scratch. Well-designed components interact through defined interfaces, are modular, deployable, and replaceable. CBSE allows for increased reliability, reduced risks, effective specialist use, and accelerated development by reusing tested components instead of recreating them. Potential disadvantages include increased initial development time and difficulty testing components for unknown uses.
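A minimal Python sketch of these component properties (the component names are invented for illustration): two interchangeable logger components implement the same defined interface, so either can be deployed or replaced without touching the client code.

```python
from typing import Protocol


class Logger(Protocol):
    """The defined interface through which components interact."""

    def log(self, message: str) -> None: ...


class ConsoleLogger:
    """One deployable implementation of the Logger interface."""

    def log(self, message: str) -> None:
        print(f"[console] {message}")


class MemoryLogger:
    """A drop-in replacement: same interface, different implementation."""

    def __init__(self) -> None:
        self.records: list[str] = []

    def log(self, message: str) -> None:
        self.records.append(message)


def process_order(order_id: int, logger: Logger) -> None:
    # Client code depends only on the Logger interface, not a concrete class.
    logger.log(f"processed order {order_id}")


mem = MemoryLogger()
process_order(42, mem)
print(mem.records)
```

Because `process_order` never names a concrete logger, replacing one component with another requires no change to the client, which is the reuse and replaceability benefit described above.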
The document summarizes a discussion on software architecture reviews for NASA flight projects. It outlines the goals of establishing a NASA-wide Software Architecture Review Board (SARB) to help projects achieve higher reliability and manage complexity through better software architecture. The board would engage with projects early in development to provide feedback on their software architecture design. Benefits mentioned include catching issues early to reduce costs and risks. The charter of the SARB is also summarized as helping spread best practices across NASA centers.
This Application Architecture presentation with voice overlay takes you through a number of fundamentals that remain relevant regardless of environment or technology.
This is the first presentation in a series that will cover each of the architectural domains and enterprise architecture to bind it all together.
This document discusses challenges faced during software component selection for component-based software development. It identifies key challenges as performance, time, component size, fault tolerance, reliability, functionality, compatibility, and available component subsets. Specifically, it states that performance, coupling, cohesion and interfaces impact each other, that using more commercial off-the-shelf components reduces time and cost, and that selecting components based on higher-level programming languages and fault tolerance can help address several of these challenges.
After studying this presentation you will be able to understand component-based software engineering: what components are, what their advantages are, and what COTS components are.
This document summarizes key insights from a presentation on viewing project management through the lens of complexity theory. It discusses how complexity theory originated in the study of natural systems and how its concepts like emergence and non-linearity are relevant to project management. It also notes that while general systems theory promised to connect different fields, project management, cybernetics, and systems thinking ultimately diverged. The document reviews different perspectives on categorizing project complexity and shares insights from interviews where project managers discussed experiencing uncertainty, renegotiating plans, and maintaining progress despite radical uncertainty.
This resume summarizes James O'Hare's experience as a Technology Management and IT professional with over 20 years of experience in roles such as Solutions Architect, Operations Manager, and Director of Infrastructure. The resume outlines his technical skills in areas like Cisco networking, system architecture, data center operations, and ITIL best practices. It also provides details of his work history at companies like NTT Data, CIBER, Velocity Technology Solutions, Catholic Health Initiatives, and others where he implemented IT initiatives, improved processes, and managed teams and systems.
Developing reusable software components for distributed embedded systems – eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses guidelines for specifying data center criticality or tier levels. It analyzes three common classification methods - The Uptime Institute's Tiers, TIA-942, and Syska Hennessy's Criticality Levels. All three methods define four levels of criticality/tier but lack detail. TIA-942 provides the most specificity. The document also provides guidance on choosing a criticality level based on a data center's costs and a business' downtime costs. It associates typical criticality levels with different business applications.
Charles Mandrell has over 30 years of experience in network administration, database management, and systems engineering. He has worked in various roles supporting networks of thousands of users and servers. His experience spans technologies from Novell Netware and Microsoft Windows servers to Cisco routers, Dell servers, and virtualization with VMware.
Bauer Media Group is a large European media company that publishes over 600 magazines and owns multiple radio and TV stations. The author wants Bauer Media to publish their new magazine because Bauer specializes in publishing and will know how to market the magazine to its target audience, potentially reaching many readers. Additionally, the author's magazine includes unique ideas that could interest Bauer's large audience and help the company expand while also earning significant revenue.
Nowadays, haphazard parking is a big problem in Delhi: people park their cars at red lights, in front of gates, in no-parking zones, and almost anywhere else without consideration. Even the authorities struggle to solve these parking problems and the traffic jams caused by such behaviour. For this reason, we are going to organize an event to create awareness among the public.
The document discusses recommendations for student behavior in the hallway or passageway between classrooms. It recommends that students walk calmly through the hallway instead of running, to avoid accidents from running into each other or falling. It also recommends throwing trash in the garbage cans instead of on the ground, and speaking quietly in the hallway so as not to disturb other classes. The overall message is the importance of safe, quiet and clean behavior when moving between classrooms.
Mina Raafat Soliman is seeking a challenging position in a dynamic organization where he can utilize his technical skills. He has a BSc in Manufacturing Engineering and is a member of the Egyptian syndicate of professional engineers. His experience includes roles as a mechanical design engineer, shift maintenance engineer, and current role as a mechanical site engineer. In this role, he is responsible for design, supervision, and execution of HVAC, plumbing, and fire fighting systems along with project management duties. He has strong English, AutoCAD, and Microsoft Office skills.
The document is a curriculum vitae for N.V.V.R.Chandra Rao. It summarizes his educational and professional experiences. He has a B.Tech in Mechanical Engineering and is currently working as a Planning Engineer in Botswana. He has over 7 years of work experience in roles like Senior Planning Engineer, Senior Mechanical Engineer, and Executive Engineer in India. His experiences include maintenance planning, commissioning, and overseeing operations in thermal power plants.
ThinkPalm Technologies is a new age Product Engineering and Enterprise IT services company that focuses on Communications, Enterprise and Mobility technologies through a combination of in-house products, solutions and 3rd party services for our customers
The rush to the edge and new applications around AI are causing a shift in design strategies toward the highest performance per watt, rather than the highest performance or lowest power.
Model driven process for real time embeddedcaijjournal
Embedded systems shape our world nowadays. It’s almost impossible to imagine our day to day life without
it. Examples can include cell phones, home appliances, energy generators, satellites, automotive
components …etc. it is even far more complex if there are real-time and interface constraints.
Developing real-time embedded systems is significantly challenging to developers. Results need not be only
correct, but also in a timely manner. New software development approaches are needed due to the
complexity of functional and non-functional requirements of embedded systems.
Due to the complex context of embedded systems, defects can cause life threatening situations. Delays can
create huge costs, and insufficient productivity can impact the entire industry. The rapid evolution of
software engineering technologies will be a key factor in the successful future development of even more
complex embedded systems.
Software development is shifting from manual programming, to model-driven engineering (MDE). One of
the most important challenges is to manage the increasing complexity of embedded software development,
while maintaining the product’s quality, reducing time to market, and reducing development cost.
MDE is a promising approach that emerged lately. Instead of directly coding the software using
programming languages, developers model software systems using expressive, graphical notations, which
provide a higher abstraction level than programming languages. This is called Model Based Development
(MBD).
Model Based Development if accompanied by Model Based Validation (MBV), will help identify problems
early thus reduce rework cost. Applying tests based on the designed models not only enable early detection
of defects, but also continuous quality assurance. Testing can start in the first iteration of the development
process.
As a result of the model based approach, and in addition to the major advantage of early defects detection,
several time consuming tasks within the classical software development life cycle will be excluded. For
embedded systems development, it’s really important to follow a more time efficient approach.
Big Data Day LA 2016/ Use Case Driven track - From Clusters to Clouds, Hardwa...Data Con LA
Today’s Software Defined environments attempt to remove the weakness of computing hardware from the operational equation. There is no doubt that this is a natural progress away from overpriced, proprietary compute and storage layers. However, even at the heart of any Software Defined universe is an underlying hardware stack that must be robust, reliable and cost effective. Our 20+ years experience delivering over 2000 clusters and clouds has taught us how to properly design and engineer the right hardware solution for Big Data, Cluster and Cloud environments. This presentation will share this knowledge allowing user to make better design decisions for any deployment.
Agile and continuous delivery – How IBM Watson Workspace is builtVincent Burckhardt
Journey and transformations that we have been taking at IBM to implement Cloud Native application. Covers culture, architecture and pipeline changes. This presentation was given at IBM Connect 2017 in San Francisco in Feb 2017.
Tool-Driven Technology Transfer in Software EngineeringHeiko Koziolek
This talk presentst the tool-driven technology transfer process ABB Corporate Research applies in selected software engineering University collaborations. As an example, we have created an add-in to a popular UML tool and developed the tooling in close interaction with the target users. Centering the technology transfer around tool implementations brings many benefits such as the need to make conceptual contributions applicable and the ability to quickly benefit from the new concepts. A challenge to this form of technology transfer is the long-term commitment to the maintenance of the tooling, which we try to address by creating an open developer community. Tool-driven technology transfer projects have proven to be valuable a instrument of bringing advanced software engineering technologies into our organization.
Improved Strategy for Distributed Processing and Network Application Developm...Editor IJCATR
The complexity of software development abstraction and the new development in multi-core computers have shifted the burden of distributed software performance from network and chip designers to software architectures and developers. We need to look at software development strategies that will integrate parallelization of code, concurrency factors, multithreading, distributed resources allocation and distributed processing. In this paper, a new software development strategy that integrates these factors is further experimented on parallelism. The strategy is multidimensional aligns distributed conceptualization along a path. This development strategy mandates application developers to reason along usability, simplicity, resource distribution, parallelization of code where necessary, processing time and cost factors realignment as well as security and concurrency issues in a balanced path from the originating point of the network application to its retirement.
Improved Strategy for Distributed Processing and Network Application DevelopmentEditor IJCATR
The complexity of software development abstraction and the new development in multi-core computers have shifted the
burden of distributed software performance from network and chip designers to software architectures and developers. We need to
look at software development strategies that will integrate parallelization of code, concurrency factors, multithreading, distributed
resources allocation and distributed processing. In this paper, a new software development strategy that integrates these factors is
further experimented on parallelism. The strategy is multidimensional aligns distributed conceptualization along a path. This
development strategy mandates application developers to reason along usability, simplicity, resource distribution, parallelization of
code where necessary, processing time and cost factors realignment as well as security and concurrency issues in a balanced path from
the originating point of the network application to its retirement.
Running head MODEL-BASED SYSTEMS ENGINEERING IMPLEMENTATION 1.docxcowinhelen
Running head: MODEL-BASED SYSTEMS ENGINEERING IMPLEMENTATION 1
Research Project: Model-Based Systems Engineering Implementation
(Name and date withheld)
MODEL-BASED SYSTEMS ENGINEERING IMPLEMENTATION 2
Model-Based Systems Engineering (MBSE) Implementation
Section 1: Requirements Analysis
Section 1 tasks for the MBSE project will identify issues and requirements for
implementing an MBSE system for our Engineering and Systems Integration work.
Problem Definition
Our Engineering department focuses on a niche market of systems engineering, design,
integration, and ongoing maintenance and support of emergency communication and power
systems, predominantly housed in High-Altitude Electromagnetic Pulse (HEMP) protected
mobile environments. The complexity of the integrated voice, data, network, and audio/video
systems we work with, the specialized nature of the work, and the demanding project timelines
have put a strain on our resources and existing processes. We need to leverage common
requirements and design elements across projects and customers while remaining adaptive to
unique and changing customer needs. With people spread across multiple projects, it is harder to
capture changes on one system that should be implemented on others – especially since
requirements and design files are all in separate Word, Excel, Visio and other documents. We
would also like to expand our business to new customers with complex Systems-of-Systems
(SoS) environments that may require protections from HEMP events. To do that, we need to
streamline our processes and reduce redundant work to allow people to reasonably take on
additional projects and make new hires productive on project work more quickly – while
maintaining critical quality factors.
Issues requiring solution. The following list describes the primary issues requiring the
development of an MBSE system for Engineering.
1. Business growth is placing a strain on senior employees with specialized skills
and knowledge.
MODEL-BASED SYSTEMS ENGINEERING IMPLEMENTATION 3
2. System information is contained in a large, diverse set of files (Excel files, Word
documents, Visio diagrams, and CAD files) with no easy way to share technical information.
3. We can’t easily reuse or modify applicable requirements and design elements
between projects. This increases cost and effort of new work.
4. It 's hard to perform change management and assess impacts on a short timeline
across the systems’ lifecycle, again increasing workloads and quality risks.
5. We’re not able to quickly evaluate new customer requirements and designs against
existing systems efficiently, increasing the effort and response time on customer proposals.
6. There is an increasing demand from customers to see architectural views and
interconnecting systems that help them understand the proposed designs.
7. Consistently high workloads cause bottlenecks that slow ...
This document describes Flowtracer/TEM, a collaborative SystemC-based verification environment for distributed ASIC design teams. It addresses long simulation times, complex verification solutions, and large design sizes at the RTL level by using SystemC's higher abstraction and faster simulation. Flowtracer/TEM integrates revision control, batch processing, simulation, and a test database in a web interface to help teams coordinate verification work. It aims to streamline the verification process and improve productivity and communication for distributed teams.
This presentation gives an overview of the Rational Engineering Lifecycle Manager, and explains how it improves the efficiency of product development organizations.
Unsustainable Regaining Control of Uncontrollable AppsCAST
The ever-growing cost to maintain systems continues to crush IT organizations robbing their ability to fund innovation while increasing risks across the organization. There are, however, some tactics to reduce application total ownership cost, reduce complexity and improve sustainability across your portfolio.
The use of an architecture–centered development process for delivering information technology began with
the introduction of client / server based systems. Early client/server and legacy mainframe applications did not
provide the architectural flexibility needed to meet the changing business requirements of the modern
publishing organization. With the introduction of Object Oriented systems, the need for an architecture–
centered process became a critical success factor. Object reuse, layered system components, data
abstraction, web based user interfaces, CORBA, and rapid development and deployment processes all
provide economic incentives for object technologies. However, adopting the latest object oriented technology,
without an adequate understanding of how this technology fits a specific architecture, risks the creation of an
instant legacy system.
Publishing software systems must be architected in order to deal with the current and future needs of the
business organization. Managing software projects using architecture–centered methodologies must be an
intentional step in the process of deploying information systems – not an accidental by–product of the
software acquisition and integration process.
A Software Factory Integrating Rational & WebSphere Toolsghodgkinson
The document discusses how a large automotive retailer integrated Rational Software Architect, WebSphere Message Broker, and Rational Team Concert into a software factory to develop an integration layer between a new point of sale system and SAP backend. Key challenges included a multi-vendor global team and parallel development of UI, integration, and backend layers. The software factory employed model-driven development, continuous integration, and practices like architectural modeling in UML, automated WSDL generation, tracking work items and impediments, and collaborative configuration management to help coordinate distributed development and integrate results.
The document discusses software management and the evolution of approaches to software development. It covers the following key points:
- Traditional "waterfall" models of software development had drawbacks like late risk resolution and focus on documentation over collaboration.
- Newer agile approaches emphasize iterative development, early delivery of working software, and continuous improvement based on feedback.
- Improving software economics involves optimizing factors like size, process, personnel skills, tools/environments, and quality requirements. Techniques like reuse, object orientation, and automated testing can help compress schedules and reduce costs.
- Effective project management requires skills like hiring, communication, decision making, team building, and adapting to changes over time. Developing high
ESF .NET - Accelerated Framework for Enterprise System Re-EngineeringVisionet Systems, Inc.
ESF .Net framework and VSI legacy migration service combine to optimize project delivery time-lines and with better quality.
ESF .NET – Salient Features:
* As opposed to other frameworks ESF .Net is designed for legacy migration
* Makes the work of refactoring the code easy
* Maintains complete separation of data, business from UI layers and other technology specific features
* Elaborate Security features addressing, user authentication, role level authorization and data level security based on organizational units
* Framework layers are modeled to ease conversion of client server/desktop application into web based
* Integrated XML serialization will ease web services integration and adoption. SOA friendly.
* Configuration Management Console
* Supports both client side and server side validations
* Flexible Error handling and logging
This document provides an overview of software engineering and related topics covered in multiple units. Unit 1 discusses the nature of software, web applications, software engineering, software processes, agile development, and process models. It defines software and discusses its unique characteristics compared to hardware. Unit 1 also covers topics like legacy software, agile development principles, and generic and specialized software process models. Subsequent units cover requirements engineering, software design, user interface design, software testing, and other software engineering principles and practices.
This document provides an overview of software engineering and related topics covered in multiple units. Unit 1 discusses the nature of software, web applications, software engineering, software processes, agile development, and process models. It defines software and discusses its unique characteristics compared to hardware. Unit 1 also covers topics like legacy software, agile development principles, and generic and specialized software process models. Subsequent units cover requirements engineering, software design, user interface design, software testing, and other software engineering topics.
Taming Cloud Sprawl - XConf Europe 2023 - Kief.pdfKief Morris
Composable Environments is an architectural approach to implementing environments (using Infrastructure as Code, of course) that are better aligned to support your organization's strategy.
Avoid Snowkflakes as Code and Horizontal Environment Provisioning. Instead, use Application-Driven Infrastructure Provisioning and Just Enough Environments!
Restructuring Technical Debt - A Software and System Quality ApproachAdnan Masood
Agile Software Architecture based overview of the technical debt metaphor … idea is that developers sometimes accept compromises in a system in one dimension (e.g., modularity) to meet an urgent demand in some other dimension (e.g., a deadline), and that such compromises incur a "debt": on which "interest" has to be paid and which the "principal" should be repaid at some point for the long-term health of the project. (ACM)
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
software crisis and the manufacturing problem necessitates radical changes in embedded-systems design. In particular, we propose a rigorous methodology for embedded-software development and platform-based design.
Vision for embedded-system design
Our aim is high: a system design methodology and supporting tools that

• address the supply chain integration problem, the linkage between intellectual property (IP) creators, foundries, semiconductor houses, systems houses, and applications software developers;
• consider metrics to govern embedded-systems design, including cost, weight, size, power dissipation, and required performance;
• work at all abstraction levels, from conception to software and silicon implementation and packaging, with guaranteed correctness and efficient trade-off evaluation; and
• favor reuse by identifying requirements for real plug-and-play operation.
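The "efficient trade-off evaluation" these goals call for can be automated even at a coarse level. As a minimal sketch (the metric names, candidate implementations, and numbers below are hypothetical illustrations, not taken from the article), the following filters candidate implementations against metric constraints and keeps the Pareto-optimal survivors:

```python
# Sketch: evaluate candidate implementations against design metrics.
# All candidate names, metrics, and limits are hypothetical.

def meets_constraints(cand, limits):
    """True if every metric is within its allowed maximum."""
    return all(cand[m] <= limits[m] for m in limits)

def dominates(a, b, metrics):
    """a dominates b if it is no worse on every metric and better on one."""
    return (all(a[m] <= b[m] for m in metrics)
            and any(a[m] < b[m] for m in metrics))

def pareto_front(cands, metrics):
    """Keep candidates that no other candidate dominates."""
    return [c for c in cands
            if not any(dominates(o, c, metrics) for o in cands if o is not c)]

candidates = [
    {"name": "sw-on-dsp", "cost": 5, "power_mw": 120, "latency_us": 40},
    {"name": "hw-accel",  "cost": 9, "power_mw": 60,  "latency_us": 10},
    {"name": "sw-on-mcu", "cost": 4, "power_mw": 150, "latency_us": 90},
]
limits = {"cost": 10, "power_mw": 140, "latency_us": 80}
metrics = ["cost", "power_mw", "latency_us"]

feasible = [c for c in candidates if meets_constraints(c, limits)]
best = pareto_front(feasible, metrics)
print([c["name"] for c in best])  # both survivors are Pareto-optimal
```

Real tools would of course search far richer design spaces, but the shape of the problem (feasibility filtering followed by multi-objective comparison) is the same.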
Roadmaps and Visions
IEEE Design & Test of Computers

Motivations for platform-based design

IC technology lets us integrate so many functions onto a single chip that we can indeed implement an "environment-to-environment" system on a chip. However, we cannot simply view SoCs as an extension of the application-specific IC methodology that has been in vogue for the past 10 years. The economics of SoCs in the ASIC framework are not appealing. The cost of designing and manufacturing SoCs is too high, given the relatively few parts typical in ASIC production. Our platform-based design methodology is an outgrowth of the SoC debate.

The overall goal of electronic system design is to balance production costs with development time and cost in view of performance and functionality constraints. Manufacturing cost depends mainly on a product's hardware components. Minimizing chip size to reduce production cost implies tailoring the hardware to product functionality. The cost of a state-of-the-art fabrication facility continues to rise. A new 300-mm, 0.13- or 0.10-micron, high-volume manufacturing plant today costs about $3.5 billion.¹ The ITRS predicts that although manufacturing complex SoC designs will be feasible, at least down to 50-nm minimum feature sizes, producing practical masks and exposure systems will likely be a major bottleneck for such chips' development. That is, the cost of masks will grow even more rapidly for these fine geometries. A single mask set and probe card for a next-generation chip costs more than $1 million for a complex part, up from less than $100,000 a decade ago.² This leads us to the first motivation for a move to platform-based design:

Motivation A. Increasing mask and manufacturing setup costs are biasing manufacturers toward parts that have guaranteed high-volume production from a single mask set.

Design costs are rising exponentially because of products' increased complexity (especially in embedded software), the challenges posed by physical effects for deep-submicron design, and the shortage of talent. According to Sematech, design productivity is falling behind exponentially with respect to technology advances. Time-to-market constraints are also intensifying so rapidly that even if costs cease to be an issue, it is still becoming plainly impossible to develop complex parts quickly enough. This leads us to a second reason to embrace platform-based design:

Motivation B. Design problems are pushing IC and system companies away from full-custom design methods, toward designs that they can assemble quickly from predesigned and precharacterized components. This places priority on design reuse; correct component assembly; and fast, reliable, efficient compilation from specifications to implementations.

Platform-based design can trade off various aspects of manufacturing costs, nonrecurring expenses, and design productivity. Reuse, flexibility, and efficiency are key to this approach.

References
1. S.P. Dean, "ASICs and Foundries," Integrated Systems Design, Mar. 2001; http://www.isdmag.com/story/OEG20010301S0051.
2. "NRE Charges for SoC Designs to Hit $1M, Study Says," Semiconductor Business News, Oct. 1999; http://www.csdmag.com/story/OEG19991006S0043.

To achieve these goals, our embedded-systems design methodology will leverage three basic principles:
• Orthogonality. Based on the principle of separation of concerns, the methodology will clearly separate behavior from implementation, and communication from computation.

• Theoretical framework. Without a rigorous approach, it isn't possible to achieve correct, efficient, reliable, and robust designs. Solid theoretical foundations will provide the necessary infrastructure for a new generation of interoperable tools that will work at different abstraction levels. These tools will verify, simulate, and map designs from one abstraction level to the next, help choose implementations that meet constraints, and optimize design criteria. The framework will handle both embedded software and hardware designs, and precise semantics will let the framework and tools analyze the design specifications, identify and correct functional errors, and initiate synthesis processes.

• Platforms. Incorporating this common reuse technique into our methodology will substantially reduce design time and cost. We have formalized and elaborated the platform concept to yield an approach that combines hardware and software platforms into a system platform.

Leveraging these principles, we can build an environment in which designing complex systems will take only days instead of the many months needed today.
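The orthogonality principle above can be made concrete in code. In this sketch (an illustration only, not the authors' tooling), the behavior is a pure function, while communication is confined behind a channel abstraction, so either side can be refined or replaced independently of the other:

```python
# Sketch: separating computation from communication (illustration only).

class Channel:
    """Communication is isolated behind this interface; a refinement
    could swap in a bus model or an RTOS message queue unchanged."""
    def __init__(self):
        self._queue = []
    def send(self, msg):
        self._queue.append(msg)
    def recv(self):
        return self._queue.pop(0)  # raises IndexError when empty

def moving_average(window):
    """Pure behavior: no knowledge of how samples arrive or leave."""
    return sum(window) / len(window)

def filter_task(inp, out, width=3):
    """Glue: reads from one channel, computes, writes to another."""
    window = []
    while True:
        try:
            window.append(inp.recv())
        except IndexError:
            break
        window = window[-width:]
        out.send(moving_average(window))

inp, out = Channel(), Channel()
for sample in [3.0, 6.0, 9.0]:
    inp.send(sample)
filter_task(inp, out)
averages = [out.recv() for _ in range(3)]
print(averages)  # [3.0, 4.5, 6.0]
```

Because `moving_average` never touches a channel and `Channel` never touches the arithmetic, each can be verified, simulated, and mapped to an implementation separately, which is the point of the separation.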
Problems with embedded systems
To cope with the tight constraints on performance and cost typical of most embedded systems, programmers write today's embedded software using low-level programming languages such as C or even assembly language. The tools for creating and debugging embedded software are basically the same as those for standard software: compilers, assemblers, debuggers, and cross-compilers. The difference is in quality: most tools for embedded software are rather primitive compared to equivalent tools for richer platforms. However, besides these tools, embedded software also needs hardware support for debugging and performance evaluation, which are far more important for embedded software than for standard software. Another characteristic of embedded software is that, because of performance and memory requirements, it typically uses application-dependent, proprietary operating systems.

When embedded software was simple, there was little need for a more sophisticated approach. However, with the increased complexity of embedded-systems applications, this rather primitive approach has become the bottleneck. Most system companies have enhanced their software design methodologies to increase productivity and product quality. Many have rushed to adopt object-oriented approaches and other syntactically driven methods. Such methods certainly have value in cleaning up embedded-software structure and documentation, but they barely scratch the surface in terms of quality assurance and time to market. To address these concerns, the industry has sought to standardize real-time operating systems (RTOSs), through either de facto standards or standards bodies such as the German automotive industry's Open Systems and the Corresponding Interfaces for Automotive Electronics (OSEK) committee. RTOSs and traditional integrated development environments dominate the embedded-software market. Embedded-software design automation is still a small segment, but work in this area could lead to large productivity gains.
In some applications, the need to capture specifications at high abstraction levels has led to the use of modeling tools such as The MathWorks' Matlab and Simulink. These tools let designers quickly assemble algorithms and simulate behavior. However, the mathematical models these tools support do not cover the full embedded-system design spectrum. For example, they lack formal dataflow support and do not integrate modeling and finite-state machine (FSM) capture. Understanding the mathematical properties of embedded-system functionality is a key part of our vision; at this level, we can achieve the best results in terms of functional correctness and an error-free refinement of the design into its implementation.
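The integration of modeling and FSM capture that the paragraph above finds missing can be sketched in a few lines. This toy example (an illustration only, not the authors' framework) runs a dataflow actor whose firing rule is governed by a finite-state machine:

```python
# Sketch: a dataflow actor controlled by a finite-state machine,
# a toy illustration of combining the two models of computation.

class ModeFSM:
    """Two states: NORMAL passes samples through, SATURATE clamps them.
    Transitions fire on threshold crossings of the input."""
    def __init__(self, threshold):
        self.state = "NORMAL"
        self.threshold = threshold
    def step(self, sample):
        if self.state == "NORMAL" and sample > self.threshold:
            self.state = "SATURATE"
        elif self.state == "SATURATE" and sample <= self.threshold:
            self.state = "NORMAL"
        return self.state

def actor(stream, threshold=10):
    """Dataflow actor: one output token per input token; the FSM
    decides which firing rule applies to each token."""
    fsm = ModeFSM(threshold)
    out = []
    for sample in stream:
        if fsm.step(sample) == "SATURATE":
            out.append(threshold)   # clamp the token
        else:
            out.append(sample)      # pass the token through
    return out

print(actor([4, 12, 7, 15]))  # [4, 10, 7, 10]
```

Because the FSM's transition structure is explicit, properties such as reachability of the SATURATE state can be analyzed formally, which is exactly what a flat simulation model does not offer.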
November–December 2001

Technology boundaries between systems and ICs are blurring. A system that needed a printed circuit board yesterday is a single chip today. Thus, IC providers must understand system engineering and essentially become subsystem providers. This means that IC providers cannot sell "bare silicon" to system houses as they used to. Instead, they must provide supporting elements such as software development tools and IP blocks that make the system implementer's job easier. However, today's IC makers often depend on third parties to provide such IP blocks and tools. As a result, the embedded-systems designer must cope with different user interfaces, programming styles, and IP documentation styles and contents.
Platform-based design

The platform concept has been around for years, but multiple definitions make its interpretation confusing. In general, a platform is an abstraction that covers several possible lower-level refinements. Every platform gives a perspective from which to map higher abstraction layers into the platform and one from which to define the class of lower-level abstractions that the platform implies.
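This two-sided view of a platform, an upward mapping target and a downward class of refinements, can be sketched as a constraint set. The names, attributes, and constraints below are hypothetical illustrations, not from the article:

```python
# Sketch: a platform as a set of constraints over lower-level refinements
# (all names and attributes here are hypothetical).

PLATFORM = {
    "isa": "riscv32",        # fixed by the platform
    "min_ram_kb": 64,        # every conforming instance provides at least this
    "buses": {"spi", "i2c"}, # peripherals every instance must offer
}

def conforms(instance):
    """Downward view: does this microarchitecture belong to the
    class of refinements the platform implies?"""
    return (instance["isa"] == PLATFORM["isa"]
            and instance["ram_kb"] >= PLATFORM["min_ram_kb"]
            and PLATFORM["buses"] <= instance["buses"])

def can_map(application):
    """Upward view: can this application be mapped onto any conforming
    instance, judging only by the platform contract?"""
    return (application["isa"] == PLATFORM["isa"]
            and application["ram_kb"] <= PLATFORM["min_ram_kb"])

chip_a = {"isa": "riscv32", "ram_kb": 128, "buses": {"spi", "i2c", "can"}}
chip_b = {"isa": "riscv32", "ram_kb": 32,  "buses": {"spi"}}
app    = {"isa": "riscv32", "ram_kb": 48}

print(conforms(chip_a), conforms(chip_b), can_map(app))
```

The key property the sketch captures is that an application validated against the platform contract runs on every conforming instance, without the two ever being checked against each other directly.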
Architecture platforms

PC makers develop their products quickly and efficiently around a standard platform that has gradually emerged: the x86 instruction set architecture, a full set of buses, legacy support for the interrupt controller, and the specification of a set of I/O devices. All PCs satisfy these constraints. So in this domain, a platform isn't a standardized detailed hardware microarchitecture, but rather an abstraction characterized by a set of constraints on the architecture. Thus, the platform is an abstraction of a family of microarchitectures.

Chip makers work with another type of platform: a flexible IC that designers can customize for a particular application by "programming" one or more of the chip's components. Programming can mean metal customization (gate arrays), electrical modification (field-programmable gate array personalization), or software that runs on a microprocessor or digital signal processor (DSP). For embedded software, the platform is a fixed microarchitecture that minimizes mask-making costs but is flexible enough to work for a set of applications so that production volume remains high over an extended chip lifetime. Microcontrollers designed for automotive applications, such as the Motorola Black Oak PowerPC, exemplify embedded-software platforms. The problem with this approach is the potential lack of optimization, which can lead to chips that aren't fast or small enough.
A better approach is to develop a family of
similar chips that differ in one or more compo-
nents but are based on the same microproces-
sor, such as Motorola’s Silver and Green Oak.
Such a chip family is also a platform.
In our vision, ICs used for embedded sys-
tems will be developed as an instance of a par-
ticular architecture platform. That is, rather
than assembling them from a collection of inde-
pendently developed blocks of silicon func-
tionality, designers will derive them from a
specific family of microarchitectures—possibly
oriented toward a particular class of problems.
The system developer can then modify the ICs
by extending or reducing them.
In general, platforms are characterized by
programmable components. Thus, each plat-
form instance derived from the architecture plat-
form maintains enough flexibility to support an
application space that guarantees the produc-
tion volumes necessary for economically viable
manufacturing. The library that defines the plat-
form can contain reconfigurable components.
These can be design-time reconfigurable, such
as the Tensilica Xtensa processor, which allows
considerable configuration of the instruction-set
processor for particular applications; or they can be run-
time reconfigurable via reconfigurable logic, as
with field-programmable gate arrays (FPGAs).
The combination of programmable, designer-
configurable processors and runtime-reconfig-
urable logic yields the “highly programmable”
platforms that are the focus of the Mescal project
at the Gigascale Silicon Research Center.3
Platform instances
A designer derives an architecture platform
instance from the platform by choosing a set of
components from the platform library or by set-
ting parameters of the library’s reconfigurable
components.
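As an illustrative sketch (the component names and parameter ranges below are hypothetical, not drawn from any real platform library), deriving an instance amounts to selecting components from the library and fixing the parameters of its configurable ones:

```python
# Hypothetical model of an architecture platform as a component library.
# A platform instance fixes a choice of components and parameter settings.

PLATFORM_LIBRARY = {
    "cpu": ["risc_core_a", "risc_core_b"],     # programmable cores
    "dsp": [None, "dsp_core"],                 # optional DSP
    "fpga_cells": range(0, 10001, 1000),       # reconfigurable-logic budget
}

def derive_instance(choices):
    """Return a platform instance, checking every choice against the library."""
    instance = {}
    for component, choice in choices.items():
        if choice not in PLATFORM_LIBRARY[component]:
            raise ValueError(f"{choice!r} not in platform library for {component!r}")
        instance[component] = choice
    return instance

inst = derive_instance({"cpu": "risc_core_a", "dsp": None, "fpga_cells": 2000})
print(inst)
```

The library validation step captures the key point: an instance is not an arbitrary assembly of silicon blocks but a point inside the space the platform defines.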
Programmable components guarantee a
platform instance's flexibility—that is, its ability to support different applications. Software
programmability yields a more flexible solution
because it is easier to modify; reconfigurable
hardware executes much more quickly and
expends far less power than the corresponding
software. Thus, the trade-off here is between
flexibility and performance.
Platform design issues
Today, choosing an architecture platform is
more an art than a science. From an applica-
tion domain perspective, performance and size
are the constraints that usually determine the
architecture platform. For a particular applica-
tion, a processor must meet a minimum speed,
and the memory system must meet a minimum
size. Because each product has a different set
of functions, the constraints identify different
architecture platforms; more complex applica-
tions yield stronger architectural constraints.
For a semiconductor provider, limits on pro-
duction and design costs add platform con-
straints and consequently reduce the number
of choices. The intersection of the application
constraints and the semiconductor constraints
defines the architecture platforms available for
the final product. This process can result in a
platform instance that is overdesigned for a
given product. In several applications, howev-
er, an overdesigned architecture has been the
perfect vehicle to deliver new software prod-
ucts and extend the application. We believe
that the embedded-systems community will
soon accept some degree of overdesign to
improve design costs and time to market.
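This constraint intersection can be sketched as a filter over candidate platforms; all names and figures below are hypothetical:

```python
# Illustrative candidates: (name, cpu_mhz, memory_kb, unit_cost)
candidates = [
    ("platform_a", 100, 256, 4.0),
    ("platform_b", 200, 512, 6.5),
    ("platform_c", 400, 1024, 9.0),
]

def feasible(cands, min_mhz, min_kb, max_cost):
    """Application constraints set lower bounds; production cost sets an upper bound."""
    return [name for name, mhz, kb, cost in cands
            if mhz >= min_mhz and kb >= min_kb and cost <= max_cost]

# The survivors may be overdesigned for the product at hand, but the
# headroom can carry derivative software products later.
print(feasible(candidates, min_mhz=150, min_kb=256, max_cost=8.0))
```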
A hardware platform’s design emerges from
a trade-off in a complex space that includes
• platform flexibility, which is the application
space that the range of platform architec-
tures can support; and
• the architecture providers' freedom in
designing their hardware platform instances.
Once an architecture platform has been
selected, the design process involves exploring
the design space defined by that platform’s con-
straints. These constraints can apply not only to
the components themselves but also to their
communication mechanism.
Architecture platform-based design is nei-
ther a top-down nor a bottom-up process: it is a
meet-in-the-middle approach. The trend is
toward semiconductor companies defining
platforms and platform instances in close col-
laboration with system companies.
Application developers first choose the
architectural components they require, yield-
ing a platform instance. Then they map their
application’s functions onto the platform
instance; this mapping process includes hard-
ware and software partitioning. For example,
the designers might decide to move a func-
tion from software running on one of the
processors to a hardware block, which could
be full-custom logic, an application-specific
integrated circuit (ASIC), or reconfigurable
logic. Once the partitioning and the platform
instance selection are finalized, the designer
develops the application software’s final and
optimized version.
For the best flexibility and time to market,
most of the implementation uses software.
Recent market analysis indicates that software
accounts for more than 80% of system devel-
opment. Thus, a competitive platform must
offer a powerful software development envi-
ronment. One reason to standardize program-
mable components in platforms is software
reuse. If the instruction set architecture (ISA)
remains constant, software porting is easier.
This brings us to the definition of an application
programming interface (API) platform.1
API platform
To maximize software reuse, the architec-
ture platform must be abstracted at a level
where the application software can use a high-
level interface to the hardware: the API, also
called the programmers’ model. A software
layer performs this abstraction, as shown in
Figure 1 (next page). This layer wraps the essen-
tial parts of the architecture platform:
• the programmable cores and the memory
subsystem through an RTOS,
• the I/O subsystem through the device dri-
vers, and
• the network connection through the net-
work communication subsystem.
In our conceptual framework, the program-
ming language is the abstraction of the ISA, and
the API is the abstraction of a variety of compu-
tational resources (the concurrency model pro-
vided by the RTOS) and available peripherals
(device drivers). The API is a unique abstract rep-
resentation of the architecture platform through
the software layer. With an API so defined, the
application software can be reused for every plat-
form instance. Indeed, the API is itself a platform.
The RTOS schedules available computing
resources and handles communication between
them and the memory subsystem. As platform
architectures move to multiple cores (already
common in second-generation wireless hand-
sets and set-top boxes), the RTOS schedules soft-
ware tasks across different computation engines.
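The reuse argument can be illustrated with a toy model (the class and device names are invented): application code written against the API runs unchanged on any platform instance that implements it.

```python
# Sketch: the platform API hides RTOS and driver details, so application
# software written against it is reusable on every platform instance.

class PlatformAPI:
    """Abstract programmers' model: concurrency via the RTOS, I/O via drivers."""
    def spawn(self, task): raise NotImplementedError
    def write(self, device, data): raise NotImplementedError

class InstanceA(PlatformAPI):
    # One platform instance: a single-core RTOS and a UART driver (hypothetical).
    def __init__(self):
        self.log = []
    def spawn(self, task):
        task(self)                       # single core: run tasks in order
    def write(self, device, data):
        self.log.append((device, data))  # stand-in for a real device driver

def application(api):
    # Application software: depends only on the API, never on the instance.
    api.write("uart0", "hello")

api = InstanceA()
api.spawn(application)
print(api.log)
```

A multicore instance would implement `spawn` with real RTOS task creation; `application` would not change, which is the whole point of the API platform.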
System platform layers
Figure 2 captures the basic idea of the sys-
tem platform layer.1
The vertex where the two
cones meet represents the combination of the
API and the architecture platform. You can
think of the system platform layer as a single
platform obtained by gluing together the upper
layer (the API platform) and the lower layer
(the collection of components comprising the
architecture platform).
A system designer maps an application onto
the abstract representation, choosing from a
family of architectures to optimize cost, effi-
ciency, energy consumption, and flexibility.
With appropriate tools, this mapping can take
place in part automatically. For this to occur,
the tools must be aware of both the architec-
ture features and the API. Thus, the system plat-
form layer combines two platforms and the
tools that map one abstraction onto the other.
In the design space, there is an obvious trade-
off between the API’s abstraction level and the
number and diversity of platform instances cov-
ered. A more abstract API yields a richer set of
platform instances but also makes it more diffi-
cult to choose the optimal platform instance
and map automatically to it. Hence, we envision
using somewhat different abstractions and tools
for several system platform layers.
To generalize our thinking, we view design
primarily as a process of providing abstraction
views. An API platform is a predefined abstrac-
tion layer above a more complex device or sys-
tem that can be used for design at a higher
level. A set of operating system calls is also a
platform in the sense that it provides a layer of
abstraction over an underlying machine.
Following this model, a structural design view
is abstracted into the API model, providing the
basis for the design. To choose the right archi-
tecture platform, we need to export to the API
level an execution model of the architecture
platform that estimates its performance. This
model may include size, power consumption,
and timing. On the other hand, we can pass con-
straints from higher abstraction levels to lower
levels to continue the refinement process and
satisfy the original design constraints. Along with
constraints and estimates, we can also use cost
functions to compare feasible solutions.
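As a sketch, with hypothetical estimates and weights, a cost function over the estimates exported to the API level lets us rank feasible solutions:

```python
# Sketch: each candidate exports estimates (size, power, timing) upward;
# a weighted cost function (weights are hypothetical) compares them.

estimates = {
    "solution_1": {"size_mm2": 12.0, "power_mw": 300.0, "latency_ms": 4.0},
    "solution_2": {"size_mm2": 9.0,  "power_mw": 450.0, "latency_ms": 3.0},
}
weights = {"size_mm2": 1.0, "power_mw": 0.01, "latency_ms": 2.0}

def cost(est):
    return sum(weights[k] * v for k, v in est.items())

best = min(estimates, key=lambda s: cost(estimates[s]))
print(best, cost(estimates[best]))
```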
In summary, the system platform layer is a
comprehensive model that includes the view
[Figure 1: application software sits on the platform API; hardware-dependent software (RTOS, BIOS, device drivers, network communication) sits on the hardware platform, with input/output devices and the network connection below.]
Figure 1. Layered software structure. The platform API
provides the interface between the hardware-
dependent software and the middleware and
application layers of the software stack. The
hardware-dependent software rests on the physical
hardware and network. The platform is usually part of
a network, and thus provides an interface to the
network environment, as illustrated on the right.
of platforms from both the application and the
implementation architecture perspectives, the
vertex where the two cones in Figure 2 meet.
The embedded-systems
methodology roadmap
Now let’s turn to the key requirements for
the evolution of our broader embedded-system
design methodology. We advocate a holistic
approach that includes methodology, support-
ing tools, IP blocks, hardware and software plat-
forms, and supply chain management. Only by
taking a high-level view of the problem can we
devise solutions that will have a real impact on
embedded-systems design.
The essential issue to resolve is the link
between functionality and programmable
platforms. We must start from a high-level
abstraction of system functionality that is
completely implementation independent and
rests on solid theoretical foundations that
allow formal analysis. We must also select the
platform that can support the functionality
while meeting the physical constraints placed
on the final implementation. We must imple-
ment the functionality onto the platform so
that it maintains its properties and meets the
physical constraints.
Our vision is to have an optimized, semiau-
tomated, transparent, verifiable, and mathemat-
ically correct flow from product specification
through implementation for software-dominat-
ed products implemented with highly pro-
grammable platforms. Now let’s discuss the
implications of this vision for the major stages
of design (specification, refinement and
decomposition, and analysis) and implemen-
tation (target platform definition, mapping, and
links to implementation).
Specification
Specification means different things to dif-
ferent people. We understand specification to
be the entry point of the design process. In our
methodology, a specification should contain
three basic elements:
• A denotational description of the system's
functionality that doesn’t imply an imple-
mentation. In general, we can express the
denotational description as a set of equa-
tions and inequalities in an appropriate alge-
bra. Finding a satisfying set of variables
corresponds to executing the specifications.
This unambiguous functionality description
lets us assess properties and evaluate rela-
tionships among the design’s variables.
Imperative specifications that fully define
the communication mechanism and the ele-
ments’ firing rules—FSM specifications,
dataflow models, and event-driven mod-
els—are refinements to the denotational
description. Although imperative specifica-
tions are important, the denotational speci-
fication is both cleaner and more rigorous.
• A set of constraints on the system's final
implementation. In general, the constraints
can also be expressed as a set of equalities
and inequalities, but the variables remain
uninterpreted because they involve quanti-
ties that have no meaning in the denota-
tional description. A specification of an
[Figure 2: the application space (application instance, platform specification) and the architectural space (platform design-space exploration, platform instance) meet at the system platform.]
Figure 2. System platform layer and design flow. The
system platform effectively decouples the application
development process (the upper triangle) from the
architecture implementation process (the lower
triangle).
automotive engine controller's behavior, for
example, cannot refer to its weight, cost,
emissions, or size; these variables are a func-
tion of its physical implementation. These
kinds of constraints limit the possible imple-
mentations of the system’s functionality.
• A set of design criteria. These are often more
qualitative than quantitative characteristics,
such as reliability, testability, maintain-
ability, and manufacturability. The difference
between constraints and criteria is that constraints must be met, whereas you do your
best to optimize criteria. Multiple criteria
often produce multiple conflicts, leading to a
complex optimization.
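The three elements can be illustrated with a toy specification (every relation and bound below is invented): inequalities give the denotational description and the implementation constraints, and a criterion is optimized over the satisfying assignments. "Executing" the specification means finding a satisfying assignment.

```python
# Sketch: a denotational specification as inequalities over design variables.

def satisfies_spec(x, y):
    # Functional description: the design variables must obey these relations.
    return x + y >= 10 and x - y <= 4

def meets_constraints(x, y):
    # Implementation constraints on quantities outside the functional spec.
    return x <= 8 and y <= 8

def criterion(x, y):
    # A design criterion to optimize; not a hard constraint.
    return x + 2 * y

# Executing the specification: search for satisfying assignments, then
# pick the one that optimizes the criterion.
feasible = [(x, y) for x in range(9) for y in range(9)
            if satisfies_spec(x, y) and meets_constraints(x, y)]
best = min(feasible, key=lambda p: criterion(*p))
print(best)
```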
The designer’s goal is to obtain a system
implementation whose behavior satisfies the
denotational description and the set of con-
straints, and optimizes the criteria. The language
we use to express these elements is an open ques-
tion, but the most important issue is not the lan-
guage per se but its semantics. In fact, the issue
of the mathematical models and the algebra
used to compose and manipulate the models is
the essential part of this stage of the design
process. Choosing a language is often a compro-
mise among several needs—for example, ease of
expression, compactness, and composability.
We believe that the specification languages
will be heterogeneous. Although application
domains most likely have common mathemat-
ical underpinnings, they each require different
constructs. Today, the mathematical frame-
works are partitioned in different domains—
that is, FSMs, dataflow graphs, and discrete-
event systems. Each of these domains has a
particular set of intrinsic properties and execu-
tion models that determine the simulation
mechanisms used to evaluate the description.
Although these formalisms are mathematically
rigorous, composing heterogeneous specifica-
tions is very difficult. An appropriate mathematical framework will let us establish properties of compositions of heterogeneous models, something that is impossible today.
The embedded-software community has a
strong interest in Unified Modeling Language as
a standard expression mechanism. Although
UML provides some degree of standardization
in terms of the syntactical aspects of system
design, its semantics are too generic to be use-
ful in our framework. However, rather than
define our own syntax for system-level lan-
guages, we plan to focus on the language
semantics we would need for the required
models of computation and their mathematical
properties. This work could enrich a language
such as UML and contribute to its standardiza-
tion. We have begun to define an embedded
UML profile that would be a basis for object-
oriented embedded-software development.4
Refinement and decomposition
Once the designer has captured a specifica-
tion, the design process should progress toward
implementation through well-defined stages.
The essential idea is to manipulate the descrip-
tion by introducing additional details while
both preserving the original functionality and
its properties and meeting the constraints that
can be evaluated at each new abstraction level.
The smaller the steps are, the easier it is to for-
mally prove that constraints are met and prop-
erties are satisfied.
This process, called successive refinement,
is one of the mainstays of our embedded-soft-
ware methodology. Key issues include the pre-
cise definition of what refinement means in the
appropriate mathematical framework and the
proofs that each refinement maintains the spec-
ification’s original properties.
During successive refinement, it is often con-
venient to break parts of the design description
into smaller parts so that optimization tech-
niques have a better chance of producing inter-
esting results: This is called decomposition. We
must determine whether the decomposed
system satisfies the original specifications.
Composition is the inverse transformation of
decomposition. Thus, successive refinement,
decomposition, and composition are the three
basic operations we use to move toward imple-
mentation. Both composition and decomposi-
tion are difficult when the parts that are to be
composed or that result from the decomposition
use heterogeneous models. The mathematical
framework must provide the foundations for
showing the relationship between the system
before and after these transformations.
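A minimal sketch of such a check, assuming a finite input domain so that exhaustive comparison is possible: the refined description must reproduce the abstract description's observable behavior.

```python
# Sketch: a refinement step is checked by confirming that the refined
# description matches the abstract one on every input of a finite domain.

def abstract_spec(x):
    return x * x                      # abstract functionality: square

def refined_impl(x):
    # Refinement with added detail: square via repeated addition,
    # as a resource-constrained target might implement it.
    total = 0
    for _ in range(abs(x)):
        total += abs(x)
    return total

def is_refinement(spec, impl, domain):
    return all(spec(x) == impl(x) for x in domain)

print(is_refinement(abstract_spec, refined_impl, range(-10, 11)))
```

In practice the comparison is a proof obligation in the mathematical framework rather than exhaustive simulation, but the property being checked is the same.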
Analysis
While marching toward the final implemen-
tation, designers sometimes take paths that lead
to designs that have no chance of satisfying
some of the constraints. Hence, designers must
have tools that evaluate intermediate results with
respect to the constraints. The essential issue is
finding appropriate models at this abstraction
level that can carry enough information about
the ultimate physical implementation. The alter-
native is to go far down the implementation path
and use detailed models that contain the vari-
ables involved in the constraints.
A register-transfer-level (RTL) model of a
processor, together with simulation, can give
very accurate information—but at the cost of
long computing time. In addition, it might
provide information that is more precise than
necessary to assess how a choice meets our
constraints. Static analysis methods such as
Stars5
and abstract dynamic models can offer
up to a thousand times the performance of
simulation using detailed models. At a high
abstraction level, formal analysis might also
be an option.
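A toy version of such an abstract model (the WCET figures are invented designer estimates): a static bound on end-to-end latency can be checked against a deadline long before any detailed RTL model exists.

```python
# Sketch: an abstract timing model carries just enough information to
# check a constraint early, without cycle-accurate simulation.

wcet_us = {"sample": 40, "filter": 120, "decide": 30, "actuate": 25}

def worst_case_latency(schedule):
    """Static bound: sum of worst-case execution times along a schedule."""
    return sum(wcet_us[task] for task in schedule)

DEADLINE_US = 250
latency = worst_case_latency(["sample", "filter", "decide", "actuate"])
print(latency, latency <= DEADLINE_US)
```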
Target platform definition
The methodology must use the right nota-
tion to define the system platform API: the pro-
jection into the embedded-software design
space of the platform resources, services, and
configurability space.
One option is to develop a UML-based
description of a target platform that can
become the target of refinement and analysis.
This description must represent the full range
of services (computation, communication, and
coordination) that the platform offers in both
the software and hardware domains, including
configurability options, but it should emphasize
the embedded-software perspective. For exam-
ple, the description could be a multilevel, hier-
archical, stack-oriented view of platform
services.
To carry the design into the platform hard-
ware domain for more detailed analysis of sys-
tem performance and more detailed hardware
and software verification, we can use emerging
C++ class-library-based languages such as
SystemC 2.0.6
Mapping
The mapping paradigm associates portions of
the embedded system’s refined specification with
specific implementation vehicles contained with-
in the target platform. Mapping is a fundamental
method of configuration and refinement down to
implementation. Constraint-driven mapping to a
choice of implementation fabrics is essential for
hardware and software partitioning and for choos-
ing interfaces between implementation compo-
nents. Using the system platform API descriptions
of a target platform, we can map designs onto the
services that this platform offers and conduct var-
ious design space exploration analyses for per-
formance (and possibly other domains).
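A toy form of constraint-driven mapping (latency and area figures are invented): each function is assigned to an implementation fabric, software on a core or a hardware block, and the small mapping space is searched exhaustively for the cheapest mapping that meets the latency budget.

```python
from itertools import product

# Hypothetical per-function (latency_us, area_cost) on each fabric.
functions = {
    "fft":     {"sw": (100, 0), "hw": (10, 5)},
    "codec":   {"sw": (60, 0),  "hw": (8, 4)},
    "control": {"sw": (5, 0),   "hw": (2, 3)},
}
LATENCY_BUDGET_US = 80

def explore(funcs, budget):
    """Exhaustive design-space exploration over fabric assignments."""
    best = None
    names = list(funcs)
    for fabrics in product(("sw", "hw"), repeat=len(names)):
        latency = sum(funcs[n][f][0] for n, f in zip(names, fabrics))
        area = sum(funcs[n][f][1] for n, f in zip(names, fabrics))
        if latency <= budget and (best is None or area < best[1]):
            best = (dict(zip(names, fabrics)), area)
    return best

print(explore(functions, LATENCY_BUDGET_US))
```

Real exploration spaces are far too large for enumeration; the sketch only shows the shape of the problem that mapping tools must solve.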
Link to implementation
Ideally, a platform-based design relies mostly
on the selection of reusable service components
offered by the platform along with the necessary
configuration. However, derivative products
often contain new or significantly modified func-
tionality. Thus, our methodology must support
software, hardware, and interface synthesis to
allow a comprehensive flow from specification
to implementation. In addition, we need static
configuration (selection of components and con-
nectivity) and dynamic control configuration (for
scheduling, resource allocation, and communi-
cations) to optimize the implementation.
Although designers could satisfactorily select
components by hand, the rest of the work
should be done automatically to minimize
human error and optimize productivity. The
value of automatic synthesis has been proven
both in software and hardware domains. For
software, we can see compilers as synthesis
tools: They map from high-level constructs into
machine code. For both hardware and software,
however, optimization is essential to automatic
synthesis. Without it, automatic synthesis would
be inferior to manual design.
In linking to the implementation, we also
need to go from a high-level representation of the
system’s functionality to the detailed implemen-
tation. Without constraints, this process would be
too difficult to carry out automatically. However,
because we have already selected an architec-
ture and mapped functionality onto virtual
blocks, the automatic approach is conceivable.
Software synthesis. Almost all system-level
design tools that comprehend software imple-
mentations (for example, statecharts, the combi-
nation of Simulink and State Flow, ASCET from
ETAS, and SPW from Cadence) have some code
generation capabilities. The approach is to start
from the high-level description that these tools
support—for example, graphical state machines,
dataflow graphs, and flow diagrams—and gener-
ate C code that implements the constructs of the
high-level (graphical) languages. We can then use
the C code for simulation, or we can output it
toward a C compiler for a target implementation.
In some cases, the code that these tools gen-
erate is acceptable to a system designer.
However, frequently a quick analysis lets an
experienced designer outperform the auto-
matic code generator. In addition, there has
been little work on automatic synthesis of
scheduling algorithms to coordinate the func-
tional components to meet constraints.
Communication- and component-based7
design includes an overall approach to cope not
only with the functional blocks but also with
their compositions. Some researchers have
made progress in the communication area with
time-triggered architectures and synchronous
languages, but much work remains in terms of
optimal selection of a communication mecha-
nism among software components.
In the FSM domain, preliminary results stem-
ming from work on the Berkeley Polis system8
and additional research indicate that FSM-
based descriptions are more amenable to auto-
matic synthesis.
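A toy illustration of why tabular FSM descriptions suit automatic synthesis (the states and symbols are invented): the transition table is itself executable, and code generation is a mechanical translation of the table.

```python
# Sketch: an FSM as a transition table; an interpreter or a code generator
# can be derived from the table mechanically.

TRANSITIONS = {  # (state, input) -> (next_state, output)
    ("idle", "start"): ("busy", "ack"),
    ("busy", "done"):  ("idle", "result"),
    ("busy", "start"): ("busy", "nak"),
}

def run_fsm(state, inputs):
    outputs = []
    for symbol in inputs:
        # Undefined (state, input) pairs keep the state and emit nothing.
        state, out = TRANSITIONS.get((state, symbol), (state, None))
        outputs.append(out)
    return state, outputs

print(run_fsm("idle", ["start", "start", "done"]))
```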
Over the years, researchers have published
some code generation work on dataflow
graphs, and the results are widely available.
Although quality is constantly improving,
humans can still outperform automatic tools in
many cases. We suspect that the problem lies
in taking a global view of the computational
structure and understanding the memory orga-
nization, which is quite complex for highly opti-
mized DSPs. This area of research is vital to
improving the quality of dataflow code gener-
ation to automate software synthesis.
Hardware synthesis. Researchers have
explored hardware synthesis for many years,
and good algorithms are available for generat-
ing netlists of gates from an RTL description of
the logic. If the modules to be generated use
an implementation fabric of gates or
low-complexity logic (standard cells or FPGAs),
we can apply existing commercial tools.
However, if the implementation modules’ hard-
ware architecture is not specified, we must rely
on behavioral synthesis, which, despite much
research, has not been widely adopted. By
leveraging an understanding of platform-based
design, we can advance the state of the art in
this domain.
Another interesting area is mapping onto
highly programmable platforms, including gen-
erating customized application-optimized
processors, as in the HP Labs PICO project.9
Finally, hardware synthesis must also
encompass interface synthesis based on opti-
mized generation from various easily config-
urable communication pattern templates.
OUR VISION for embedded systems unites plat-
form-based design with a unified embedded-
software flow from specification through
implementation. Realizing this vision will
require additional research, tool and method-
ology development, and experimental appli-
cations. Moreover, cooperation is needed
between system houses and IC makers.
System houses must increase software pro-
ductivity and quality while facing increasing
complexity in both customer product require-
ments and the hardware they use to improve
performance and meet constraints. IC makers
must develop platforms that system houses can
use promptly and effectively, with as little risk
as possible in terms of market share and archi-
tecture adequacy. If these two sides of the elec-
tronics world do not adopt a cooperative
vision, the future of embedded systems will be
difficult. If they do, a codesign methodology
will be a reality.
Our work at the Gigascale Silicon Research
Center, UC Berkeley, and Cadence Research
Laboratories is tackling many of the details need-
ed to bring about this reality. The Metropolis sys-
tem, under development with GSRC sponsorship,
embodies the approach we have presented.
Other design systems also address the main
issues we’ve described, although with different
emphases and modeling techniques. Examples
are the Fresco system developed under the direc-
tion of Thomas A. Henzinger, the Mescal system
under the direction of Kurt Keutzer and Sharad
Malik, and the Ptolemy II system under the direc-
tion of Edward A. Lee.
Although our work is still conceptual, we
believe that practical progress is possible, and
that the next few years will see substantive steps
toward our goals.
Acknowledgments
The two areas of investigation presented here
constitute the main research lines of the System-
Level Design effort at the Gigascale Silicon
Research Center—a large, multisite research pro-
ject under the direction of Richard Newton.3
Cadence Research Laboratories and the System-
Level Design Group have also heavily invested in
these areas. We acknowledge, with thanks, the
ideas and suggestions we received for this vision
from Alberto Ferrari, Felice Balarin, Jerry Burch,
Tom Henzinger, Ed Lee, Luciano Lavagno, Lane
Lewis, Richard Newton, Roberto Passerone,
Ellen Sentovich, Frank Schirrmeister, Marco
Sgroi, Ted Vucurevich, and Yoshi Watanabe.
References
1. A. Sangiovanni-Vincentelli and A. Ferrari, “System
Design—Traditional Concepts and New Para-
digms,” Proc. Int’l Conf. Computer Design, IEEE
CS Press, Los Alamitos, Calif., 1999, pp. 2-12.
2. H. Chang et al., Surviving the SOC Revolution:
A Guide to Platform-Based Design, Kluwer
Academic, Norwell, Mass., 1999.
3. K. Keutzer et al., “System Level Design: Orthogo-
nalization of Concerns and Platform-Based Design,”
IEEE Trans. CAD of Integrated Circuits and
Systems, vol. 19, no. 12, Dec. 2000, pp. 1523-1543.
4. G. Martin, L. Lavagno, and J. Louis-Guerin,
“Embedded UML: A Merger of Real-Time UML
and Co-Design,” Proc. Int’l Workshop
Hardware/Software Co-Design (Codes 01), IEEE
CS Press, Los Alamitos, Calif., 2001, pp. 23-28.
5. F. Balarin, “Worst-Case Analysis of Discrete
Systems,” Proc. Int’l Conf. Computer-Aided Design
(ICCAD 99), ACM Press, New York, 1999,
pp. 347-352.
6. S. Swan, “An Introduction to System-Level Model-
ing in SystemC 2.0,” Cadence Design Systems,
San Jose, Calif., 2001; http://www.systemc.org/
papers/SystemC_WP20.pdf.
7. M. Sgroi, L. Lavagno, and A. Sangiovanni-Vincen-
telli, “Formal Models for Embedded System
Design,” IEEE Design & Test of Computers, vol.
17, no. 2, Apr.-June 2000, pp. 14-27.
8. F. Balarin et al., Hardware-Software Co-Design of
Embedded Systems: The POLIS Approach, Kluw-
er Academic, Norwell, Mass., 1997.
9. S. Aditya, B. Ramakrishna Rau, and V. Kathail,
“Automatic Architectural Synthesis of VLIW and
EPIC Processors,” Proc. Int’l Symp. System Syn-
thesis (ISSS 99), IEEE CS Press, Los Alamitos,
Calif., 1999, pp. 107-113.
Alberto Sangiovanni-
Vincentelli holds the Edgar
L. and Harold H. Buttner Chair
of Electrical Engineering and
Computer Sciences at the
University of California, Berke-
ley, where he is also vice chair for Industrial Rela-
tions. He cofounded Cadence and Synopsys and
founded Cadence Research Labs. His research
interests include design tools and methodologies,
large-scale systems, embedded controllers, and
hybrid systems. Sangiovanni-Vincentelli has a
doctoral degree in electrical engineering and
computer science from Politecnico di Milano, Italy.
He is a fellow of the IEEE and a member of the
National Academy of Engineering.
Grant Martin is a fellow in
the labs of Cadence Design
Systems. His research inter-
ests include system-level
design, systems on chips,
platform-based design, and
embedded software. Martin has a BMath and an
MMath in combinatorics and optimization from
the University of Waterloo, Canada.
Direct questions and comments about this arti-
cle to Grant Martin, Cadence Design Systems, 555
River Oaks Parkway, San Jose, CA 95134;
gmartin@cadence.com.