A TAXONOMY OF PERFORMANCE ASSURANCE METHODOLOGIES AND ITS APPLICATION IN HIGH PERFORMANCE COMPUTER ARCHITECTURES
International Journal of Software Engineering & Applications (IJSEA), Vol.4, No.2, March 2013
DOI: 10.5121/ijsea.2013.4201
Hemant Rotithor
Microprocessor Architecture Group (IDGa)
Intel Corporation, Hillsboro, OR 97124, USA
hemant.g.rotithor@intel.com
ABSTRACT
This paper presents a systematic approach to the complex problem of high confidence performance
assurance of high performance architectures, based on methods used over several generations of industrial
microprocessors. A taxonomy is presented for performance assurance through three key stages of a product
life cycle: high level performance, RTL performance, and silicon performance. The proposed taxonomy
includes two components: an independent performance assurance space for each stage and a correlation
performance assurance space between stages. It provides a detailed insight into the performance assurance
space in terms of the coverage provided, taking into account the capabilities and limitations of the tools and
methodologies used at each stage. An application of the taxonomy to cases described in the literature and
to high performance Intel architectures is shown. The proposed work should be of interest to manufacturers
of high performance microprocessor/chipset architectures; such a unified treatment has not previously been
discussed in the literature.
KEYWORDS
Taxonomy, high performance microprocessor, performance assurance, computer architecture, modeling
1. INTRODUCTION
Phases in the design of a high performance architecture include: generating ideas for performance
improvement, evaluation of those ideas, designing a micro-architecture to implement ideas, and
building silicon implementing key ideas. At each stage, potential performance improvements
need to be tested with high confidence. Three stages of developing a high performance
architecture correspond to three levels of abstraction for performance assurance: high level (HL)
performance, RTL performance, and silicon performance. Performance assurance consists of
performance analysis of key ideas at a high level, performance correlation of the implementation
of a micro-architecture of these ideas to high level analysis/expectations, and performance
measurement on the silicon implementing the micro-architecture. Examples of high performance
architectures include microprocessors, special purpose processors, memory controller and IO
controller chipsets, accelerators, etc.
A successful high performance architecture seeks major performance improvement over the previous
generation and over competitive products in the same era. Significant resources are applied in
developing methodologies that provide high confidence in meeting performance targets. A high
performance architecture may result in several products with different configurations, each of
which has a separate performance target. For example, a CPU core may be used in server,
desktop, and mobile products with different cache sizes, core/uncore frequencies, and numbers of memory
channels, sizes, and speeds. A performance assurance scheme should provide high confidence in the
performance of each product.
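To make this configuration space concrete, here is a minimal Python sketch of how derived products and their per-product performance targets might be represented; the segments, parameter values, and target numbers are hypothetical illustrations, not data for any actual product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductConfig:
    """One derived product of a single CPU core design (illustrative values only)."""
    segment: str        # e.g. "server", "desktop", "mobile"
    llc_mb: int         # last-level cache size in MB
    core_ghz: float     # core frequency
    uncore_ghz: float   # uncore frequency
    mem_channels: int   # number of memory channels
    perf_target: float  # per-product target score (hypothetical units)

# Hypothetical derived products sharing one core micro-architecture.
PRODUCTS = [
    ProductConfig("server",  30, 2.8, 2.2, 8, 1000.0),
    ProductConfig("desktop", 12, 4.0, 3.0, 2,  450.0),
    ProductConfig("mobile",   8, 2.4, 1.8, 2,  300.0),
]

def meets_target(cfg: ProductConfig, measured_score: float) -> bool:
    """Each product is assured against its own target, not a single global one."""
    return measured_score >= cfg.perf_target
```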
We propose a generalized taxonomy of performance assurance methods that has been successfully
deployed in delivering high performance architectures over several generations of CPUs and chipsets.
The proposed taxonomy is regular and designed to highlight key similarities and differences among
performance assurance methodologies. Such insight is not available in the existing literature.
2. BACKGROUND
Literature pertaining to performance-related taxonomies has focused on specific aspects of
performance evaluation, primarily on workloads and simulation methods or on application-specific
performance issues, for example, a taxonomy for imaging performance [1].
hardware supported measurement approaches and instrumentation for multi-processor
performance is considered in [2]. A taxonomy for test workload generation is considered in [3]
that covers aspects of valid test workload generation and [4] that considers process execution
characteristics. A proposal for a software performance taxonomy was discussed in [5]. Work on
performance simulation methods, their characteristics, and application is described in [6-9].
Another example describes specific aspects of pre-silicon performance verification of the
HubChip chipset [10]. Other related work focuses on performance verification techniques for
processors and SOCs, describing the specific methods used and experience gained from them [11-14].
The literature, while addressing specific aspects of performance verification, covers only part of
what is needed for complete performance assurance of a complex high performance architecture.
Significantly more effort goes into producing high performance architectures, and the goal of this
paper is to provide a complete picture of that effort in the form of a unified taxonomy, a picture
that cannot be assembled from the fragmented glimpses available in the literature. This paper
covers the key aspects of product life-cycle performance assurance methods and proposes a
taxonomy that encapsulates these methods in a high level framework. In a later section, we show
how the proposed taxonomy covers the performance verification methods described in the literature
as subsets and how it applies to real-world high performance architectures.
Section 3 provides the motivation for developing the taxonomy. Section 4 describes the proposed
taxonomy. Section 5 discusses examples of applying the proposed taxonomy. Section 6
concludes the paper.
3. MOTIVATION FOR THE PROPOSED TAXONOMY
Product performance assurance is not a new problem, and manufacturers of high performance
architectures have provided snapshots of subsets of the work done [10-14]. This paper unifies the key
methods employed in performance assurance from inception to the delivery of silicon. Such a
taxonomy is useful in the following ways:
a. It depicts how high confidence performance assurance is conducted for modern
microprocessors/chipsets based on experience over several generations of products.
b. It provides new insight into the total solution space of performance assurance methods
employed for real high performance chips and a common framework within which new
methods can be devised and understood.
c. It provides a rational basis for comparison of different methods employed and shows
similarities and differences between methods employed at each stage of performance
assurance.
d. It exposes the complexity, flexibility, and trade-offs involved in the total task and provides
a basis for identifying the adequacy of the performance assurance coverage obtained with
different solutions, as well as any potential gaps that can be filled to improve coverage.
e. It provides a framework for assessing risk to product performance relative to the initial
expectations set through planning or competitive assessment, and it provides a high level
framework for creating a detailed performance assurance execution plan.
Why is it important to look at a detailed framework for components of performance assurance?
To understand this, it is useful to go through the process of specification of performance
requirements and their evaluation through the product life cycle:
• Performance targets for a new architecture and its derived products are set via careful
planning for the time frame in which they will be introduced, to make them competitive.
• A set of high level ideas for reaching the performance targets is investigated via a high level
model, and a subset of these ideas is selected for implementation.
• A micro-architecture for implementing the selected ideas is designed, and an RTL (register
transfer level) model is created.
• Silicon implementing the RTL model is created and tested.
Performance evaluation is necessary at each stage to meet the set targets. The tools used for
performance analysis at each stage differ greatly in their capabilities, coverage, accuracy, and
speed. Table 1 shows how various attributes of performance assurance at each stage compare. A
high level performance model can be developed rapidly, can project performance for a modest
number and size of workloads, and allows stimulus to be injected and observed at fine granularity,
but it may not capture all micro-architecture details. Performance testing with an RTL model needs
longer development time, runs slowly, and can project performance only for a small set of workloads
over short durations, but it captures the details of the micro-architecture. Performance testing with
silicon can run the full set of workloads, captures all details of the micro-architecture, and provides
significant coverage of the performance space; however, the ability to inject stimulus and observe
results is limited.
The goals of performance testing at these stages also differ. For the high level model, the goal is
feature definition and initial performance projection to help reach the targets, and, at a later stage,
evaluation of the performance trade-offs of micro-architecture changes made relative to the initial
definition, to see whether performance is still acceptable. The goal of RTL performance testing is to
validate that the policies and algorithms specified by the high level feature definition are correctly
implemented, that key metrics correlate on a preselected set of tests, and that performance is
regularly regressed as the implementation changes. Silicon performance is what the customer of the
product sees; the goal here is to verify that the initial performance targets, which are published
externally, are met. Silicon measurement also provides key insights for developing the next
architecture via data measured using any programmable features and de-features in the chip.
Considering these differences in the capabilities and goals of performance assurance at each
stage, thinking of performance in a monolithic manner does not help one easily comprehend the
complete space needed to deliver high performance architectures. It is important to tackle
performance assurance at each stage of the development process with a clear understanding of the
goals, capabilities, and limitations; the proposed taxonomy addresses this scope and the gaps in
coverage.
Table 1. Comparison of attributes of performance testing with different abstraction levels.

| Attribute | HL Performance | RTL Performance | Silicon Performance |
|---|---|---|---|
| Development time | Low | Modest | High |
| Workload size and length | Modest | Short | Long |
| Stimulus injection granule | Fine | Fine | Coarse |
| Observation granule | Fine | Fine | Coarse |
| Result speed | Modest | Slow | Fast |
| Microarchitecture detail captured/tested (accuracy) | Low | High | High |
| Perf space coverage | Modest | Modest | High |
| Goal | High level arch partitions, pre-si feature defn, pre-si perf projection, implementation cost vs. perf tradeoff | Validate arch policies get implemented in RTL, maintain projected performance | Validate expected silicon performance from part, provide input for next generation arch, perf over competition or next process shrink |
4. A TAXONOMY FOR PERFORMANCE ASSURANCE
The total performance assurance space (PA) consists of a cross product of two spaces: independent
performance assurance space (IPA) and correlation performance assurance space (CPA). IPA marks
the space covered by independently testing each of the three abstraction levels, whereas CPA
marks the space covered by correlating performance between combinations of abstraction levels.
Examples of IPA space performance testing include: performance comparison with a feature on vs.
off, performance comparison with the previous generation, performance sensitivity to key
micro-architecture parameters (policies, pipeline latency, buffer sizes, bus width, speeds, etc.),
benchmark score projections, transaction flow visual defect analysis (pipeline bubbles),
idle/loaded latency and peak bandwidth measurements, multi-source traffic interference impact,
etc. CPA space correlates measurements done in one space to those done in another space with
comparable configurations on various metrics, to identify miscorrelations and gain confidence.
Coverage in both spaces is needed to get high confidence in performance. We discuss each level of
abstraction and propose a taxonomy consisting of the following four components.
Let us denote:
α as the high level performance space,
β as the RTL performance space,
γ as the silicon performance space, and
θ as the correlation performance assurance space (CPA) over the individual spaces (α, β, γ).
The taxonomy for the performance assurance space for high performance architectures, denoted PA,
is then given as:
PA ∈ { α X β X γ X θ } or {IPA X CPA} (1)
Where X denotes a Cartesian product of individual spaces. IPA is marked by { α X β X γ }.
4.1. High level performance assurance space α
Figure 1 depicts high level (HL) performance assurance space. Components of the space exploit
symmetry in providing coverage in all spaces to generate a regular taxonomy.
α ∈ { Analysis method (λ) X Stimulus (φ) X Component granularity (µ) X Traffic
source (η) X Metric (ρ) X Configuration (ξ) } (2)
Where:
Component granularity (µ) ∈ {Platform, full chip, cluster, combination} (3)
Analysis method (λ) ∈ {Analytical model, simulation model, emulation, combination} (4)
Stimulus (φ) ∈ {Complete workload/benchmark, samples of execution traces, synthetic/directed
workload, combination} (5)
Traffic source (η) ∈ {Single source, multiple sources} (6)
Metric (ρ) ∈ {Benchmark score, throughput/runtime, latency/bandwidth, meeting
area/power/complexity constraints, combination} (7)
Configuration (ξ) ∈ {Single configuration, multiple configurations} (8)
Figure 1: IPA-space of high level performance assurance
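To make the structure of equation (2) concrete, the space α can be enumerated mechanically as a
Cartesian product of the attribute sets above. The following Python sketch is illustrative only:
the set members are transcribed from equations (3)-(8), while the variable names are ours.

```python
# Illustrative sketch: enumerate the HL performance assurance space alpha
# of equation (2) as a Cartesian product of the sets in equations (3)-(8).
from itertools import product

ANALYSIS_METHOD = ["analytical model", "simulation model", "emulation", "combination"]
STIMULUS = ["complete workload/benchmark", "execution trace samples",
            "synthetic/directed workload", "combination"]
GRANULARITY = ["platform", "full chip", "cluster", "combination"]
TRAFFIC_SOURCE = ["single source", "multiple sources"]
METRIC = ["benchmark score", "throughput/runtime", "latency/bandwidth",
          "area/power/complexity constraints", "combination"]
CONFIGURATION = ["single configuration", "multiple configurations"]

# alpha = lambda x phi x mu x eta x rho x xi  (equation 2)
alpha = list(product(ANALYSIS_METHOD, STIMULUS, GRANULARITY,
                     TRAFFIC_SOURCE, METRIC, CONFIGURATION))
print(len(alpha))  # 4*4*4*2*5*2 = 1280 candidate combinations to prune
```

As noted below, not every tuple in this product is valid or equally important; the enumeration is
only a starting point for a performance assurance execution plan.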
High level performance analysis may be done using an analytical model, a simulation model,
emulation, or a combination of these methods. Analytical models are suitable for rapid high level
analysis of architectural partitions when the behavior and stimulus are well understood or can be
abstracted as such. Simulation modeling may be trace or execution driven and can incorporate
more details of the behavior, giving higher confidence in performance analysis under complex
behavior and irregular stimulus. Emulation is suitable when an emulation platform is available
and speed of execution is important. A behavioral high level simulation model may describe
different units at different abstraction levels (accuracy) and gets progressively more accurate
with respect to the implementation details as RTL is coded and correlated; the HL model serves
as a reference in later stages. A combination method can also be used, for example, a spreadsheet
model that combines an analytical model with input from a simulation model when it is too
expensive to simulate the underlying system with adequate accuracy and speed.
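As a concrete illustration of such a combination method, the sketch below pairs a closed-form
queueing-style estimate of loaded memory latency with a CPI figure of the kind a simulation model
would supply. All formulas, names, and numbers here are hypothetical placeholders, not a real
product model.

```python
# Illustrative spreadsheet-style analytical model (hypothetical numbers).

def loaded_latency_ns(idle_ns: float, utilization: float) -> float:
    """M/M/1-style approximation: latency grows as the channel saturates."""
    assert 0.0 <= utilization < 1.0
    return idle_ns / (1.0 - utilization)

def cpi_estimate(base_cpi: float, misses_per_instr: float,
                 miss_latency_cycles: float) -> float:
    """Classic CPI decomposition: base CPI plus memory stall contribution."""
    return base_cpi + misses_per_instr * miss_latency_cycles

freq_ghz = 3.0
lat_ns = loaded_latency_ns(idle_ns=60.0, utilization=0.7)   # -> 200 ns loaded
cpi = cpi_estimate(base_cpi=0.8, misses_per_instr=0.002,
                   miss_latency_cycles=lat_ns * freq_ghz)   # -> 0.8 + 1.2
print(f"loaded latency {lat_ns:.0f} ns, projected CPI {cpi:.2f}")
```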
We may choose to test the system at different levels of component granularity. It is possible to
test at platform level (where the device under test is a component of the user platform), at full
chip level (where the device under test is a chip implementing the high performance architecture,
for example, in a high volume manufacturing tester), at a large cluster level within the chip (for
example, the out of order execution unit or the last level cache in the uncore), or we may target
all of these depending on which pieces are critical for product performance. The test stimulus and
test environment for each component granularity may differ and need infrastructure support to
create comparable stimulus, configurations, etc. for performance correlation.
Stimulus may be provided in several forms depending on the device under test. We may use complete
workload execution on a high level model, short trace samples from execution of a workload (e.g.
running on a previous generation platform or a new arch simulator) driving a simulation model, or
synthetic/directed tests that exercise a specific performance feature or a cluster level latency
and bandwidth characterization. Synthetic stimulus may target, for example, idle or loaded
latencies, cache hit/miss and memory page hit/miss bandwidth, peak read/write interconnect
bandwidth (BW), etc. Synthetic stimulus can also be directed toward testing the performance of new
high risk features that may span the micro-architecture. Synthetic stimulus is targeted toward
testing a specific behavior and/or metric, whereas a real workload trace captures combinations of
micro-architecture conditions and flows that a synthetic behavior may not generate; both are
important for getting good coverage. Synthetic and real workload stimuli may converge if the
workload is a synthetic kernel and traces from its execution are used to drive a simulator;
however, in most cases the differentiation can be maintained. Stimulus may also be a combination
of these stimuli. The selected method depends on the speed of execution of the model and the
importance of the metric and workloads.
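The two synthetic-stimulus styles named above (latency and bandwidth micros) can be sketched as
follows. This is written in plain Python purely to show the shape of such tests; real micros are
written in C/assembly with careful control of caches and prefetchers, and the sizes here are
arbitrary.

```python
# Illustrative shapes of synthetic latency/bandwidth micros (not real tooling).
import array
import random
import time

def pointer_chase_ns(n=1 << 18, iters=1 << 18):
    """Dependent loads over one random cycle approximate idle load latency."""
    idx = list(range(n))
    random.shuffle(idx)
    nxt = array.array("l", [0] * n)
    for k in range(n):
        nxt[idx[k]] = idx[(k + 1) % n]  # a single cycle through the array
    i, t0 = 0, time.perf_counter()
    for _ in range(iters):
        i = nxt[i]                      # each load depends on the previous one
    return (time.perf_counter() - t0) / iters * 1e9

def stream_gbps(n=1 << 24):
    """A bulk sequential copy approximates streaming read bandwidth."""
    src = bytearray(n)
    t0 = time.perf_counter()
    dst = bytes(src)                    # one sequential read (and write) pass
    return len(dst) / (time.perf_counter() - t0) / 1e9

print(f"~{pointer_chase_ns():.0f} ns/access, ~{stream_gbps():.1f} GB/s")
```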
For traffic sources, depending on the device under test, we may test with a single traffic source
or a combination of traffic sources. Examples of a single traffic source are CPU multi-core
traffic, integrated graphics traffic, or IO traffic, which might be used to characterize core,
graphics, or IO performance with a new feature. We may use a combination of the above traffic
sources to find interesting micro-architecture performance bottlenecks, for example, buffer sizes,
forward progress mechanisms, and coherency conflict resolution mechanisms.
Various metrics are used in evaluation. If the benchmark can be run on the HL model/silicon, a
benchmark score is used. If components of the benchmark or short traces of workload execution are
used, throughput (CPI) or run time is used. If performance testing is targeted at a specific
cluster, we may use latency of access or bandwidth to the unit as a metric. For a performance
feature to be viable, it also needs to meet area, power, and complexity constraints in
implementation. The addition of a new feature may need a certain die area and incur leakage and
dynamic power that impact TDP (thermal design point) power and battery life. Based on the
performance gain from a new feature and its impact on area/power, a feature may or may not be
viable depending on the product level guidelines, and this needs to be evaluated during the HL and
RTL performance stages. Design/validation complexity of implementing the performance feature is a
key constraint for timely delivery. We may use a combination of these metrics depending on the
evaluation plan.
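A combined metric check of the kind described above might look like the following sketch. The
aggregation via geometric mean and the budget numbers are hypothetical illustrations of product
level guidelines, not actual ones.

```python
# Illustrative feature viability check combining performance gain with
# area/power constraints (all thresholds are hypothetical).
from math import prod

def geomean(xs):
    return prod(xs) ** (1.0 / len(xs))

def feature_viable(speedups, area_mm2, power_w,
                   area_budget_mm2=2.0, power_budget_w=1.5):
    """A feature must gain performance and fit area/power guidelines."""
    return (geomean(speedups) > 1.0
            and area_mm2 <= area_budget_mm2
            and power_w <= power_budget_w)

# Per-workload speedups from HL projections, plus implementation estimates.
print(feature_viable(speedups=[1.08, 1.02, 0.99], area_mm2=1.2, power_w=0.9))
```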
There may be more than one product configuration supported with a given architecture. Several
possibilities exist: complete performance testing on all configurations, a subset of the
performance testing on all configurations, or a subset of the performance testing on a subset of
configurations that differ in key ways, trading off effort against performance risk. The exact
configurations and the performance testing done with each depend on the context; the proposed
taxonomy differentiates how much testing is done for each. An example of multiple configurations
for a core/uncore is its use in several desktop, mobile, and server configurations that differ in
key attributes (cache size, number of cores, core/uncore frequency, DRAM speed/size/channels, PCI
lanes, etc.).
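The effort-versus-risk tradeoff among configurations can be mechanized in a simple way, for
example by running the complete suite only on one representative of each group of configurations
that share key attributes. The sketch below is illustrative; the attribute values are invented,
not a real product matrix.

```python
# Illustrative selection of representative configurations (invented values).
configs = [
    {"name": "desktop", "cores": 8,  "llc_mb": 16, "dram": "DDR4-3200"},
    {"name": "mobile",  "cores": 4,  "llc_mb": 8,  "dram": "LPDDR4-4266"},
    {"name": "server",  "cores": 28, "llc_mb": 38, "dram": "DDR4-2933"},
]

def key(cfg):
    # Configurations sharing these key attributes share one representative.
    return (cfg["cores"], cfg["dram"])

representatives = {key(c): c["name"] for c in configs}
print(sorted(representatives.values()))  # full suite runs only on these;
# the remaining configurations get the reduced subset of performance tests.
```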
Not all combinations generated in the HL space are valid, feasible, or equally important. For
example, although in principle one could specify an analytical model at platform granularity to
measure a benchmark score, creating such a model with the desired accuracy may not be feasible.
Performance testing with one configuration and traffic source may be more extensive than with
other combinations due to the significance attached to those tests. A performance architect will
specify the relevant components of the space that are deemed significant in a performance
assurance execution plan. We do not enumerate key combinations, as their significance differs
depending on the context.
4.2. RTL performance assurance space β
Figure 2 depicts the RTL performance assurance space.
β ∈ { Stimulus (φ) X Component granularity (µ) X Traffic source (η) X Metric (ρ) X
Configuration (ξ) } (9)
Where:
Stimulus (φ) ∈ {Samples of execution traces, synthetic/directed workload, combination} (10)
Component granularity (µ) ∈ {Full chip, cluster, combination} (11)
Traffic source (η) ∈ {Single source, multiple sources} (12)
Metric (ρ) ∈ {Throughput/runtime, latency/bandwidth, meeting area/power/complexity
constraints, combination} (13)
Configuration (ξ) ∈ {Single configuration, multiple configurations} (14)
Components of the RTL performance assurance space are symmetric with the high level components
except for the following key differences arising from differences in environments. Performance
testing is done on an RTL model that generally runs slowly since it captures micro-architecture
details. Running large benchmarks is thus generally hard without large compute capacity, and it is
best to use short workload test snippets or directed tests. The execution results may be visually
inspected or measured by running performance checker rules on result log files.
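A performance checker rule of this kind can be as simple as a script that scans result logs for
metric violations. The following sketch is ours: the log format, field names, and threshold are
invented for illustration, since real checkers are tied to a design's trace format.

```python
# Illustrative performance checker rule over an RTL result log
# (log format and threshold are invented).
import re

RULE = re.compile(r"txn=(\d+)\s+read_latency=(\d+)")  # hypothetical log line
MAX_READ_LATENCY_CYCLES = 120

def check_log(lines):
    """Flag transactions whose read latency exceeds the budget."""
    violations = []
    for line in lines:
        m = RULE.search(line)
        if m and int(m.group(2)) > MAX_READ_LATENCY_CYCLES:
            violations.append((int(m.group(1)), int(m.group(2))))
    return violations

sample_log = ["txn=17 read_latency=98", "txn=18 read_latency=240"]
print(check_log(sample_log))  # -> [(18, 240)]: a stall/bubble to triage
```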
Figure 2: IPA-space of RTL performance assurance
4.3. Silicon performance assurance space γ
Figure 3 depicts silicon performance assurance space.
γ ∈ { Stimulus (φ) X Component granularity (µ) X Traffic source (η) X Metric (ρ) X
Configuration (ξ) } (15)
Where:
Stimulus (φ) ∈ {Complete workload/benchmark, synthetic/directed workload, combination} (16)
Component granularity (µ) ∈ {Platform, full chip, combination} (17)
Traffic source (η) ∈ {Single source, multiple sources} (18)
Metric (ρ) ∈ {Benchmark score, throughput/runtime, latency/bandwidth, combination} (19)
Configuration (ξ) ∈ {Single configuration, multiple configurations} (20)
Components of silicon performance are symmetric to the other spaces, with notable differences
related to the accessibility/observability limitations noted earlier. Thus, for devices under
test, stimulus component granularity is limited to full chip/platform.
Figure 3: IPA-space of silicon performance assurance
4.4. Correlation performance assurance (CPA) space θ
Figure 4 shows the four components of CPA, using definitions symmetric to the IPA space:
Let τ denote the correlation space between RTL and high level performance,
Let ϖ denote the correlation space between high level and silicon performance,
Let ∂ denote the correlation space between RTL and silicon performance,
Let Ω denote the correlation space between HL, RTL, and silicon performance.
Then CPA θ is given as:
θ ∈ { τ X ϖ X ∂ X Ω } (21)
τ, ϖ, ∂, Ω ∈ { Stimulus (φ) X Component granularity (µ) X Traffic source (η) X Metric
(ρ) X Configuration (ξ) } (22)
For τ (HL-RTL):
Stimulus (φ) ∈ {Samples of execution traces, synthetic/directed workload, combination}
Component granularity (µ) ∈ {Full chip, cluster, combination}
Traffic source (η) ∈ {Single source, multiple sources}
Metric (ρ) ∈ {Throughput/runtime, latency/bandwidth, area/power/complexity constraint,
combination}
Configuration (ξ) ∈ {Single configuration, multiple configurations}
For ϖ (HL-silicon):
Stimulus (φ) ∈ {Complete workload/benchmark, synthetic/directed workload, combination}
Component granularity (µ) ∈ {Platform, full chip, combination}
Traffic source (η) ∈ {Single source, multiple sources}
Metric (ρ) ∈ {Benchmark score, throughput/runtime, latency/bandwidth, combination}
Configuration (ξ) ∈ {Single configuration, multiple configurations}
For ∂ (RTL-silicon):
Stimulus (φ) ∈ {Synthetic/directed workload}
Component granularity (µ) ∈ {Full chip}
Traffic source (η) ∈ {Single source, multiple sources}
Metric (ρ) ∈ {Throughput/runtime, latency/bandwidth, combination}
Configuration (ξ) ∈ {Single configuration, multiple configurations}
For Ω (HL-RTL-silicon):
Stimulus (φ) ∈ {Synthetic/directed workload}
Component granularity (µ) ∈ {Full chip}
Traffic source (η) ∈ {Single source, multiple sources}
Metric (ρ) ∈ {Throughput/runtime, latency/bandwidth, combination}
Configuration (ξ) ∈ {Single configuration, multiple configurations}
CPA space denotes the part of the total coverage obtained by correlating between IPA spaces using
comparable stimulus, metrics, traffic sources, components, and configurations. This coverage is
necessary because we are not able to test everything in the individual spaces due to the
limitations discussed earlier; the correlation space improves that coverage. In the CPA space,
high priority is placed on correlating the performance of the RTL model with the high level model.
The high level model runs fast enough to project benchmark level performance, and if the two
models correlate, the high level model serves as a good proxy for what we may expect for the RTL
benchmark level projection. The significance of each correlation space may differ. We have
discussed the individual components of each space earlier and their definitions are not repeated
here for brevity.
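Operationally, a CPA-style check reduces to comparing the same metric measured at two abstraction
levels under comparable stimulus and configuration, and flagging deviations beyond a tolerance.
The sketch below illustrates that idea only; the test names, CPI values, and the 5% tolerance are
invented.

```python
# Illustrative HL-vs-RTL correlation check (names, values, tolerance invented).
def miscorrelations(hl, rtl, tol=0.05):
    """Return tests whose RTL result deviates from the HL model by > tol."""
    return {t: (hl[t], rtl[t]) for t in hl.keys() & rtl.keys()
            if abs(rtl[t] - hl[t]) / hl[t] > tol}

hl_cpi  = {"trace_a": 1.10, "trace_b": 0.92, "bw_micro": 0.45}
rtl_cpi = {"trace_a": 1.12, "trace_b": 1.05, "bw_micro": 0.46}
print(miscorrelations(hl_cpi, rtl_cpi))
# -> {'trace_b': (0.92, 1.05)}: investigate the model, the RTL, or the test
```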
The PA taxonomy for high performance architectures provides a new way to look at the complete
performance assurance space, one that is easily understood and extended using a well defined and
regular set of criteria. The criteria used in defining the performance assurance space represent a
key set of issues that an architect would need to resolve while designing the solution. This does
not mean it includes every possible issue, as a taxonomy based on such an endeavor would be
unwieldy. The selected criteria are relevant to all abstraction levels, capture key issues that
need to be addressed, and allow any significant differences between the levels to be isolated. We
discuss application of this taxonomy in the next section.
Figure 4: Correlation performance assurance space (CPA)
5. APPLICATION AND CONSIDERATIONS
5.1. Solution Spaces and Coverage
Figure 5 shows that the proposed taxonomy partitions the total performance assurance space into
seven distinct spaces. The IPA is marked by spaces 1, 2, and 3. The CPA space is marked by spaces
4, 5, 6, and 7, which overlap the IPA spaces. Table 2 illustrates the high level characteristics
of each space and shows what areas they may cover. The table is meant to be illustrative, not an
exhaustive coverage of each space. For example, if synthetic/directed stimulus is missing from the
selected solution in all components, and only real workloads/traces are used as stimulus, there
may be a hole in testing the peak bandwidth of key micro-architecture components. If
synthetic/directed tests were present only in silicon performance, the testing gap may propagate
through HL and RTL performance until silicon and may be expensive to fix later. A similar
consideration applies to dropping testing of a high risk feature from one or more of the spaces
using synthetic/directed tests. In these cases, real workload traces may not find a performance
problem with the feature without explicit directed testing, resulting in a potential performance
coverage hole. Similar coverage comments apply to the CPA space in the table, depending on what
coverage is sought. For detailed gap/risk assessment, more details of each component of the
solution need to be specified in an assurance plan and the combinations reviewed over the PA
space: for example, the models needed for evaluation, the list of workloads, details of synthetic
tests targeting specific behaviors/features, details of clusters, traffic sources, and detailed
metrics and configurations.
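Because the taxonomy expresses each space as a product of finite sets, such a gap assessment can
be done mechanically by diffing the combinations an assurance plan specifies against the full
space. The sketch below illustrates the synthetic-stimulus example from this paragraph over a
simplified two-attribute slice; the plan contents are made up.

```python
# Illustrative coverage-gap diff over a simplified two-attribute slice
# of the PA space (plan contents are made up).
from itertools import product

SPACES = ("HL", "RTL", "silicon")
STIMULUS = ("workload/trace", "synthetic/directed")
full_space = set(product(SPACES, STIMULUS))

plan = {("HL", "workload/trace"), ("RTL", "workload/trace"),
        ("silicon", "workload/trace"), ("silicon", "synthetic/directed")}

for gap in sorted(full_space - plan):
    print("potential coverage hole:", gap)
# Synthetic/directed stimulus is missing in HL and RTL, so peak-BW and
# high-risk-feature defects may not be caught before silicon.
```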
Figure 5: Solution Spaces of Performance Assurance Methods (Venn diagram: Space 1 = HL
performance, Space 2 = silicon performance, Space 3 = RTL performance; Space 4 = HL∩RTL, Space 5 =
RTL∩silicon, Space 6 = HL∩silicon, Space 7 = intersection of all three)
Depending on a product's life stage and goals, coverage in all spaces may not be equally
important. For example, for a product design to deliver expected performance, covering space 4
(RTL performance correlation with the high level model) may be more important than covering space
5, which would test micro-architecture defeatures, hardware performance counters/events, etc.
Similarly, space 7 may be higher priority than space 5, and one could make coverage/effort
tradeoffs and prioritization that way.
Table 2: Example of coverage provided by each solution space

| Performance Validation Space | Coverage |
|---|---|
| Space 1 (IPA-α) | High level performance testing of architecture partitions and features, with less micro-architecture detail (more refined as the micro-architecture is defined); set product performance projections/expectations |
| Space 2 (IPA-γ) | Silicon performance: product performance projections published for various benchmarks with the silicon implementation, measuring and comparing performance with competitive products, tuning parameters to optimize performance (BIOS settings) |
| Space 3 (IPA-β) | RTL performance testing after functional coding at unit/cluster level with details of the micro-architecture implemented; transaction flow inspection for defects (bubbles) |
| Space 4 (CPA-τ) | Verify that RTL implements the algorithms specified in the architecture specification derived from high level analysis, by correlating with the HL model on short tests and snippets of workloads; validate and correlate changes in micro-architecture required by implementation complexity and their performance impact |
| Space 5 (CPA-∂) | Test/validate cases that have performance impact and need details of the micro-architecture not implemented in the HL model; examples: product defeatures, rare architecture/micro-architecture corner cases with short full chip tests, hardware performance counters and events, other performance observability hooks |
| Space 6 (CPA-ϖ) | Test full benchmark execution and correlate silicon performance to that projected with the high level model, to see if targets are met when the full implementation is considered; provides a method for correlating pre and post silicon measurements and validating pre-silicon methodologies; also useful for providing input for next generation CPUs with targeted studies of features and defeatures |
| Space 7 (CPA-Ω) | The intersection of all three methods, used to test performance pillars in all cases, for example running full chip micros/directed tests for key component latencies and bandwidths and for high risk features, which can be regularly tracked in a regression suite as the RTL and silicon steppings change |
We illustrate below the application of the taxonomy to performance verification described in the
literature and then show more complete examples of its application to specific high performance
Intel processors and MCH (memory controller hub) chipsets. These examples depict how performance
verification work in the literature can be described under the proposed framework and how the
taxonomy extends to testing with real chips.
5.2. Application Examples
Application of the taxonomy to work done in the literature reflects only the specific methods
discussed in these papers and does not imply that the products described were limited to the
testing shown here. We consider the examples discussed in [10-13]. In [10], Doering et al.
consider performance verification for the high performance PERCS Hub chip developed by IBM that
binds several cores and IO. This work largely relates to high level
(analytical (queueing) + simulation (OMNeT)) and VHDL RTL correlation for the chipset. In the
proposed taxonomy, the work described in the paper would be classified under the CPA space,
HL-RTL correlation (τ) branch, as follows:
HL-RTL Correlation {Stimulus=trace driven, Component granularity=full chip, Traffic
source=multiple, Metric=multiple (latency, throughput), Configuration=single}
In [11], Holt et al. describe system level performance verification of a multi-core SOC. Two
methods of performance verification are described in this paper: top down and bottom up
verification. Under the proposed taxonomy, the top down performance verification would be
described under the IPA HL performance assurance (α) branch, whereas the bottom up
verification would be described under the CPA HL-RTL correlation (τ) branch, as follows:
(Top down) IPA HL performance {Analysis method=emulation, Stimulus=synthetic,
Component granularity=full chip, Traffic source=multiple, Metric=combination (latency/BW,
throughput), Configuration=multiple}
(Bottom up) CPA HL-RTL correlation {Stimulus=synthetic, Component granularity=full chip,
Traffic source=multiple, Metric=combination (unloaded latency, throughput),
Configuration=single}
In [12, 13], Bose et al. describe architectural performance verification of IBM's PowerPC™
processors. Under the proposed taxonomy, the work described here would be included in the
CPA space, HL-RTL correlation (τ) branch, as follows:
HL-RTL Correlation {Stimulus=combination, Component granularity=full chip, Traffic
source=single (CPU core), Metric=combination (latency/BW, throughput), Configuration=
multiple (Power3, Power4)}
These examples show that performance verification work in the literature focuses on subsets of
the PA space and that there is no clear definition of the whole space. The proposed taxonomy
achieves two goals: it describes the total space and provides a consistent terminology to describe
parts of it. The classification above also shows high level similarities and differences among the
methods used in these cases.
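One practical consequence is that such classifications can be recorded as uniform coordinates and
compared mechanically. The sketch below is ours; the field values merely transcribe the
classifications given above.

```python
# Illustrative encoding of taxonomy classifications as uniform records.
from dataclasses import dataclass

@dataclass(frozen=True)
class Classification:
    branch: str            # e.g. "CPA: HL-RTL correlation"
    stimulus: str
    granularity: str
    traffic_source: str
    metric: str
    configuration: str

doering_10 = Classification("CPA: HL-RTL correlation", "trace driven",
                            "full chip", "multiple",
                            "multiple (latency, throughput)", "single")
bose_12_13 = Classification("CPA: HL-RTL correlation", "combination",
                            "full chip", "single (CPU core)",
                            "combination (latency/BW, throughput)",
                            "multiple (Power3, Power4)")
print(doering_10.branch == bose_12_13.branch)  # same branch, different coordinates
```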
Next, we show application of the proposed taxonomy to three examples in Table 3: an IA™ CPU core,
an MCH chipset, and a memory controller cluster. The first part of the table shows the IPA space
and the second part the CPA space. The taxonomy mapping for each example is illustrative; other
solutions are possible depending on the context.
For IPA HL core performance, a combination of an analytical model during early exploration and a
simulation model of the architecture is used. The stimulus is a combination of directed tests for
specific latency/BW characterization and real workload benchmark traces for high quality
coverage. The directed tests also cover new features introduced in the architecture. The testing
is done at a combination of cluster and full chip granularity. The single source of traffic is
IA™ core workloads/traces, and the measurement metric is a combination of benchmark score,
throughput/run time, and latency and BW of targeted units. Since the core is used in multiple
configurations (desktop, mobile, server), testing is done with multiple configurations. For core
RTL testing, similar considerations apply as for high level testing, except that the metrics are a
combination of run time/throughput and latency/bandwidth, and the stimulus contains a combination
of traces and synthetic workloads. For core silicon testing, the stimulus consists of a
combination of complete workloads and directed full chip tests, and the component granularity is
platform and full chip. Other considerations for metric and configurations with RTL and silicon
performance are comparable to those of HL performance.
For IPA MCH chipset performance testing (the chipset column), one significant difference is the
traffic source. The core has a single source of traffic; the MCH binds multiple sources, including
cores, IO, and graphics. Performance testing for the MCH is done with multiple sources of
transactions and a combination of metrics. If the MCH functionality is integrated into an uncore
or a SOC, it would have a comparable IPA scheme.
A memory controller (MC) is a cluster within the uncore or MCH, and its performance testing is
shown as the third example. It can also be left as a part of uncore cluster/MCH testing if that is
considered adequate. In this example, we consider the memory controller as a modular component
that may be used in more than one architecture and thus needs to be independently tested for high
confidence. High level IPA testing of a memory controller is done with a simulation model and
synthetic micros directed at the performance aspects of a memory controller, testing core timings,
turnarounds, latency, and BW under various read/write mixes and page hit/miss proportions. It can
be tested with multiple traffic sources and different memory configurations (number of ranks,
DIMMs, speeds, timings, etc.). For silicon testing, memory controller performance is tested with a
combination of synthetic workloads and benchmarks (e.g., STREAM).
The CPA space for all four components is shown in the second part of the table. For example, for
CPU core HL-RTL correlation, a combination of short real workload traces and synthetic workloads
is tested on the HL model and RTL at a combination of full chip and cluster granularity. The
workload source is an IA core, and a combination of the metrics throughput (for workload traces)
and latency/BW (for synthetic workloads) is used for correlation. This correlation is done on
multiple configurations. For HL-silicon correlation, a combination of full chip latency/BW micros
and benchmarks is run, and correlation is done on benchmark scores and latency/BW metrics. This
correlation also helps improve the HL model accuracy and provides a useful reference for
development of next generation processors. For the CPU core, RTL-silicon correlation is done on a
single configuration, whereas the other three correlations are done on multiple configurations.
This illustrates trading effort against coverage at low risk, since RTL-silicon correlation covers
cases that are uncommon from a performance perspective and gets adequate testing on a single
configuration. The HL-RTL-silicon correlation testing is done with targeted synthetic full chip
micros that test the core metrics key to product performance; the testing is done at full chip
with a combination of throughput and latency/BW metrics in multiple configurations. Similar
considerations apply to the chipset and memory controller CPA space.
Table 3: Example of application of taxonomy to real world examples (IPA space)

| IPA | Attribute | CPU core | Chipset | Memory Controller unit |
|---|---|---|---|---|
| High Level Testing | Analysis method | Combination | Simulation | Simulation |
| | Stimulus | Combination | Combination | Synthetic |
| | Component granularity | Combination | Combination | Cluster |
| | Traffic src | Single | Multiple (IA/IO/GFX) | Multiple |
| | Metric | Combination | Combination | Latency/BW |
| | Configs | Multiple | Multiple | Multiple |
| RTL | Stimulus | Combination | Combination | Synthetic |
| | Component granularity | Combination | Combination | Cluster |
| | Traffic src | Single | Multiple | Multiple |
| | Metric | Combination | Combination | Latency/BW |
| | Configs | Multiple | Multiple | Multiple |
| Silicon | Stimulus | Combination | Combination | Combination |
| | Component granularity | Combination | Combination | Platform |
| | Traffic src | Single | Multiple | Multiple |
| | Metric | Combination | Multiple | Latency/BW |
| | Configs | Multiple | Multiple | Multiple |
Table 3 (continued): CPA space

| CPA | Attribute | CPU core | Chipset | Memory Controller unit |
|---|---|---|---|---|
| HL-RTL | Stimulus | Combination | Synthetic | Synthetic |
| | Component granularity | Combination | Full chip | Cluster |
| | Traffic src | Single | Multiple (IA/IO/GFX) | Multiple |
| | Metric | Combination | Combination | Latency/BW |
| | Configs | Multiple | Single | Multiple |
| HL-Silicon | Stimulus | Combination | Synthetic | Synthetic |
| | Component granularity | Combination | Full chip | Full chip |
| | Traffic src | Single | Multiple | Multiple |
| | Metric | Combination | Combination | Latency/BW |
| | Configs | Multiple | Single | Multiple |
| RTL-Silicon | Stimulus | Synthetic | Synthetic | Synthetic |
| | Component granularity | Full chip | Full chip | Full chip |
| | Traffic src | Single | Single | Single |
| | Metric | Combination | Throughput/run time | Latency/BW |
| | Configs | Single | Single | Single |
| HL-RTL-Silicon | Stimulus | Synthetic | Synthetic | Synthetic |
| | Component granularity | Full chip | Full chip | Full chip |
| | Traffic src | Single | Multiple | Single |
| | Metric | Combination | Latency/BW | Latency/BW |
| | Configs | Multiple | Single | Multiple |
6. CONCLUSIONS
This paper presented a systematic approach to the complex problem of performance assurance of
high performance architectures manufactured in high volume, based on methods successfully
deployed over several generations of Intel cores/chipsets, unified in a single taxonomy. The
taxonomy considers performance assurance through three key stages of a product: high level
product performance, RTL performance, and silicon performance; this perspective has not been
discussed in the literature previously. The proposed taxonomy incorporates the capabilities and
limitations of the performance tools used at each stage and helps one construct a complete high
level picture of the performance testing that needs to be done at each stage. Applications of the
taxonomy to examples in the literature and to real world examples of a CPU core, MCH chipset, and
memory controller cluster were shown.
The key advantages of the proposed taxonomy are: it shows at a high level where the performance
assurance methods need to differ; it makes one think through all phases of a product, from high
level until silicon; and enumerating the taxonomy in a detailed performance assurance execution
plan identifies whether there are holes in the performance testing that either need to be filled
or whose concomitant risk must be appropriately assessed. The taxonomy helps with resource
planning and with mapping out and delivering a successful high performance product.
The proposed taxonomy has been successfully used in performance assurance of Intel's
Nehalem/Westmere CPUs and several generations of chipsets. This systematic approach has been
instrumental in identifying many pre-silicon performance issues early, and in catching corner
cases in silicon, due to the several cross checks embedded in the methodology. It has helped in
creating a rigorous performance assurance plan. The proposed work is new and should be of
interest to manufacturers of high performance architectures.
7. REFERENCES
[1] D. Williams, P. D. Burns & L. Scarff, (2009) "Imaging performance taxonomy", Proc. SPIE 7242,
724208, doi:10.1117/12.806236, 19 January 2009, San Jose, CA, USA.
[2] A. Mink, R. J. Carpenter, G. G. Nacht & J. W. Roberts, (1990) "Multiprocessor performance-
measurement instrumentation", Computer, Vol. 23, No. 9, pp. 63-75, doi:10.1109/2.58219.
[3] S. A. Mamrak & M. D. Abrams, (1979) "Special Feature: A Taxonomy for Valid Test Workload
Generation", Computer, Vol. 12, No. 12, pp. 60-65, doi:10.1109/MC.1979.1658577.
[4] R. L. Oliver & P. J. Teller, (1999) "Are all scientific workloads equal?", Proc. IEEE
International Performance, Computing and Communications Conference (IPCCC '99), pp. 284-290,
doi:10.1109/PCCC.1999.749450.
[5] M. Hesselgrave, (2002) "Panel: constructing a performance taxonomy", Proc. 3rd International
Workshop on Software and Performance (WOSP '02), July 2002.
[6] S. Mukherjee, S. Adve, T. Austin, J. Emer & P. Magnusson, (2002) "Performance simulation
tools", Computer, Vol. 35, No. 2, p. 38, Feb. 2002.
[7] S. Mukherjee, S. Reinhardt, B. Falsafi, M. Litzkow, S. Huss-Lederman, M. Hill, J. Larus & D.
Wood, (2000) "Fast and portable parallel architecture simulators: Wisconsin Wind Tunnel II",
IEEE Concurrency, Vol. 8, No. 4, pp. 12-20, Oct.-Dec. 2000.
[8] H. Kim & D. Yun, (2009) "Scalable and re-targetable simulation techniques for systems", Proc.
7th IEEE/ACM International Conference on Hardware/Software Codesign and System Synthesis
(CODES+ISSS '09), New York, 2009.
[9] J. C. Hoe, D. Burger, J. Emer, D. Chiou, R. Sendag & J. Yi, (2010) "The Future of
Architectural Simulation", IEEE Micro, Vol. 30, No. 3, pp. 8-18, doi:10.1109/MM.2010.56.
[10] A. Doering & H. Ineichen, (2011) "Visualization of Simulation Results for the PERCS Hub Chip
Performance Verification", Proc. 4th ICST Conference on Simulation Tools and Techniques
(SIMUTools 2011), March 21-25, Barcelona, Spain.
[11] J. Holt, J. Dastidar, D. Lindberg, J. Pape & P. Yang, (2009) "System-level Performance
Verification of Multicore Systems-on-Chip", Proc. 10th International Workshop on
Microprocessor Test and Verification (MTV 2009), pp. 83-87, doi:10.1109/MTV.2009.10.
[12] S. Surya, P. Bose & J. A. Abraham, (1994) "Architectural performance verification: PowerPC",
Proc. IEEE International Conference on Computer Design: VLSI in Computers and Processors
(ICCD '94), pp. 344-347, doi:10.1109/ICCD.1994.331922.
[13] P. Bose, (2001) "Ensuring dependable processor performance: an experience report on
pre-silicon performance validation", Proc. International Conference on Dependable Systems and
Networks (DSN 2001), pp. 481-486, doi:10.1109/DSN.2001.941432.
[14] K. Richter, M. Jersak & R. Ernst, (2003) "A formal approach to MpSoC performance
verification", Computer, Vol. 36, No. 4, pp. 60-67, doi:10.1109/MC.2003.1193230.
Authors
Hemant Rotithor received his M.S. and Ph.D. in Electrical and Computer Engineering
from IIT Bombay and the University of Kentucky. He taught at Worcester Polytechnic
Institute and worked at DEC on compiler performance analysis; he currently works at
Intel Corporation in Hillsboro, Oregon, in the microprocessor architecture group. At
Intel Corporation, he has worked on the performance of many generations of
microprocessors and chipsets. He has several patents issued in the areas of uncore
microarchitecture performance, memory scheduling, and power management. He has
published papers on performance analysis, distributed computing, and validation.