Maestro: GUI-based test automation and management
framework for Mixed Signal Validation
Hardik Patel, Kapil Dalal, Ying Xu
Mixed Signal Audio Validation Group
Cirrus Logic, Austin, USA
Hardik.Patel@cirrus.com, Kapil.Dalal@cirrus.com, Ying.Xu@cirrus.com
Abstract— The amplifier team of the validation group
implemented a new test automation and management framework
named Maestro. The main features of this approach are: a) Test
and project information stored in configuration files b) Graphical
User Interface (GUI) in MATLAB to process and display this
information c) Validation scripts underneath, in the form of
MATLAB functions and/or measurement classes. This approach
reduces testing time by about 66% while making the validation
infrastructure reusable across projects.
Other benefits include: abstraction of test development and
execution, standardization of testing methodologies across
projects, and a system for monitoring test execution and logging
measurement data. Principles of this approach and the significant
enhancements it offers are briefly discussed in this abstract.
Keywords - Mixed-Signal validation, test infrastructure,
MATLAB scripts, automation, GUI, measurement class
I. INTRODUCTION
The validation team of the Mixed Signal Audio Group
performs functional validation and characterization of high
performance mixed-signal analog converters, Class D power
amplifiers, and digital interface products. Both manual and
automated test procedures are used. The amplifier validation
team pursued an ongoing effort to achieve code reuse by
restructuring the software architecture of the test scripts and
standardizing their interfaces. Maestro, the resultant
framework, has three essential components:
a) Configuration files with project and test information
b) A graphical user interface (GUI) in MATLAB that
visually presents this information to the test operator
c) Wrappers underneath to process the test information,
interface with various endpoints (Device Under Test (DUT),
test equipment, validation platform (EEB), etc.), monitor and
update test status and log results to the database
The GUI as the front-end and the new software structure as
the back-end lead to significant improvements in the validation
workflow.
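To make component (a) concrete: project and test information can live in a plain configuration file that the GUI parses and the wrappers consume. Maestro's actual file format is not specified here, so the schema, section names, and field values below are hypothetical; the sketch uses Python's configparser purely for illustration (the framework itself is MATLAB).

```python
import configparser

# Hypothetical Maestro-style configuration: project/device information is
# kept separate from per-test conditions, so tests port across projects.
CONFIG_TEXT = """
[project]
name = AMP_EXAMPLE
dut = example_dut
platform = EEB

[test.thd_sweep]
class = ThdMeasurement
supply_v = 1.8, 3.3
gain_db = 0, 6, 12
"""

def load_config(text):
    """Parse configuration text into a dict of sections."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {section: dict(parser[section]) for section in parser.sections()}

cfg = load_config(CONFIG_TEXT)
print(cfg["project"]["name"])          # project info shown by the GUI
print(cfg["test.thd_sweep"]["class"])  # which measurement class to instantiate
```

Keeping this information outside the scripts is what lets the GUI display tests to the operator and lets the same scripts run unmodified on a new project.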
II. BACKGROUND AND METHODOLOGY
Any test script has three parts:
• Setup Section: This part of the script configures the
DUT and the equipment used for testing (power supplies,
oscilloscopes, measurement instruments) for the current
test.
• Perform Section: This part of the script executes tests
by controlling the test equipment and DUT, generates
test input patterns, takes measurements, and saves the
results. It is also responsible for monitoring the status
of tests under execution.
• Cleanup Section: This part of the script restores the test
equipment to its baseline state, whether the script exits
normally or unexpectedly.
The following sections briefly describe the conventional
validation approach and the significant improvements made by
using the Maestro code re-use model (a.k.a. new style).
Figure 1. Code Snippets representing the conventional old-style approach
and the Maestro Code Re-Use Solution
A. Conventional Validation Approach
In this approach, the tests were implemented in procedural
style. Each test had a mix of setup steps: some common across
all other tests, others very specific to the current measurement.
During data collection, test conditions were edited directly in
each test script, so multiple wrapper files were needed.
This approach had numerous drawbacks. First, tests with
different conditions could not be executed together; changing
which test script ran required modifying the wrapper file. The
common setups for each test had to be
repeatedly implemented. A single change in the common setup
necessitated the editing of multiple function files. In addition,
this process had the potential of introducing bugs into the test
script due to human oversight. Most importantly, it was not
easy to port existing tests from one project to another as device
information was a part of the test script. Also, common setups
were mixed together with project-specific setups. In the
absence of a clear abstraction of device information, the rework
cost and effort were prohibitive.

Figure 1 listings (conventional old style vs. Maestro new style):

function old_style()
    total_time = 0; s = 0; p = 0; cl = 0;
    for A = [0 1 2]
        for B = [4 5 6]
            for C = [7 8 9 10]
                % call measurement function here
                disp('Setup...');   s = s + 1;
                disp('Perform...'); p = p + 1;
                disp('Cleanup...'); cl = cl + 1;
            end
        end
    end
    total_time = s + p + cl;   % 36 + 36 + 36 = 108 units
end

function new_style(measobj, conds)
    total_time = 0; s = 0; p = 0; cl = 0;
    measobj.setup();           % common setup performed once
    s = 1;
    for A = conds.A            % [0 1 2]
        for B = conds.B        % [4 5 6]
            for C = conds.C    % [7 8 9 10]
                px = measobj.perform();
                p = p + 1;
            end
        end
    end
    measobj.cleanup();         % cleanup performed once
    cl = 1;
    total_time = s + p + cl;   % 1 + 36 + 1 = 38 units, about 1/3 of old runtime
end
The left block of Figure 1 shows a code snippet from a
script written in the former validation style. Setup, Perform and
Cleanup are each executed 36 times. Hence, the total time taken
(assuming each of the three sections takes the same time) is
108 units.
B. The Maestro Approach (Code Re-Use Solution)
The Code Re-Use Solution uses concepts of object-oriented
programming. Thus, the resulting test script has a modular
structure and a level of abstraction that provides a clearly
defined interface. This eases the processes of code modification
and reuse. Similar measurements have been grouped into a
generic and reusable class structure called measurement class.
Each test is an instantiation of a particular measurement class.
All classes have the same interface: setup(), perform(),
cleanup(). So each class looks alike and can be processed in the
same way.
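The uniform interface can be sketched as a small class hierarchy. Maestro's measurement classes are MATLAB classes; the sketch below uses Python for illustration, and the class names (Measurement, GainMeasurement) and the placeholder reading are invented, not taken from the framework.

```python
class Measurement:
    """Base measurement class: every test exposes the same interface."""
    def setup(self):
        raise NotImplementedError
    def perform(self):
        raise NotImplementedError
    def cleanup(self):
        pass  # default: nothing to restore

class GainMeasurement(Measurement):
    """Hypothetical concrete measurement; a test is one instantiation."""
    def __init__(self):
        self.configured = False
        self.results = []
    def setup(self):
        self.configured = True   # test-specific DUT/equipment configuration
    def perform(self):
        reading = 42.0           # placeholder for a real instrument reading
        self.results.append(reading)
        return reading
    def cleanup(self):
        self.configured = False  # restore baseline state
```

Because every class exposes the same three methods, a single sequencer or GUI handler can execute any test without knowing its specifics.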
DUT and measurement devices are completely abstracted
and decoupled from the measurement class. The advantage of
device abstraction is that common setup steps can be performed
only once at the beginning of the test execution, thus saving
time. Test-specific setups are performed by the measurement
class's m.setup(...) method. m.perform(...) performs the
measurements of the test and outputs the result. Every iteration
has the required set of actions to prepare for repeating the same
measurement using new conditions, thus allowing re-use of the
same code segment. m.cleanup(…) performs any necessary
cleanup steps.
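The execution pattern just described (setup once, perform once per condition tuple, cleanup once) can be sketched as a generic sequencer. The runner name, the stub class, and the counter dictionary are illustrative; only the call pattern mirrors the text, rendered in Python rather than the framework's MATLAB.

```python
import itertools

class StubMeasurement:
    """Minimal stand-in for a measurement class (illustrative only)."""
    def setup(self):
        pass
    def perform(self):
        return 0.0
    def cleanup(self):
        pass

def run_measurement(measobj, conditions):
    """Setup once, perform once per condition tuple, cleanup once."""
    counts = {"setup": 0, "perform": 0, "cleanup": 0}
    measobj.setup()
    counts["setup"] += 1
    # Iterate over the Cartesian product of all condition lists.
    for combo in itertools.product(*conditions.values()):
        measobj.perform()   # same code segment reused under new conditions
        counts["perform"] += 1
    measobj.cleanup()
    counts["cleanup"] += 1
    return counts

# The Figure 1 conditions: 3 * 3 * 4 = 36 perform calls, one setup, one cleanup.
counts = run_measurement(StubMeasurement(),
                         {"A": [0, 1, 2], "B": [4, 5, 6], "C": [7, 8, 9, 10]})
print(counts)  # {'setup': 1, 'perform': 36, 'cleanup': 1}
```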
The right block of Figure 1 shows a code snippet of a script
written in the new validation style. Herein, setup() and
cleanup() are executed only once while perform() is executed
36 times. Hence, the total time taken (assuming each of the
three sections takes the same time) is 38 units. This represents
a decrease of about 65% in code execution time.
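The unit counts follow directly from the loop structure, assuming each of the three sections costs one time unit:

```python
# Conditions from Figure 1: 3 values of A, 3 of B, 4 of C.
iterations = 3 * 3 * 4                   # 36 condition tuples

old_style_units = 3 * iterations         # setup+perform+cleanup on every tuple
new_style_units = 1 + iterations + 1     # setup and cleanup hoisted out of the loops

print(old_style_units, new_style_units)  # 108 38
reduction = (old_style_units - new_style_units) / old_style_units
print(round(100 * reduction, 1))         # 64.8 (% decrease)
```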
C. The GUI
The GUI has been implemented in MATLAB. It acts as the
front-end of the Code Re-Use Solution, providing an efficient
and intuitive one-stop interface for test development, control,
execution, monitoring, and logging of results (to a database
and/or a remote data server).
Figure 2. Maestro GUI Screengrab
III. RESULTS AND CONCLUSION
The software structure arising from the Code Re-Use and
object-oriented approach (as the back-end) coupled with the
GUI (as the front-end) offers a significantly more efficient and
easier way of validating mixed-signal products. A single
wrapper and sequencer file suffices for executing multiple
tests. In addition, tests with different conditions can be
executed together. Human error in the conduct of tests is
greatly reduced because editing test conditions is completely
decoupled from the MATLAB code. Validation technicians can
independently run developed scripts and collect data, freeing
validation engineers to focus on corner-case testing. This
significantly enhances validation quality and allows new
validation engineers to ramp up quickly.
Measurement classes have standardized interfaces, ensuring
that classes can be reused across projects. The concept of
configuration files allows porting of test conditions between
projects. It also enables recreating a particular test condition
and/or debugging in the lab for later inspection. DUT and
measurement instrument information is completely abstracted
from measurement classes. This allows standardization of
testing methods and their re-use across newer versions of the
IP.
Maestro also gives the user options for logging data to a
database, to a diary file, or both. It also facilitates easy
interfacing with equipment such as oscilloscopes and
temperature controllers. The structure makes it easy to adapt
and incorporate existing functions into Maestro.
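The logging options described can be sketched as a simple dispatcher. ResultLogger and its methods are hypothetical Python illustrations, not Maestro's API; a real implementation would write to an actual database and diary file rather than in-memory lists.

```python
class ResultLogger:
    """Log results to a database, a diary file, or both (illustrative stand-ins)."""
    def __init__(self, to_db=True, to_diary=True):
        self.to_db = to_db
        self.to_diary = to_diary
        self.db_rows = []      # stand-in for a database table
        self.diary_lines = []  # stand-in for a diary file on disk
    def log(self, test_name, value):
        if self.to_db:
            self.db_rows.append((test_name, value))
        if self.to_diary:
            self.diary_lines.append(f"{test_name}: {value}")

logger = ResultLogger(to_db=True, to_diary=True)
logger.log("thd_sweep", -92.5)
print(len(logger.db_rows), len(logger.diary_lines))  # 1 1
```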
The Code Re-Use method generally yields about a 66%
improvement in testing time over the former approach. Since
most IP does not change from device to device, the measurement
methodology remains largely the same. Mixed-signal validation
work is nevertheless demanding and subject to constant change;
this platform makes it far easier to adapt to changes in the
test flow. Given these advantages, Maestro has been
successfully used in the validation lab for more than two years
on amplifier and CODEC projects.
ACKNOWLEDGMENT
We take this opportunity to wholeheartedly thank our
managers, Mr. James Kandasamy and Mr. Randy Boudreaux
for enabling us and supporting us throughout the development
of Maestro. We also thank our validation team at Cirrus Logic
Inc. for enthusiastically using Maestro and offering valuable
suggestions.