This document describes an advanced verification methodology for complex system-on-chip (SoC) designs. Key aspects of the methodology include:
1. Using constructs like sequencers, virtual sequencers, configuration databases, and channels to enable code reuse and reduce time to market.
2. Developing a reusable testbench architecture with components like drivers, monitors, agents, and an environment class.
3. Implementing a coverage-driven verification approach using phases like build, connect, end-of-elaboration, and run, along with self-checking scoreboards (a structural sketch follows this list).
4. Applying the methodology to a communication SoC design achieved 95% code coverage, and the methodology took less time than alternatives such as system…
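Below is a minimal Python sketch of the phased, component-based structure this list describes (build/connect/run phases; driver, monitor, and scoreboard inside an environment). It illustrates the pattern only: production testbenches use SystemVerilog/UVM, and every class and method name here is a hypothetical stand-in.

```python
# Illustrative phased-testbench skeleton (hypothetical names throughout).
class Component:
    def build(self): pass          # construct child components
    def connect(self): pass        # hook up ports/channels
    def run(self, stimulus): pass  # the time-consuming phase

class Driver(Component):
    def run(self, stimulus):
        # Stand-in for driving DUT pins; the toy "DUT" just inverts bits.
        return [bit ^ 1 for bit in stimulus]

class Monitor(Component):
    def __init__(self):
        self.observed = []
    def run(self, dut_out):
        self.observed = list(dut_out)      # passively sample DUT outputs

class Scoreboard(Component):
    def check(self, expected, observed):
        assert expected == observed, "scoreboard mismatch"
        print(f"scoreboard: {len(observed)} transactions OK")

class Environment(Component):
    def build(self):
        self.driver, self.monitor, self.scoreboard = Driver(), Monitor(), Scoreboard()
    def run(self, stimulus):
        dut_out = self.driver.run(stimulus)
        self.monitor.run(dut_out)
        self.scoreboard.check([b ^ 1 for b in stimulus], self.monitor.observed)

env = Environment()
env.build(); env.connect(); env.run([0, 1, 1, 0])   # phases execute in order
```

The fixed phase order ensures every component is fully constructed (build) and wired (connect) before any stimulus flows (run), which is what makes the pieces independently reusable.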
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
QUALITY METRICS OF TEST SUITES IN TEST-DRIVEN DESIGNED APPLICATIONS ijseajournal
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we assess the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion, and 2) a fault-based criterion. We find that test suites with high branch coverage also have high mutation scores, especially in the case of TDD applications. We conclude that Test-Driven Development is an effective approach that improves the quality of the test suite, both covering more of the source code and revealing more faults.
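The fault-based criterion above is normally quantified as a mutation score; the abstract does not state the formula, but the standard definition, given here for context, is

\[
MS(P,T) \;=\; \frac{M_{\mathrm{killed}}}{M_{\mathrm{total}} - M_{\mathrm{equiv}}}
\]

where \(M_{\mathrm{killed}}\) is the number of mutants of program \(P\) detected (killed) by test suite \(T\), \(M_{\mathrm{total}}\) the number of mutants generated, and \(M_{\mathrm{equiv}}\) the mutants semantically equivalent to \(P\). The finding is that suites with high branch coverage tend to score high on this metric as well.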
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks IJECEIAES
More than 50% of software development effort is spent in the testing phase of a typical software development project. Test case design as well as execution consume a lot of time, so automated generation of test cases is highly desirable. Here a novel testing methodology is presented for testing object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented using a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing the redundancy in the test cases generated by the genetic algorithm. The neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
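As a rough illustration of the genetic-algorithm half of this approach, here is a minimal Python sketch. It assumes each test case covers a set of state-chart transitions and that redundancy reduction means selecting a small subset preserving transition coverage; the encoding, fitness function, and operators are illustrative choices, not the paper's.

```python
import random

# Each test covers a random set of 5 of 20 state-chart transitions (toy data).
TESTS = {f"t{i}": frozenset(random.sample(range(20), 5)) for i in range(30)}

def fitness(individual):
    # individual: tuple of 0/1 flags, one per test; reward coverage, penalize size.
    chosen = [cov for flag, cov in zip(individual, TESTS.values()) if flag]
    covered = set().union(*chosen) if chosen else set()
    return len(covered) * 10 - sum(individual)

def evolve(pop_size=40, generations=60, mutation_rate=0.02):
    pop = [tuple(random.randint(0, 1) for _ in TESTS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            child = tuple(g ^ (random.random() < mutation_rate) for g in child)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("selected tests:", [t for t, flag in zip(TESTS, best) if flag])
```

In the paper's pipeline, a backpropagation-trained neural network then prunes redundant cases from the generated suite; the sketch above shows only the GA selection step.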
UVM BASED REUSABLE VERIFICATION IP FOR WISHBONE COMPLIANT SPI MASTER CORE VLSICS Design
The System-on-Chip design industry relies heavily on functional verification to ensure that designs are bug-free. As design engineers come up with increasingly dense chips with ever more functionality, the functional verification field has advanced to provide modern verification techniques. In this paper, we present verification of a Wishbone-compliant Serial Peripheral Interface (SPI) Master core using a SystemVerilog-based standard verification methodology, the Universal Verification Methodology (UVM). The UVM factory pattern with parameterized classes is used to develop a robust and reusable verification IP. SPI is a full-duplex communication protocol used to interface components, typically in embedded systems. We have verified an SPI Master IP core design that is Wishbone-compliant and compatible with the SPI protocol and bus, and we furnish the results of our verification. We used QuestaSim for simulation and waveform analysis, and Cadence Integrated Metrics Center for coverage analysis. We also propose interesting future directions for this work in developing reliable systems.
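The reuse mechanism named here, the UVM factory with parameterized classes, lets a test substitute a derived component for a base one without touching the environment. The following Python sketch illustrates just that override pattern; real UVM code is SystemVerilog, and the Factory class and SPI driver names below are invented for illustration.

```python
# Toy factory with type overrides, mimicking the UVM factory pattern.
class Factory:
    _registry, _overrides = {}, {}

    @classmethod
    def register(cls, klass):
        cls._registry[klass.__name__] = klass
        return klass

    @classmethod
    def override(cls, original, replacement):
        cls._overrides[original] = replacement

    @classmethod
    def create(cls, name, **params):
        name = cls._overrides.get(name, name)
        return cls._registry[name](**params)     # parameterized construction

@Factory.register
class SpiDriver:
    def __init__(self, data_width=8):
        self.data_width = data_width

@Factory.register
class ErrorInjectingSpiDriver(SpiDriver):
    pass  # hypothetical test-specific variant that would inject protocol errors

# A test swaps the driver globally; the environment's create() call is unchanged.
Factory.override("SpiDriver", "ErrorInjectingSpiDriver")
driver = Factory.create("SpiDriver", data_width=16)
print(type(driver).__name__, driver.data_width)   # ErrorInjectingSpiDriver 16
```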
SRGM Analyzers Tool of SDLC for Software Improving Quality IJERA Editor
Software Reliability Growth Models (SRGM) have been developed to estimate software reliability measures such as the software failure rate, the number of remaining faults, and software reliability. In this paper, a software analyzer tool is proposed for deriving several software reliability growth models based on the Enhanced Non-Homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The proposed models are initially formulated for the case where there is no differentiation between the failure observation and fault removal testing processes, and are then extended to the case where there is a clear differentiation between the two. Many SRGMs have been developed to describe software failures as a random process and can be used to measure development status during testing. With SRGMs, software consultants can easily measure (or evaluate) software reliability (or quality) and plot software reliability growth charts.
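The abstract does not give the model equations; for context, a classical NHPP form on which ENHPP-style models build (an assumption here, not the paper's exact formulation) is the exponential mean value function:

\[
m(t) = a\left(1 - e^{-bt}\right), \qquad \lambda(t) = \frac{dm(t)}{dt} = ab\,e^{-bt}
\]

where \(m(t)\) is the expected number of faults detected by time \(t\), \(a\) the expected total fault content, \(b\) the per-fault detection rate, and \(\lambda(t)\) the failure intensity. Imperfect-debugging variants let \(a\) grow with time to model error generation.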
With the rise of agile development and the adoption of continuous integration, the software industry has seen increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses from 151 practitioners in 101 organizations across 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. The benchmark resulted in a self-assessment tool for practitioners, to be released under an open-source license; an alpha version is presented herein. The research underpinning the survey was conducted through the TESTOMAT project, a European project with 34 partners from 6 different countries.
(Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020)
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
AN EFFECTIVE VERIFICATION AND VALIDATION STRATEGY FOR SAFETY-CRITICAL EMBEDDE... IJSEA
This paper presents best practices for carrying out verification and validation (V&V) of a safety-critical embedded system that is part of a larger system-of-systems. The paper discusses the effectiveness of this strategy with respect to the performance and schedule requirements of a project. The best practices employed for the V&V are a modification of the conventional V&V approach. The proposed approach is iterative: it introduces new testing methodologies beyond the conventional ones, along with an effective way of implementing the V&V phases and analyzing the V&V results. The new testing methodologies include random and non-real-time testing in addition to the static and dynamic tests. The process phases are logically carried out in parallel, and credit from the results of the different phases is taken to ensure that the embedded system that goes to field testing is bug-free. The paper also demonstrates the iterative qualities of the process, where successive iterations find faults in the embedded system while keeping execution within a stipulated time frame, thus maintaining the required reliability of the system. This approach is implemented in one of the most critical application domains, aerospace, where the safety of the system cannot be compromised.
Determination of Software Release Instant of Three-Tier Client Server Softwar... Waqas Tariq
The quality of any software system depends mainly on how much time is spent on testing, the testing methodologies used, how complex the software is, the effort put in by software developers, and the testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost will also increase. Conversely, if testing time is too short, software cost can be reduced, provided customers accept the risk of buying unreliable software; however, this increases cost during the operational phase, since fixing an error there is more expensive than during testing. It is therefore essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper, we present a mechanism for deciding when to stop the testing process and release the software to end users, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software based on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a software product while achieving a predefined level of reliability and minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
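The abstract does not spell out the cost model; a classical release-time formulation of this kind (shown as context, not the paper's exact model) minimizes total expected cost over the release time \(T\):

\[
C(T) = c_1\, m(T) + c_2\,\bigl[m(T_{LC}) - m(T)\bigr] + c_3\, T
\]

where \(m(t)\) is an NHPP mean value function, \(c_1\) and \(c_2\) are the costs of fixing a fault during testing and during operation respectively (with \(c_2 > c_1\)), \(c_3\) is the testing cost per unit time, and \(T_{LC}\) is the software life-cycle length. The optimal release instant balances continued testing cost against operational-fix savings, subject to a reliability requirement such as \(R(x \mid T) \ge R_0\).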
A Unique Test Bench for Various System-on-a-Chip IJECEIAES
This paper discusses a standard flow for how an automated, constrained-random testbench environment can efficiently verify an SoC for functionality and coverage. Today, in the era of multimillion-gate ASICs, reusable intellectual property (IP), and system-on-a-chip (SoC) designs, verification consumes about 70% of the design effort. Automation means a machine completes a task autonomously, quicker and with predictable results; it requires standard processes with well-defined inputs and outputs. By using this efficient methodology it is possible to provide a general-purpose automation solution for verification with today's technology. Tools automating various portions of the verification process are being introduced. Here we consider a communication-based SoC, and the paper discusses the methodology used to verify such an SoC-based environment. Cadence Efficient Verification Methodology libraries are explored as a solution to this problem, which can be taken as a state-of-the-art approach to verifying SoC environments. The goal of this paper is to emphasize a unique testbench for different SoCs using efficient verification constructs implemented in SystemVerilog.
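To make "randomized with constraints" concrete, here is a minimal Python sketch of constrained-random transaction generation by rejection sampling. The transaction fields and constraints are invented for illustration; in practice this is written with SystemVerilog rand variables and constraint blocks.

```python
import random

def random_transaction():
    # Hypothetical bus transaction fields.
    return {
        "addr": random.randrange(0, 2**16),
        "burst_len": random.choice([1, 2, 4, 8, 16]),
        "kind": random.choice(["READ", "WRITE"]),
    }

def satisfies_constraints(tx):
    if tx["addr"] % 4 != 0:          # word-aligned addresses only
        return False
    if tx["kind"] == "WRITE" and tx["burst_len"] > 8:
        return False                 # cap write bursts at 8 beats
    return True

def constrained_random(n):
    out = []
    while len(out) < n:              # rejection sampling: keep legal draws
        tx = random_transaction()
        if satisfies_constraints(tx):
            out.append(tx)
    return out

for tx in constrained_random(5):
    print(tx)
```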
Software Testing and Quality Assurance Assignment 3 Gurpreet Singh
Short questions:
Que 1: Define Software Testing.
Que 2: What is risk identification?
Que 3: What is SCM?
Que 4: Define Debugging.
Que 5: Explain Configuration audit.
Que 6: Differentiate between white box testing & black box testing.
Que 7: What do you mean by metrics?
Que 8: What do you mean by version control?
Que 9: Explain Object Oriented Software Engineering.
Que 10: What are the advantages and disadvantages of manual testing tools?
Long Questions:
Que 1: What do you mean by baselines? Explain their importance.
Que 2: What do you mean by change control? Explain the various steps in detail.
Que 3: Explain various types of testing in detail.
Que 4: Differentiate between automated testing and manual testing.
Que 5: What is web engineering? Explain in detail its model and features.
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS IJCSEA Journal
Software testing is a process performed continuously by the development team during the life cycle of the software, with the aim of detecting faults as early as possible. Regression testing is the most suitable technique for this, in which a number of test cases are executed. As the number of test cases can be very large, it is preferable to prioritize them based on certain criteria. In this paper, a prioritization strategy is proposed that ranks test cases based on requirements analysis. With regression testing, if the requirements vary in the future, the software can be modified in a manner that does not affect its remaining parts. The proposed system improves the testing process and its efficiency with respect to quality, cost, effort, and user satisfaction; the results of the proposed method are evaluated with the help of a performance evaluation metric.
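A minimal Python sketch of one plausible reading of requirement-based prioritization (the weights, the coverage map, and the summed-weight scoring are assumptions for illustration, not the paper's exact scheme):

```python
# Hypothetical requirement weights (e.g., volatility x customer priority).
REQ_WEIGHT = {"R1": 0.9, "R2": 0.5, "R3": 0.2}

# Which requirements each regression test exercises (invented data).
TEST_COVERS = {
    "tc1": ["R1"],
    "tc2": ["R2", "R3"],
    "tc3": ["R1", "R3"],
}

def score(test):
    # A test is worth the summed weight of the requirements it covers.
    return sum(REQ_WEIGHT[r] for r in TEST_COVERS[test])

ordered = sorted(TEST_COVERS, key=score, reverse=True)
print(ordered)   # ['tc3', 'tc1', 'tc2'] -- run the highest-value tests first
```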
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods University of Antwerp
Fuzzing (or fuzz testing) is a technique to verify the robustness of a program under test: valid input is replaced by random values with the goal of forcing the program under test into unresponsive states. In this position paper, we propose a white-box fuzzing approach that transforms (mutates) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach to fuzzing reveals crashing issues in the Pharo environment.
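A minimal sketch of the idea, transplanted to Python for illustration (the paper mutates Pharo/Smalltalk test methods; the program under test, the seed test, and the mutation operator below are invented stand-ins):

```python
import random

def program_under_test(s: str) -> int:
    return len(s.encode("ascii"))      # crashes on non-ASCII input

def original_test():
    assert program_under_test("hello") == 5   # the existing, passing test

def mutated_inputs(seed, n=200):
    # Mutate the test's literal input: swap one character for a random one.
    for _ in range(n):
        chars = list(seed)
        chars[random.randrange(len(chars))] = chr(random.randrange(0x3000))
        yield "".join(chars)

original_test()                         # the seed test still passes
crashes = []
for candidate in mutated_inputs("hello"):
    try:
        program_under_test(candidate)
    except UnicodeEncodeError:          # a reproducible crash-inducing input
        crashes.append(candidate)

print(f"{len(crashes)} crash-inducing inputs found, e.g. {crashes[:1]!r}")
```

Because each crashing input is a concrete value, the failure can be replayed on demand, which is the reproducibility property the paper emphasizes.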
DESIGN OF DIGITAL PLL USING OPTIMIZED PHASE NOISE VCO VLSICS Design
The PLL plays a very important role in keeping the electronic world running properly, and designing a PLL with low phase noise and low jitter for clock generation is an important task. Clock signals are required to provide reference timing to electrical systems and to ICs, so in this paper a PLL is designed with improved phase noise and jitter. Such designs matter when sophisticated timing requirements must be met for clock synchronization and distribution, as in ADCs, DACs, high-speed networking, and medical imaging systems. A clock signal's quality depends upon its jitter and phase noise. An ideal clock source has zero phase noise and jitter, but in reality it carries some modulated phase noise, which spreads power to adjacent frequencies and hence produces noise sidebands. Phase noise is typically a frequency-domain quantity, expressed in dBc/Hz measured at an offset frequency with respect to the ideal clock frequency. Low phase noise is an important factor mainly in RF and ADC applications: in high-speed RF wireless applications, increased phase noise leads to channel-to-channel interference and degrades signal quality; in ADCs, increased phase noise limits the SNR and the data converter's effective number of bits (ENOB). Jitter is a time-domain meas…
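The dBc/Hz figure mentioned above has a standard definition (given here for context; the abstract itself stops short of the formula): single-sideband phase noise at offset \(\Delta f\) from the carrier is

\[
\mathcal{L}(\Delta f) \;=\; 10\log_{10}\!\left(\frac{P_{\text{sideband}}(\Delta f,\ 1\,\text{Hz})}{P_{\text{carrier}}}\right)\ \text{dBc/Hz}
\]

i.e., the noise power in a 1 Hz bandwidth at the offset frequency, relative to the carrier power.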
A charge recycling three phase dual rail pre charge logic based flip-flop VLSICS Design
Providing resistance against side-channel attacks, especially differential power analysis (DPA) attacks, which aim at disclosing the secret key of a cryptographic algorithm, is one of the biggest challenges for designers of cryptographic devices. In this paper, the design of a novel data flip-flop compatible with three-phase dual-rail pre-charge logic (TDPL), called the charge-recycling TDPL flip-flop, is investigated. The new flip-flop uses inverters that employ the charge-recycling technique, where charge stored on the high output node during the evaluation phase is used to partially charge the low output node in subsequent pre-charge phases. As a result, less charge is drawn from the power supply, lowering power consumption. Simulation results in a Cadence Virtuoso 45 nm CMOS process show a power improvement of up to 60% in the inverter, while the CRTDPL flip-flop consumes around 50% less power than the TDPL flip-flop.
Performance and analysis of ultra deep sub micron technology using complement... VLSICS Design
CMOS technology has attained remarkable progress, achieved largely through downsizing of the MOSFETs: device dimensions have historically been scaled by a factor of about 0.7 per generation in very-large-scale integration technology, making power and delay analysis a crucial design concern. In this paper we present a comparative study of the delay, average power, and leakage power of a CMOS inverter in the ultra-deep-submicron (UDSM) technology range, showing how delay, average power, and leakage power vary as the technology shrinks. The simulation results are taken at 45 nm in the UDSM range with the help of the Cadence tool, and the effects of load capacitance, transistor width, and supply voltage on the average power and delay of the 45 nm CMOS inverter are analyzed. The analysis is thus carried out with the aim of observing the variation in delay and power with transistor width in a UDSM CMOS inverter, with the variation with load capacitance and supply voltage studied as well.
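First-order expressions (standard CMOS relations, supplied here as context rather than taken from the paper) connect the swept parameters to the measured quantities:

\[
P_{\text{dyn}} = \alpha\, C_L\, V_{DD}^{2}\, f, \qquad
t_{d} \;\propto\; \frac{C_L\, V_{DD}}{\left(V_{DD} - V_{th}\right)^{\gamma}}
\]

where \(\alpha\) is the switching activity, \(C_L\) the load capacitance, \(V_{DD}\) the supply voltage, \(f\) the clock frequency, \(V_{th}\) the threshold voltage, and \(\gamma \approx 1\text{–}2\) a technology-dependent exponent (the alpha-power law). These explain why increasing \(C_L\) or \(V_{DD}\) raises dynamic power, while delay falls with \(V_{DD}\) and rises with \(C_L\).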
Performance analysis of gated ring oscillator designed for audio frequency ra... VLSICS Design
This paper presents a performance analysis of a Gated Ring Oscillator (GRO). The proposed GRO is designed for use in the Time-to-Digital Converter (TDC) block of an asynchronous ADC. For an audio-frequency-range ADC, a minimum number of GRO stages is designed using an asynchronous technique, which leads to reduced area and power. In contrast to a conventional ring oscillator (RO), we avoid employing a gated clock, evading clock-design problems such as jitter and additional area and power; instead we gate the ring oscillator itself. Consequently, during sleep mode the GRO disables automatically, saving dynamic power. Furthermore, it provides first-order noise shaping of the quantization and mismatch noise. The proposed GRO is implemented in a 0.18 μm CMOS digital technology in the Cadence Virtuoso environment. Performance analysis shows an oscillation frequency of 286 kHz with 327 ps jitter and an average power consumption of 1.08 μW.
A low power front end analog multiplexing unit for 12 lead ECG signal acquisi... VLSICS Design
The design of CMOS analog circuitry for acquiring a 12-lead ECG is presented. Existing methods employ separate multiplexers and associated circuitry for signal acquisition, operating at a typical voltage of ±5 V. The proposed system employs dynamic threshold logic to achieve low power, wide dynamic range, and good linearity with a supply voltage of 0.4 V; the power dissipation obtained was 22.12 μW. Utilizing dynamic threshold logic, the proposed circuitry is implemented in a 0.18 μm CMOS technology. This ECG signal processor is highly suitable for wearable applications in long-term cardiac monitoring.
Design and implementation of 4T, 3T and 3T1D DRAM cell design on 32 nm techn... VLSICS Design
In this paper, the average power consumption, write access time, read access time, and retention time of DRAM cell designs are analyzed for nanometer-scale memories. Many modern processors use DRAM cells for on-chip data and program memory storage. The major power drain in DRAM is the off-state leakage current, so improving the power efficiency of a DRAM cell is critical to improving the average power consumption of the overall system. 3T, 4T, and 3T1D DRAM cells are designed at the schematic level and their average power consumption compared using the Tanner EDA tool. The average power consumption, write access time, read access time, and retention time of the 4T, 3T, and 3T1D DRAM cells are simulated and compared on a 32 nm technology.
A 20 Gb/s injection locked clock and data recovery circuit VLSICS Design
This paper presents a 20 Gb/s injection-locked clock and data recovery (CDR) circuit for burst-mode applications. Utilizing a half-rate injection-locked oscillator (ILO) in the proposed CDR circuit leads to higher-speed operation and lower power consumption. In addition, to accommodate process, voltage, and temperature (PVT) variations and to increase the lock range, a frequency-locked loop is used in this circuit. The circuit is designed in 0.18 μm CMOS, and simulations for a 2⁷−1 pseudo-random bit sequence (PRBS) show that the circuit consumes 55.3 mW at 20 Gb/s, while the recovered clock's RMS jitter is 1.1 ps.
Comparative Design of Regular Structured Modified Booth MultiplierVLSICS Design
Multiplication is a crucial function and plays a vital role in practically any DSP system. Several DSP algorithms require different types of multiplication, notably the modified Booth multiplication algorithm. In this paper, a simple approach is proposed for generating the last partial-product row that eliminates the extra sign (negative) bit, achieving a more regular structure. Compared to conventional multipliers, the proposed modified Booth multipliers achieve reductions of 5.9% in area, 3.2% in power, and 0.5% in delay for 8×8 multipliers; for 16×16 multipliers, the improvements in area, power, and delay are 4.0%, 2.3%, and 0.3%, respectively. These multipliers are implemented in Verilog HDL and synthesized using the Synopsys Design Compiler with an Artisan TSMC 90 nm technology.
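For context, the standard radix-4 recoding on which modified Booth multipliers rest (a textbook relation, not a detail taken from this abstract): a two's-complement multiplier \(Y\) of width \(n\) is recoded into \(n/2\) signed digits

\[
Y = \sum_{i=0}^{n/2-1} d_i\, 4^{i}, \qquad d_i = -2y_{2i+1} + y_{2i} + y_{2i-1} \in \{-2,-1,0,1,2\},\quad y_{-1}=0
\]

halving the number of partial products; the negative digits are what introduce the extra sign bits whose handling in the last partial-product row the paper regularizes.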
FPGA based efficient multiplier for image processing applications using recur... VLSICS Design
Digital image processing applications such as medical imaging, satellite imaging, and biometric-trait imaging rely on multipliers to improve image quality. However, existing multiplication techniques introduce errors in the output and consume more time, so error-free, high-speed multipliers have to be designed. In this paper we propose an FPGA-based Recursive Error-Free Mitchell Log Multiplier (REFMLM) for image filters. A 2×2 error-free Mitchell log multiplier is designed with zero error by introducing an error-correction term, and is used in higher-order Karatsuba-Ofman multiplier (KOM) architectures. The higher-order KOM multipliers are decomposed radix-2 into lower-order multipliers, down to the basic 2×2 block, which is realized by the error-free Mitchell log multiplier. The 8×8 REFMLM is tested with a Gaussian filter for removing noise from a fingerprint image. The multiplier is synthesized for a Spartan-3 FPGA family device, XC3S1500-5fg320. It is observed that performance parameters such as area utilization, speed, error, and PSNR are better for the proposed architecture than for existing architectures.
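For context, Mitchell's logarithm approximation, the classical basis of log multipliers (stated here from the literature rather than from this abstract): writing an operand as \(N = 2^{k}(1+x)\) with \(0 \le x < 1\),

\[
\log_2 N = k + \log_2(1+x) \approx k + x,
\]

so a product is approximated by \(A\cdot B \approx 2^{\,k_1+k_2}(1 + x_1 + x_2)\) when \(x_1 + x_2 < 1\) (with a carry adjustment otherwise). The error from dropping \(\log_2(1+x) - x\) is what the paper's error-correction term cancels in the 2×2 base multiplier.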
A Survey of functional verification techniques IJSRD
In this paper, we present a survey of various techniques used in the functional verification of industrial hardware designs. Although the use of formal verification techniques has been increasing over time, there is still a need for an immediately practical solution, resulting in increased interest in hybrid verification techniques. Hybrid techniques combine formal and informal (traditional simulation-based) techniques to take advantage of both worlds. A typical hybrid technique aims to address the verification bottleneck by enhancing state-space coverage.
The SystemVerilog UVM promises to improve verification productivity while enabling teams to share tests and testbenches between projects and divisions. This promise can be achieved: the UVM is a powerful methodology for using constrained randomization to reach higher functional-coverage goals and to explore combinations of tests without having to write each one individually. Unfortunately, the UVM promise can be hard to reach without training, practice, and significant expertise. Verification is one of the most important activities in the ASIC/VLSI design flow, consuming a large share of the design cycle and effort to ensure a design is bug-free; hence there is a pressing need for a powerful, reusable verification methodology.
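To illustrate "constrained randomization to reach higher functional coverage goals", here is a toy Python sketch of a coverage-closure loop over invented coverage bins. Real flows use SystemVerilog covergroups under the UVM; every name and bin below is hypothetical.

```python
import random

# Cross-coverage bins: transaction kind x burst length (invented model).
BINS = {(kind, bl) for kind in ("READ", "WRITE") for bl in (1, 2, 4, 8)}
hits = {b: 0 for b in BINS}

transactions = 0
while min(hits.values()) == 0:          # keep generating until every bin is hit
    tx = (random.choice(["READ", "WRITE"]), random.choice([1, 2, 4, 8]))
    hits[tx] += 1
    transactions += 1

covered = sum(1 for v in hits.values() if v > 0) / len(BINS)
print(f"functional coverage {covered:.0%} after {transactions} transactions")
```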
A TAXONOMY OF PERFORMANCE ASSURANCE METHODOLOGIES AND ITS APPLICATION IN HIGH... IJSEA
This paper presents a systematic approach to the complex problem of high-confidence performance assurance for high-performance architectures, based on methods used over several generations of industrial microprocessors. A taxonomy is presented for performance assurance through three key stages of a product's life cycle: high-level performance, RTL performance, and silicon performance. The proposed taxonomy includes two components: an independent performance assurance space for each stage, and a correlation performance assurance space between stages.
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge... Soham Mondal
Identified a huge error count and US$1.7M in excess expense in product engineering and product development. Spearheaded, from scratch, the product roadmap and the end-to-end engineering and deployment of a custom novel software tool for automatic creation of error-free verification infrastructure for a customizable network interconnect, across 6 global teams; saved 70+ man-hours per integration and testing cycle and reduced time-to-first-test by 60%, resulting in estimated annual savings of US$4.5M in purchased product licenses and a 100% reduction in error count in the engineering process. Enabled a 4-member cross-cultural global team in Seoul for 6+ months during the E2E auto-testbench product's adoption, prototype testing, and life cycle. Conducted 120+ user interviews, market analyses, and customer research sessions to define key product requirements for new features, resulting in 100% user adoption and an 80% increase in user satisfaction. Received an appreciation award from the VP of Engineering, Samsung Memory Solutions.
Disclaimer: The slides presented here are a minimised version of the actual detailed content/implementation/publication presented to the stakeholders.
If the originals are needed, they will be provided based on mutual agreement.
(All Rights Reserved)
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are still at an early stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Advanced Verification Methodology for Complex System on Chip Verification
International Journal of VLSI design & Communication Systems (VLSICS) Vol.6, No.6, December 2015
DOI: 10.5121/vlsic.2015.6602
ADVANCED VERIFICATION METHODOLOGY FOR COMPLEX SYSTEM ON CHIP VERIFICATION
G. Renuka¹, V. Ushashree² and P. Chandrasekhar Reddy³
¹Department of Electronics & Communication Engg., SREC, Warangal, Telangana, India.
²Department of Electronics & Communication Engg., JBIET, Hyderabad, Telangana, India.
³Department of Electronics & Communication Engg., JNTU, Hyderabad, Telangana, India.
ABSTRACT
Verification remains the most significant challenge in getting advanced SoC devices to market, and the most important challenge to be solved in the semiconductor industry is the growing complexity of SoCs. Industry experts estimate that verification accounts for almost 70% to 75% of the overall design effort. A verification language alone cannot increase verification productivity; it must be accompanied by a methodology that facilitates reuse to the maximum extent under different design IP configurations. This advanced reusable test bench development decreases the time to market for a chip. It enables code reuse, so that the same code used at the sub-block level can also be used at the block level and the top level, which helps save the cost of a chip tape-out and achieve a faster time to market.
KEYWORDS
Advanced Verification Methodology, Verification Simulation Software, Test Bench.
1. INTRODUCTION
The complexity of chips has increased in recent years, and the integration of more components into a single SoC makes the verification of any SoC design very critical. A proper verification methodology is needed for any SoC or IP, and object-oriented programming (OOP) concepts in verification make this easier. In this paper, the problems of code reusability, faster time to market, and flexibility are resolved by developing the test bench environment with an advanced reusable verification methodology. Lower energy consumption, reusability, better performance, and shorter simulation time were the targets achieved using this advanced methodology.
A test bench is an environment used to verify the correctness of a model as well as of a design. It also provides various functions, including creating and applying stimulus and verifying the correctness of interfaces and responses. Developing a test bench environment is the most time-consuming task for an advanced verification team.
A reference model of the design is the key component enabling early development of the verification environment, without waiting for the RTL to be available. A UVM-based early verification environment is developed with both a Host interface and a Core interface, using the Virtual Register Interface (VRI) approach. Testing the features of the verification environment at the TLM abstraction level runs faster and thus speeds up functional verification overall. The same environment can be reused from IP level to SoC level, or from one SoC to another, with no or minimal change.
3. VERIFICATION METHODOLOGY
This verification methodology provides an appropriate framework to attain coverage-driven verification (CDV). CDV combines self-checking test benches, automatic test generation, and coverage metrics to appreciably reduce the time spent verifying a design. The purpose of CDV is to ensure that thorough verification is done using up-front goal setting, eliminating the effort and time spent creating hundreds of directed tests.
It also provides early notification of errors and deploys error analysis to simplify debugging and runtime checking. The traditional directed-testing flow differs from the CDV flow. In CDV, verification goals are set using a controlled planning process. Then a test bench that generates and sends legal stimuli to the DUT is created. Coverage monitors are added to the environment to measure progress and identify unexercised functionality. Checkers are added to identify undesired DUT behaviour. Simulations are carried out once both the test bench and the coverage model are implemented. Using CDV, thorough verification of the design is achieved by changing the randomization seed or the test bench parameters. Test constraints are layered on top of the infrastructure so that the verification goals can be reached quickly. A minimal sketch of this loop is shown below.
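The CDV loop just described can be illustrated with a small, self-contained SystemVerilog sketch; the packet fields, bins, and iteration count below are hypothetical and purely illustrative, not taken from the paper. A constrained-random transaction is repeatedly randomized while a covergroup plays the role of the coverage monitor that measures progress toward the verification goals.

class pack_item;
  rand bit [7:0] da;      // destination address field (illustrative)
  rand bit [7:0] length;  // payload length field (illustrative)
  constraint c_len { length inside {[1:64]}; }  // generate legal stimulus only
endclass

module cdv_demo;
  pack_item item = new();
  bit [7:0] da_s, len_s;  // sampled copies of the randomized fields

  // Coverage monitor: records which address/length regions have been exercised.
  covergroup pack_cg;
    cp_da  : coverpoint da_s  { bins lo = {[0:127]};     bins hi = {[128:255]}; }
    cp_len : coverpoint len_s { bins short_p = {[1:16]}; bins long_p = {[17:64]}; }
  endgroup
  pack_cg cg = new();

  initial begin
    repeat (100) begin              // rerun with a different seed to close coverage
      void'(item.randomize());
      da_s  = item.da;
      len_s = item.length;
      cg.sample();
    end
    $display("functional coverage = %0.2f%%", cg.get_coverage());
  end
endmodule

Changing the simulation seed or the constraint parameters, as the methodology prescribes, drives this same loop toward unexercised bins without writing new directed tests.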
Some of the important constructs used in the Advanced Verification Methodology are as follows:
1. Sequencer:
   vm_sequencer #(s_item) sequencer;
2. Virtual sequencer
3. TLM ports/FIFO:
   e.g., ports expose a set of methods such as get(), put() and peek()
4. Callback:
   class Driver_callback extends vm_callback;
   endclass : Driver_callback
5. Configuration database:
   vm_config_db #(T)::set, vm_config_db #(T)::get
6. Channel:
   vm_channel(vm_data)
7. Atomic generator:
   vm_atomic_gen(Pack, "Atomic Pack Generator")
These important constructs help in code reuse and reduce the time to market for a chip; a configuration-database sketch follows.
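As a hedged illustration of the configuration-database construct, the fragments below show the set/get pair written against standard UVM, on the assumption that the paper's vm_config_db mirrors uvm_config_db; the dut_if interface and the "env.agent" path are hypothetical names introduced only for this sketch.

// In the test's build phase, publish the DUT's virtual interface down the hierarchy:
uvm_config_db #(virtual dut_if)::set(this, "env.agent.*", "vif", vif);

// In the agent or driver build phase, retrieve it and fail loudly if absent:
if (!uvm_config_db #(virtual dut_if)::get(this, "", "vif", vif))
  `uvm_fatal("NOVIF", "virtual interface not found in the config db")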
Figure 1 shows the test bench architecture for this advanced verification methodology.
Fig. 1. Verification methodology Test Bench setup
The following are the different Verification Components used in testbench development for the
SoC:
1. Data Item (Transaction)
2. Driver (BFM)
3. Sequencer
4. Monitor
5. Agent
6. Environment
1. Data Item
The inputs to the DUT are data items; examples include instructions and bus transactions. The data item's specification defines the attributes and fields of the data item. Generally, many data items are generated and sent to the design under test by intelligently randomizing the data item fields using SystemVerilog constraints, which yields a larger number of tests and helps maximize coverage.
class Pack extends vm_transaction;
  // randomizable packet fields
  rand bit [7:0] ra;
  rand bit [7:0] da;
  rand bit [7:0] data[];   // payload bytes
  rand bit [7:0] length;   // payload length
  rand byte fc;
endclass
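As an illustration of the constrained randomization just described, the fragment below repeatedly randomizes a Pack with an inline constraint. This is a sketch only: the bound of 16 and the tie between data.size() and length are hypothetical choices, not taken from the paper.

initial begin
  Pack p = new();
  repeat (10) begin
    // inline constraint narrows the legal space for this particular test
    if (!p.randomize() with { length inside {[1:16]}; data.size() == length; })
      $error("Pack randomization failed");
    // hand p to the sequencer/driver here
  end
end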
2. Driver (BFM)
A driver is an active entity that emulates the logic driving the DUT. The driver repeatedly receives data items, samples them, and drives them to the DUT. (If a verification environment already exists, its driver functionality can be reused.) For example, a driver controls the data bus, the address bus, and the read/write signal to perform a write transfer.
class Driver;
  // virtual interface for the driver
  // object for collecting data from the generator and transferring it to the scoreboard
  // mailbox from generator to driver
  // mailbox from driver to scoreboard
  event drive_done;

  function new ();
    // take the virtual interface, the mailboxes and the event in the argument list
    // connect the arguments to the corresponding class variables using the "this" operator
  endfunction
endclass
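For comparison, here is a runnable sketch of the driver's main loop written against standard UVM, on the assumption that the paper's vm_ base classes mirror the uvm_ ones and that Pack derives from uvm_sequence_item; the dut_if interface and its signal names are hypothetical.

import uvm_pkg::*;
`include "uvm_macros.svh"

class pack_driver extends uvm_driver #(Pack);
  `uvm_component_utils(pack_driver)
  virtual dut_if vif;   // hypothetical DUT interface

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // pull the next data item
      vif.da  <= req.da;                 // drive the fields onto the bus
      vif.len <= req.length;
      @(posedge vif.clk);
      seq_item_port.item_done();         // hand control back to the sequencer
    end
  endtask
endclass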
3. Sequencer
A sequencer is an advanced stimulus generator that controls the items provided to the driver for execution. By default, a sequencer behaves like a simple stimulus generator and returns a random data item on request from the driver. This default behaviour allows constraints to be added in the data item class to control the distribution of randomized values. Generators are used to randomize arrays of transactions, whereas a sequencer is used to capture important randomization requirements.
class instruct_sequencer extends vm_sequencer #(instruction);
  function new (string name, vm_component parent);
    super.new(name, parent);
    `vm_update_sequence_lib_and_item(instruction)
  endfunction
  `vm_sequencer_utils(instruct_sequencer)
endclass
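The sequences that such a sequencer executes are not listed in the paper; below is a hedged UVM-style sketch of one, assuming vm_sequence mirrors uvm_sequence and that Pack is registered with the factory. It generates constrained Pack items on demand.

class pack_seq extends uvm_sequence #(Pack);
  `uvm_object_utils(pack_seq)

  function new(string name = "pack_seq");
    super.new(name);
  endfunction

  task body();
    repeat (10) begin
      req = Pack::type_id::create("req");
      start_item(req);                    // wait for the driver to be ready
      if (!req.randomize() with { length inside {[1:16]}; })
        `uvm_error("pack_seq", "randomization failed")
      finish_item(req);                   // hand the item to the driver
    end
  endtask
endclass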
4. Monitor
A monitor is a passive entity that samples DUT signals without driving them. Monitors gather coverage information as well as perform checking. A monitor collects transactions (data items): it extracts signal information from the bus and translates it into a transaction that is made available to other components and to the test writer. It also extracts events: the monitor checks for the availability of transaction information, structures the data, and emits an event to notify other components of its availability. It captures status information so that it is available to other components and to the test writer, and it performs checking and coverage.
class Monitor;
  // virtual interface for the monitor
  // transaction object from the generator
  // object to be sent to the scoreboard
  // mailbox for the scoreboard
  event drive_done;

  function new();
    // take the virtual interface, the mailbox and the event as arguments
    // connect the arguments to the corresponding class variables using the "this" operator
    // create the object to be sent to the scoreboard
  endfunction
endclass
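A sampling loop for this monitor can be sketched in standard UVM using an analysis port, again assuming the paper's vm_ layer mirrors uvm_ and that Pack is factory-registered; the dut_if signal names are hypothetical.

class pack_monitor extends uvm_monitor;
  `uvm_component_utils(pack_monitor)
  virtual dut_if vif;                    // hypothetical DUT interface
  uvm_analysis_port #(Pack) ap;          // broadcast to scoreboard/coverage

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  task run_phase(uvm_phase phase);
    Pack tr;
    forever begin
      @(posedge vif.clk);
      tr = Pack::type_id::create("tr");
      tr.da     = vif.da;                // sample, never drive
      tr.length = vif.len;
      ap.write(tr);                      // publish to subscribers
    end
  endtask
endclass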
5. Agent
Sequencers, drivers, and monitors can be used independently. To reduce the amount of work and knowledge required of the test writer, this methodology recommends creating a more abstract component known as an agent. Agents can verify DUT devices. Some agents initiate transactions to the DUT (master or transmit agents), while others respond to transaction requests (slave or receive agents). Agents should be configurable as either active or passive: active agents drive transactions according to test directives, while passive agents only monitor DUT activity. The skeleton below builds the agent's sub-components; a sketch of the connect phase follows it.
class r_write_agent extends vm_agent;
  // declare the instances of the driver, monitor and sequencer
  // include an is_active flag to control the agent
  // use the build() phase to construct the agent's sub-components
  virtual function void build_phase(vm_phase phase);
    super.build_phase(phase);
    monitor = r_w_monitor::type_id::create("monitor", this);
    if (is_active == vm_active) begin
      // create the driver and the sequencer here
    end
  endfunction : build_phase
  // use the connect() phase to connect the agent's sub-components
endclass
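The connect phase mentioned in the last comment is not shown in the paper; a standard-UVM sketch of it follows, assuming vm_agent mirrors uvm_agent (with UVM_ACTIVE standing in for the paper's vm_active).

function void connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  if (is_active == UVM_ACTIVE)
    // the driver pulls items from the sequencer through this TLM connection
    driver.seq_item_port.connect(sequencer.seq_item_export);
endfunction : connect_phase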
6. Environment
The environment (env) is the top-level component of the verification component. It can contain one or more agents, as well as a bus monitor. The env contains configuration properties that enable its topology and behaviour to be customized, making it reusable. For example, active agents can be converted to passive agents when the verification env is reused in system-level verification.
// Include all the transactors in the environment class.
class communication_based_SoC_env;
  // declare the virtual interfaces for both the driver and the receiver
  // declare all the mailboxes used in the environment
  // declare the handles of all the transactors
  // event declarations

  function new();
    // take both virtual interfaces as arguments
    // connect them to the class variables using the 'this' operator
  endfunction

  virtual task build();
    // create objects of all the transactors, passing the correct arguments
  endtask

  virtual task reset_dut();
    // initialize all the signal values here
  endtask
endclass
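Assembled in standard UVM style, the environment might look like the sketch below, assuming vm_env mirrors uvm_env and the components are factory-registered; r_write_agent is the agent above, and pack_scoreboard is the scoreboard sketched at the end of this section.

class soc_env extends uvm_env;
  `uvm_component_utils(soc_env)
  r_write_agent   agent;
  pack_scoreboard sb;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    agent = r_write_agent::type_id::create("agent", this);
    sb    = pack_scoreboard::type_id::create("sb", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    agent.monitor.ap.connect(sb.imp);   // the monitor feeds the scoreboard
  endfunction
endclass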
Several phases of this advanced verification methodology are as follows:
• Build phase is available in vm_component. It is used to construct all the sub-components, right from the test case:
function void build_phase(vm_phase phase);
  super.build_phase(phase);
  driver = r_write_driver::type_id::create("driver", this);
endfunction : build_phase
• Connect phase is used for connecting the ports/exports of the components.
• End_of_elaboration: this phase is used for configuring the components if required.
• Start_of_simulation: this phase is used for configuring the components if required.
• Run: the main body of the test is executed in this phase.
fork
  run_phase();
  begin
    reset_phase();
    configure_phase();
    main_phase();
    shutdown_phase();
  end
join
• Extract: all the required information is gathered in this phase.
• Check: checks the results of the extracted information, such as unresponded requests in the scoreboard, statistics registers that were read, etc.
• Report: used for reporting the pass/fail status. A scoreboard sketch illustrating the check step follows Fig. 2 below.
Fig. 2. Phases in advanced verification methodology
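The self-checking comparison and the check phase described above can be sketched as a scoreboard in standard UVM, again assuming the paper's vm_ layer mirrors uvm_; the compared fields (da, length) come from the Pack transaction above.

class pack_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(pack_scoreboard)
  uvm_analysis_imp #(Pack, pack_scoreboard) imp;
  Pack expected_q[$];                 // queue of expected transactions
  int unsigned mismatches;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    imp = new("imp", this);
  endfunction

  // called by the monitor's analysis port for every observed transaction
  function void write(Pack actual);
    Pack exp;
    if (expected_q.size() == 0) begin
      `uvm_error("SB", "actual transaction arrived with no expected entry")
      return;
    end
    exp = expected_q.pop_front();
    if (actual.da != exp.da || actual.length != exp.length)
      mismatches++;
  endfunction

  // check phase: flag unresponded requests and mismatches, as described above
  function void check_phase(uvm_phase phase);
    if (expected_q.size() != 0 || mismatches != 0)
      `uvm_error("SB", "scoreboard comparison failed")
  endfunction
endclass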
4. RESULTS AND VERIFICATION
Functional verification of the communication-based SoC has been carried out using the advanced verification methodology. The verification methodology plays an important role in the functional verification of the RTL design of the communication-based SoC and yields complete code coverage. Following the test plan, the test cases are generated and then verified by developing the verification IP. In the sequencer, all the test cases are written as sequences using this methodology. The sequencer drives these sequences to the driver and to the scoreboard. In the scoreboard, the comparison takes place between the actual output and the expected one. If the obtained output and the expected result match, we conclude that the verification has completed successfully.
Using verification simulation software, the verification of the communication-based SoC has been carried out, and the log files for the test cases are generated along with a coverage report. The whole design is written in an HDL, and the verification is carried out using the advanced verification methodology. The communication-based SoC has been set as the DUT for functional verification, and 95% code coverage has been obtained using the verification simulation software.
Fig. 3. Comparison graph for different simulation times
Fig. 3 shows the comparison between different verification methodologies, i.e. SystemVerilog, the Open Verification Methodology (OVM), and the advanced verification methodology. It is clear from the figure that the advanced verification methodology takes the minimum simulation time compared with SystemVerilog and OVM. The advanced verification methodology is more time-efficient for reaching the coverage goal than the other methods.
Fig. 4. Simulation Result
Fig. 4 shows the simulation waveform of the communication-based SoC verification carried out using the advanced verification methodology. From the waveform, we can confirm the proper transmission and reception of packet-based data through the design under test (DUT).
TABLE I: Coverage Report

Weighted Average: 95.0%

Coverage Type               Bins   Hits   Misses   Coverage (%)
Branch                       200    170       30         85.00%
Total Assertion Attempted     13     13        0        100.00%
Total Assertion Failures      13      0        -          0.00%
Total Assertion Successes     13     13        0        100.00%
5. CONCLUSION
The specifications of the communication-based SoC were verified successfully using the advanced verification methodology on a verification simulator, and 95% code coverage was extracted. To improve the code coverage, the code was modified where needed. The scoreboard also successfully compared the result of every transaction generated. This methodology provides complete coverage of the RTL design so as to obtain a fault-free design.