Demand-Driven Structural Testing with Dynamic Instrumentation (ICSE 2005)
1. Jonathan Misurda (jmisurda@cs.pitt.edu)
James A. Clause, Juliya L. Reed, Bruce R. Childers
Department of Computer Science
University of Pittsburgh
Pittsburgh, Pennsylvania 15260 USA
Mary Lou Soffa
Department of Computer Science
University of Virginia
Charlottesville, Virginia 22904 USA
Demand-Driven Structural Testing
with Dynamic Instrumentation
2. Software Testing
- What is software testing?
  - Gathering information about the behavior of the software being developed or modified.
- Why test software?
  - Gain confidence and insight into software correctness
  - Create quality, robust programs
- Why is testing hard?
  - Testing is expensive
    - 50-60% of the total cost of software development
  - It adds complexity to the development process
    - One testing technique is not enough
3. Structural Software Testing
- Structural testing
  - The process of discovering run-time properties of a program, specifically those pertaining to control flow
  - Demonstrates the adequacy of the test cases
  - Different granularities of structures:
    - Node coverage records statements (basic blocks)
    - Branch coverage records control-flow edges
    - Def-use coverage records pairs of variable definitions and uses
  - Repeated over multiple test cases until the coverage criteria are met
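To make the three granularities concrete, here is a small illustrative Java method (my own example, not from the slides) annotated with the items each criterion would record; the block labels B1-B4 are assumed numbering:

```java
public class CoverageGranularities {
    // Nodes (basic blocks): B1 = entry + condition, B2 = then, B3 = else, B4 = return.
    // Branches (edges): B1->B2, B1->B3, B2->B4, B3->B4.
    // Def-use pairs for r: (def in B2, use in B4) and (def in B3, use in B4).
    static int classify(int x) {
        int r;                 // B1
        if (x % 2 == 0) {
            r = x / 2;         // B2: def of r
        } else {
            r = 3 * x + 1;     // B3: def of r
        }
        return r;              // B4: use of r
    }

    public static void main(String[] args) {
        // One even and one odd input together cover all four nodes,
        // all four branches, and both def-use pairs of r.
        System.out.println(classify(4));  // takes B1->B2
        System.out.println(classify(3));  // takes B1->B3
    }
}
```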
4. Current Testing Tools
- E.g., JCover, Clover, IBM Rational PurifyPlus
- Static instrumentation with test probes
  - Probes are injected instrumentation that gathers coverage information
- Limitations:
  - Expensive in both time and space
  - Support only one type of test
  - Limited code regions and scopes
  - Users cannot define their own ways to test
5. Our Approach
- Demand-driven structural testing
  - Specification driven: user-written tests
  - Test plans: a recipe for how & where to test
  - Path specific: instrument only what is needed
  - Dynamic: insert & remove instrumentation
- Jazz: a scalable & flexible framework for testing
  - Automatically applies multiple test strategies
  - Handles large programs
  - Allows user customization of tests
6. Jazz Overview
Application
IDE & Test GUI
Test Specification
Test Planner
Test Plan
Test Analyzer and Result Reporter
Test Dynamic Instrumenter
Test Results
Select the regions and test types
Allows user customization
Automatically generate where and
how to instrument
Run the program with demand-driven
instrumentation
Collect the results
7. Test Specifier
1a
1b
3
2
Specify the regions to test
1a. Highlight line regions
1b. Select classes and methods
List of desired tests
Click to create a test
8. Test Planner
- Test plan: where and how to test the program
- Contains test storage, a probe location table, and the instrumentation payload
- Payloads can be combined to create custom strategies
[Diagram] The Test Planner consumes the Test Specification and the Application and emits a Test Plan: the test instrumentation payload plus global storage & the probe location table.
9. Test Planner
- Challenges:
  - When to insert test probes
  - Where to test/instrument
  - What to do at each probe
- A planner determines three things:
  - Seed probe set: probes that are initially inserted in a test region
  - Coverage probe set: probes that can be inserted and removed on demand along a path
  - Probe lifetime: whether a probe must be re-inserted after being hit by its "control flow successor" basic blocks
10. Branch Coverage Example Plan
public class simpleDemo {
public static void main(String[] args) {
int evenSum = 0;
int oddSum = 0;
for(int i = 0; i < 100; i++) {
if(i%2==0) {
evenSum += i;
}
else {
oddSum += i;
}
}
System.out.println("The sum of the even numbers from 0 to 100 is: " + evenSum);
System.out.println("The sum of the odd numbers from 0 to 100 is: " + oddSum);
}
}
DEFINITIONS {
NAME: d_method, REGION_D,
LOCATION: FILE simpleDemo.java {
CLASS simpleDemo, METHOD main
}
}
BODY {
DO BRANCH_TEST ON REGION d_method UNTIL: 85%
}
simpleDemo.java
simpleDemo.testspec
11. Branch Coverage Planner
- Record which edges are executed
  - Determine (source, sink) edge pairs hit at run-time
  - The source is a branch
  - The sink can be the taken or not-taken target of the branch

Probe Set      | Members
Seed Probe     | First block in the testing region
Coverage Probe | Sinks of the currently instrumented block
Probe Lifetime | Until all incoming edges have been covered
12. Branch Coverage Example
[Animated diagram] A four-block CFG (edges 1→2, 1→3, 2→4, 3→4, 4→1) annotated with probes: block 1 carries the seed probe, the rest are coverage probes. The probe location table lists each block's successors ("next"), and the storage keeps a hit flag per outgoing edge, initially N and flipped to Y as edges execute:

Block | Next | Hit
1     | 2,3  | N,N → Y,Y
2     | 4    | N → Y
3     | 4    | N → Y
4     | 1    | N → Y

Test payload at each probe: mark the edge hit, insert probes at the next points, then remove the current instrumentation.
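The probe mechanics of this example can be sketched as a toy simulation (my own sketch, not Jazz's implementation; Jazz patches binary code, whereas here a set stands in for patched probe locations): hitting an instrumented block marks the incoming edge, plants coverage probes at the block's successors, and retires a probe once all of its incoming edges are covered.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class DemandDrivenBranchCoverage {
    // Probe location table for the slide's CFG: block -> successors ("next").
    static final Map<Integer, List<Integer>> NEXT = Map.of(
            1, List.of(2, 3), 2, List.of(4), 3, List.of(4), 4, List.of(1));

    static final Set<Integer> probes = new HashSet<>(Set.of(1)); // seed probe at block 1
    static final Set<String> hitEdges = new HashSet<>();
    static int prevBlock = -1;

    // Called when execution reaches `block`; does work only if a probe is planted there.
    static void enter(int block) {
        if (probes.contains(block)) {
            if (prevBlock != -1) hitEdges.add(prevBlock + "->" + block); // mark edge hit
            for (int succ : NEXT.get(block)) probes.add(succ);           // insert at next points
            // Probe lifetime: remove once every incoming edge of `block` is covered.
            boolean allIncomingCovered = NEXT.entrySet().stream()
                    .filter(e -> e.getValue().contains(block))
                    .allMatch(e -> hitEdges.contains(e.getKey() + "->" + block));
            if (allIncomingCovered) probes.remove(block);
        }
        prevBlock = block;
    }

    public static void main(String[] args) {
        // Two loop iterations, taking each side of the branch once.
        for (int b : new int[]{1, 2, 4, 1, 3, 4, 1}) enter(b);
        System.out.println(hitEdges.size() + " edges covered: " + new TreeSet<>(hitEdges));
    }
}
```

Running the block-trace 1,2,4,1,3,4,1 covers all five edges, after which every probe's lifetime has expired and later iterations execute uninstrumented.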
13. Demo
- Branch coverage of a music player
  - Test region is the whole program (JOrbis)
  - Test input is a song
- Compared:
  - Traditional approach with static instrumentation
  - Demand-driven approach with dynamic instrumentation
[Side-by-side screenshots: with dynamic instrumentation vs. with static instrumentation]
16. Other Planners
- Node Coverage Planner
  - Records which statements are executed

Probe Set      | Members
Seed Probe     | First statement in the testing region
Coverage Probe | Next reachable statement
Probe Lifetime | Removed as soon as the statement is executed

- Def-Use Coverage Planner
  - Records which variable definition reaches an executed use

Probe Set      | Members
Seed Probe     | All variable definitions
Coverage Probe | Uses associated with all definitions
Probe Lifetime | Defs removed after all reachable uses are covered; uses removed as executed
17. Dynamic Instrumentation
- The test plan targets an instrumentation API
- FIST instrumentation engine [WOSS'04]
  - Retargetable & reconfigurable
  - Dynamic insertion & removal of instrumentation
  - Binary-level instrumentation (post-JIT)
- Uses fast breakpoints [Kessler]: replace an existing instruction with a jump to instrumentation
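A conceptual sketch of the fast-breakpoint idea (in spirit only: Jazz patches machine instructions in JIT-compiled code, whereas here each "instruction" is a Runnable in an array): planting a probe saves the original op and redirects it through the instrumentation payload; removing the probe restores the original, so no residual overhead remains.

```java
import java.util.HashMap;
import java.util.Map;

public class FastBreakpointSketch {
    static final StringBuilder log = new StringBuilder();
    // The "code": one op per index.
    static final Runnable[] code = {
            () -> log.append("A"),
            () -> log.append("B"),
            () -> log.append("C"),
    };
    static final Map<Integer, Runnable> saved = new HashMap<>();

    // Plant a probe: save the original instruction and overwrite it with a
    // "jump" to instrumentation that runs the payload, then the displaced op.
    static void insertProbe(int pc, Runnable payload) {
        Runnable original = code[pc];
        saved.put(pc, original);
        code[pc] = () -> { payload.run(); original.run(); };
    }

    // Remove a probe by restoring the saved instruction.
    static void removeProbe(int pc) {
        code[pc] = saved.remove(pc);
    }

    public static void main(String[] args) {
        insertProbe(1, () -> log.append("[probe]"));
        for (Runnable op : code) op.run();   // instrumented run
        removeProbe(1);
        for (Runnable op : code) op.run();   // uninstrumented run, zero overhead
        System.out.println(log);
    }
}
```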
18. Experimental Methodology
- Test Specification
  - Implemented as an Eclipse 3.0 plugin
- Test Planner & Dynamic Instrumenter
  - Implemented using the Jikes RVM version 2.1.1
  - On the x86 platform
- SPECjvm98 benchmarks, test input
  - Unloaded 2.4 GHz Pentium IV, 1 GB of memory
  - Red Hat Linux 7.3
- Traditional vs. Jazz (in the same implementation)
  - Compared coverage, run-time, and memory
19. Coverage Results
- Coverage: the same values are reported by both tools

Benchmark | Branch | Node  | Def-Use
compress  | 58%    | 90.6% | 89.8%
jess      | 46.8%  | 80.3% | 71.8%
db        | 44.4%  | 76.9% | 75.0%
javac     | 38.9%  | 75.0% | 66.9%
mpegaudio | 60.9%  | 88.7% | 90.5%
mtrt      | 50.6%  | 90.3% | 87.3%
jack      | 55.6%  | 82.2% | 73.4%
22. Summary
- New demand-driven approach
  - A tool (Jazz) for structural testing
  - Dynamic instrumentation guided by a program's execution
  - Minimally intrusive
  - User-configurable and flexible testing
- Very low overhead
  - E.g., the branch coverage tool is 3-4x faster than traditional approaches
25. Static Instrumentation
- Shortcomings of static instrumentation:
  - Not scalable: instrumentation left in place causes an unnecessary increase in overhead
  - Inflexible: only certain tests, languages, and platforms
  - Cumbersome: requires recompilation for dedicated testing
  - Intrusive: the addition of testing code may alter or mask defects