The document discusses testing in the oil and gas software industry. It focuses on testing PetroVR, a business simulation suite for oil and gas. Some key points:
- Software in this industry handles large amounts of complex data and functional knowledge. Testing requires understanding specialized technical vocabulary and domain knowledge.
- PetroVR is a comprehensive simulation suite that integrates knowledge from engineering, economics, planning, and finance. It has over 5,000 classes and 80,000 methods.
- Testing includes reviewing documentation, defining test cases, executing tests, and domain experts performing ad-hoc testing to identify unintended behaviors. Testers require experience to understand statistical results and domain terminology.
A Critical Technology Element (CTE) is a new or novel technology that a platform or system depends on to achieve successful development or production, or to meet a system operational threshold requirement. Technology Readiness Levels (TRLs) are a method of estimating the technology maturity of a program's CTEs during the Acquisition Process. They are determined during a Technology Readiness Assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities.
Final thesis: Technological maturity of future energy systems (Nina Kallio)
For my Master's thesis I built a methodology to assess system maturities in the energy sector. The aim was to build a framework, a process and tools for assessing emerging systems and their current technological maturity in a uniform and quantitative way.
This CSC Trusted Cloud Services white paper explores the opportunities now presented to independent software vendors and developers thanks to cloud computing solutions. CSC is enabling ISVs to deliver Software-as-a-Service, creating new value and transforming business models.
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods (University of Antwerp)
Fuzzing (or fuzz testing) is a technique to verify the robustness of a program under test. Valid input is replaced by random values with the goal of forcing the program under test into unresponsive states. In this position paper, we propose a white-box fuzzing approach that transforms (mutates) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can later reproduce. We provide anecdotal evidence that our approach to fuzzing reveals crashing issues in the Pharo environment.
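A minimal sketch of this idea in Python (not the authors' Pharo implementation; `parse_ratio`, the mutation strategy and the fixed seed are illustrative assumptions): the valid input of an existing test is mutated with random characters, and every crashing input is recorded so the crash can be reproduced.

```python
import random

def parse_ratio(text):
    """Program under test: parses a string like '3:1' into a float ratio."""
    a, b = text.split(":")
    return int(a) / int(b)

def test_parse_ratio():
    # Existing developer test with valid input.
    assert parse_ratio("3:1") == 3.0

def fuzz_by_mutation(valid_input, trials=200, seed=42):
    """Mutate the valid input of an existing test with random printable
    characters, recording every crashing input for later reproduction."""
    rng = random.Random(seed)  # fixed seed => the crash set is reproducible
    crashes = []
    for _ in range(trials):
        chars = list(valid_input)
        pos = rng.randrange(len(chars))
        chars[pos] = chr(rng.randrange(32, 127))  # random printable character
        candidate = "".join(chars)
        try:
            parse_ratio(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

test_parse_ratio()               # the original test still passes
crashes = fuzz_by_mutation("3:1")
print(f"{len(crashes)} crash-inducing inputs found, e.g. {crashes[:2]}")
```

Because the random generator is seeded, re-running the fuzzer yields exactly the same crash-inducing inputs, which is the "reproducible crashes" property the paper aims for.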
With the rise of agile development and the adoption of continuous integration, the software industry has seen an increasing interest in test automation. Many organizations invest in test automation but fail to reap the expected benefits, most likely due to a lack of test-automation maturity. In this talk, we present the results of a test automation maturity survey collecting responses of 151 practitioners coming from 101 organizations in 25 countries. We make observations regarding the state of the practice and provide a benchmark for assessing the maturity of an agile team. The benchmark resulted in a self-assessment tool for practitioners, to be released under an open source license. An alpha version is presented herein. The research underpinning the survey has been conducted through the TESTOMAT project, a European project with 34 partners coming from 6 different countries.
(Presentation delivered at the Test Automation Days and the Testnet Autumn Event; October 2020)
Advanced Verification Methodology for Complex System on Chip Verification (VLSICS Design)
Verification remains the most significant challenge in getting advanced SoC devices to market. The most important challenge to be solved in the semiconductor industry is the growing complexity of SoCs. Industry experts consider that verification accounts for almost 70% to 75% of the overall design effort. A verification language alone cannot increase verification productivity; it must be accompanied by a methodology that facilitates reuse to the maximum extent under different design IP configurations. This advanced reusable test bench development approach will decrease a chip's time to market. It enables code reuse, so that the same code used at sub-block level can also be used at block level and top level, which saves the cost of a chip tape-out. This test bench development technique helps achieve faster time to market and reduces the cost of the chip to a large extent.
Finding Bugs, Fixing Bugs, Preventing Bugs — Exploiting Automated Tests to In... (University of Antwerp)
With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect?
(Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020)
Better testing for C# software through source code analysis (kalistick)
You are probably already using source code analysis for your C# software to ensure code quality. Want to go further? You can use source code analysis to test the software more efficiently through risk-based testing and improved regression testing, and thus deliver the software faster while reducing testing costs.
Swiss Testing Day 2013 - How to avoid the Testing Swiss Cheese Syndrome (marc.rambert)
As a lot of teams all over the world suffer from the "Testing Swiss Cheese Syndrome", I believe it is time to share the information we have collected. By the end of this presentation, you should be able to make a first diagnosis of your testing activities and reflect on suitable medication.
To introduce you to this syndrome, let's think about all the testing activities going on during an application's development. It is usually a subset of unit testing, integration testing, functional testing, automated testing, manual testing, exploratory testing, etc.
It is very unlikely that a single testing activity covers the whole application. That is where the similarity with a slice of Swiss cheese comes in: you can imagine holes in your test coverage, areas that haven't been covered by any single testing activity.
The Swiss Cheese Syndrome happens when you don't have an aggregated view of all these testing activities. In such a case, testing holes join forces to create tunnels where bugs and regressions can stay hidden until production.
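The aggregated view the talk argues for can be sketched as a simple set union; the activity names, requirement IDs and coverage sets below are hypothetical:

```python
# Per-activity coverage: which requirement IDs each testing activity exercises.
coverage_by_activity = {
    "unit":        {"R1", "R2", "R3"},
    "integration": {"R2", "R4"},
    "functional":  {"R1", "R4", "R5"},
    "exploratory": {"R5"},
}
all_requirements = {"R1", "R2", "R3", "R4", "R5", "R6", "R7"}

# Aggregated view: the union of every activity's coverage.
aggregated = set().union(*coverage_by_activity.values())

# The "tunnels": requirements no testing activity covers at all.
holes = all_requirements - aggregated

print(sorted(holes))  # ['R6', 'R7']
```

Each slice (activity) has holes, but only the aggregated union reveals where the holes line up into a tunnel.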
This is a four-part lecture series. The course is designed for reliability engineers working in the electronics, opto-electronics and photonics industries. It explains the roles of Highly Accelerated Life Testing (HALT) in design and manufacturing efforts, with the emphasis on design (HALT in manufacturing is the well-known approach of the late Greg Hobbs), and teaches what could and should be done to design, when high probability is a must, a product with a predicted, specified ("prescribed") and, if necessary, even controlled, low probability of field failure.
Part 3:
• Design for Reliability (DfR)
• Probabilistic Design for Reliability (PDfR): role, attributes, challenges, pitfalls
• Safety margin and safety factor
• Practical examples: assemblies subjected to thermal and/or dynamic loading
Part 4:
• More general PDfR approach
• New Qualification Approaches Needed?
• One effective way to improve the existing QT practices and specifications
This is a special report by the IEEE Power System Relaying Committee's Working Group I12 outlining industry practices of quality assurance for protection and control design drawing packages. Throughout the electric utility industry, the drive to maximize quality assurance practices has gained increased prominence. These practices mitigate common errors frequently encountered in engineering design packages specific to Protection and Control (P&C) design. This report illustrates industry practices to be applied in a Quality Assurance Program for protection and control design drawing packages, from conception to final "as-built". It is the reader's responsibility to incorporate these practices into their organization's Quality Assurance Program.
Testing Masterclass for Electromind (Steve Allott), March 2003, London. The slide set was intended for discussion rather than a linear presentation, and was partly extended (work in progress) after the session.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
Why On-Demand Provisioning Enables Tighter Alignment of Test and Production E... (Cognizant)
To improve their test environments and application quality, organizations are turning to the cloud for on-demand provisioning, as well as build and deployment automation.
Formal Verification of Developer Tests: a Research Agenda Inspired by Mutatio... (University of Antwerp)
With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes up a significant portion of the complete code base — test-to-code ratios ranging from 3:1 to 2:1 are quite common.
We argue that "testware" provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute.
We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware.
PythonQuants conference - QuantUniversity presentation - Stress Testing in th... (QuantUniversity)
In this talk, we discuss the key aspects of model verification and validation and introduce a novel approach to stress and scenario testing that leverages parallel and distributed computing technologies and the cloud.
We have developed a platform that leverages cloud-based technologies to run stress tests on a massive scale without having to invest in fixed in-house infrastructure. Through a case study, we illustrate best practices for stress and scenario testing for model verification and validation. These best practices are meant to provide practical tips for companies embarking on a formal model risk management program or enhancing their stress-testing methodologies.
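A minimal sketch, assuming a toy duration-based revaluation model (the scenario function, shock values and portfolio numbers are invented for illustration), of how independent stress scenarios can be fanned out over parallel workers:

```python
from concurrent.futures import ProcessPoolExecutor

def run_scenario(rate_shock):
    """Hypothetical stress scenario: revalue a toy portfolio under a parallel
    interest-rate shock using a simple duration approximation."""
    base_value = 100.0   # current portfolio value
    duration = 5.0       # toy interest-rate sensitivity
    return base_value * (1.0 - duration * rate_shock)

# Absolute rate shocks defining the scenario set.
shocks = [0.00, 0.01, 0.02, 0.05]

if __name__ == "__main__":
    # Each scenario is independent, so they map cleanly onto worker processes;
    # on a cloud platform the pool would be replaced by remote workers.
    with ProcessPoolExecutor() as pool:
        values = list(pool.map(run_scenario, shocks))
    print(values, "worst case:", min(values))
```

Because scenarios share no state, scaling from a local process pool to a distributed cluster only changes the executor, not the scenario code.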
Tiara Ramadhani - S1 Information Systems Program - Faculty of Science and Techn... (Tiara Ramadhani)
This assignment was prepared to fulfill a requirement of a course in the S1 (undergraduate) Information Systems program.
By:
Name: Tiara Ramadhani
NIM: 11453201723
SIF VII E
UIN SUSKA RIAU
Software Development Models by Graham et al. (Emi Rahmi)
Software Development Models - Graham et al Foundation of Software Testing
http://sif.uin-suska.ac.id/
http://fst.uin-suska.ac.id/
http://www.uin-suska.ac.id/
Testers perform a review of specification documents in order to become familiar with the functionality to be tested. This activity ensures the testability, completeness and consistency of the specification. During this activity testers, developers and domain experts exchange opinions about the product and the specification documents. In other domains this kind of activity is not necessary because the functionality being tested is easily learned, even by less experienced testers.

Test case definition comprises two levels: GUI testing and simulation testing. GUI testing in this context is no different from that performed in other domains. Simulation testing is a more complex task and is critical for the quality of the product.

Less experienced testers usually work on GUI testing as a way to learn about the product. Simulation testing requires more experienced testers. In order to find unexpected behaviors during this particular kind of testing, a tester has to be able to interpret statistical results and to understand specific terminology.

A divide-and-conquer strategy is used in simulation testing. Test cases are grouped by objects and their features. This strategy allows two things: (1) to narrow down the model complexity to smaller units, and (2) to have a clear idea of how much functionality is covered. After defining the cases for each root object, it is necessary to consider their combinations and discard those that are redundant (or not relevant). Here the tester's domain knowledge is critical because it is impossible to test every combination of features and objects.

The execution activity involves data preparation, model execution and result analysis. Data preparation and result analysis are the most important tasks.

Data preparation consists of defining models in such a way that the defined test cases are covered. These models should be complex enough to test the functionality (test cases) and simple enough to easily detect errors. Unlike tests in other domains, testers spend most of their time designing the models, as these constitute a library of knowledge that will be useful in performing tests not only in this release, but also in upcoming ones.

Analyzing the results to determine whether they are correct is hard. It involves the manipulation of a large number of formulas and algorithms. A key issue here is the way testers document their findings. It is necessary to document not only the data used but also the calculation method involved. This is the only way the developer can follow the computations in order to reproduce the behavior.

Additionally, a regression test is executed in order to ensure backward compatibility. This is accomplished by running models created with previous versions of the product. The results obtained with the current release must be equal to those of previous ones.

Help & release notes review is straightforward. As in any complex and highly specific software product, technical documentation is essential for users. Writing style, completeness, and formula correspondence between software and documentation are some of the validation criteria used in this type of review.

The last activity of the quality control process is the ad-hoc testing carried out by domain experts. This step is usually performed by business analysts in real-life conditions. Experts test the application with their specific problems in mind.

Experience has shown that placing this type of testing at the end of the quality control process allows detection of unintended behaviors that might otherwise go unnoticed. Note however that real-life cases tend to be overly large and complex. In order to incorporate this contribution into a systematic coverage, professional testers can reduce these cases to isolate emergent issues and transfer them to a controlled environment.

Some conclusions

Over more than three years some interesting conclusions have been drawn:

The first concerns development. Whether completely error-free software is attainable or not, it is a fact that it cannot be achieved without traditional testing. Even if we have invested time and energy in carefully defining the development processes and creating automatic tests that reach a high level of coverage, the involvement of a testing team remains essential.

Secondly, project managers should believe in devoting time to training testing teams in domain-related subjects. They should focus not only on selecting a proficient development team but also on securing the participation of a testing team with a sound knowledge of the domain.

Third, investing in knowledge management is fundamental. The corpus of knowledge in this type of industry is not static: it inevitably undergoes constant change, and so do work teams. Good knowledge management ensures that this ongoing renewal of both information and human resources does not result in a loss of knowledge for the project.

Fourth, automatic testing can eliminate recidivism. Random samples of test cases showed that 0% of already-addressed failures reappear. (2% of functional tests that originally passed fail one year later.)

Finally, in a domain so characterized by the need for analytical skills, the role that testers must assume in the process cannot be limited to early detection of failures. The active involvement of testers in exchanges with developers becomes indispensable, as their feedback may even shed light on design issues.

We are indebted to Diego Seguí for his assistance in putting this paper together.

The Magazine for Professional Testers (www.testingexperience.com)
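The regression step described above (re-running models built with previous versions and checking that the current release reproduces their results) can be sketched as follows; the metric names and values are hypothetical:

```python
import math

def compare_runs(baseline, current, rel_tol=1e-9):
    """Compare the current release's simulation outputs against results
    stored from a previous version; any diverging or missing metric is a
    backward-compatibility regression."""
    regressions = []
    for metric, expected in baseline.items():
        actual = current.get(metric)
        if actual is None or not math.isclose(expected, actual, rel_tol=rel_tol):
            regressions.append((metric, expected, actual))
    return regressions

# Hypothetical outputs of one simulation model: metric name -> value.
baseline = {"npv": 12.5, "peak_rate": 830.0, "capex": 410.0}
current  = {"npv": 12.5, "peak_rate": 829.4, "capex": 410.0}

print(compare_runs(baseline, current))  # [('peak_rate', 830.0, 829.4)]
```

A relative tolerance is used rather than strict equality so that harmless floating-point noise between releases does not drown out real divergences.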
Biography

Leandro Caniglia has worked at Caesar Systems since 2001 and serves as chief technologist and director of software development. For more than a decade prior to joining Caesar Systems, Caniglia worked as a Smalltalk consultant for several companies in Argentina, Brazil and Chile. He was a professor at the University of Buenos Aires for more than 20 years. Caniglia has also worked as a researcher in the CONICET, the official office for scientific research in Argentina. In 1997, he founded the user group SUGAR. He has a Ph.D. in Mathematics and has published extensively on Computational Algebraic Geometry. In 2007 Caniglia became President of FAST, the organizing board for the Annual Argentine Smalltalk Conference.

Damián Famiglietti has worked at Pragma Consultores since 2000. He is a Quality Assurance expert with 10 years of solid professional experience in Software Quality Assurance. Damián has led testing teams in several projects in different industries. He holds a Systems Analyst degree from the University of Buenos Aires (Argentina).

Ernesto Kiszkurno has worked at Pragma Consultores since 1996 and became a partner in 2007. He serves as director of Research & Development and is a Quality Assurance expert. Kiszkurno has 15 years of solid professional experience in the areas of software engineering, process improvement, project management and quality assurance. His main activity in recent years has been the training, leadership and management of testing teams at multiple companies and in various industries. He holds a Bachelor of Computer Science degree from the University of Buenos Aires (Argentina) and a Master in Business Administration degree from the Torcuato Di Tella University. He has also been vice president for Hispanic America of the Software Testing ISTQB Qualifications Board since 2008. For more information, please visit www.ernestokiszkurno.com.ar.