This document provides a selective survey of software reliability models. It discusses both static models used in early development stages and dynamic models used later. For static models, it describes a phase-based model and predictive development life cycle model. For dynamic models, it outlines reliability growth models, including binomial, Poisson, and other classes. It also presents a case study of incorporating code changes as a covariate into reliability modeling during testing of a large telecommunications system. The document concludes by advocating for wider use of statistical software reliability models to improve development and testing processes.
International Journal of Computational Engineering Research (ijceronline.com), Vol. 2, Issue 4
A Selective Survey and Direction on Software Reliability Models
Vipin Kumar
Research Scholar, S.M. Degree College, Chandausi
Abstract:
Software development, design and testing have become very intricate with the advent of modern highly distributed
systems, networks, middleware and interdependent applications. The demand for complex software systems has increased
more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has
become a major concern for our modern society. Software reliability modeling and measurement have drawn considerable
attention in various industries due to concerns about the quality of software. In the early years of the 21st century, many
reported system outages or machine crashes were traced back to computer software failures.
In this paper, I discuss the many challenges in achieving widespread use of software reliability models, and I focus
on software reliability models and measurements. A software reliability model specifies the general form of the
dependence of the failure process on the principal factors that affect it: fault introduction, fault removal and the
operational environment. During the test phase, the failure rate of a software system generally decreases due to the
discovery and correction of software faults. With careful record-keeping procedures in place, it is possible to use
statistical methods to analyze the historical record. The purpose of these analyses is twofold: (1) to predict the additional
time needed to achieve a specified reliability objective; and (2) to predict the expected reliability when testing is finished.
Key words: Dynamic model, Growth model, Reliability software, Static model, Telecommunication.
Introduction:
In the early years of this century, many reported system outages or machine crashes were traced back to computer software
failures. Consequently, recent literature is replete with horror stories attributable to software problems. Software failure has
impaired several high-visibility programs in the space, telecommunications, defense and health industries. The Mars Climate
Orbiter crashed in 1999; the Mars Climate Orbiter Mission Failure Investigation Board [1] concluded that "the root cause
of the loss of the spacecraft was the failed translation of English units into metric units in a segment of ground-based,
navigation-related mission software." Current versions of the Osprey aircraft, developed at a cost of billions of dollars, are
not deployed because of software-induced field failures. In the health industry [2], the Therac-25 radiation therapy
machine was hit by software errors in its sophisticated control systems and claimed several patients' lives in 1985 and 1986.
Even in the telecommunications industry, known for its five-nines reliability, the nationwide long-distance network of a
major carrier suffered an embarrassing outage in January 1990 due to a software problem, and in 1991 a series of
local network outages occurred in a number of US cities due to software problems in central office switches [3].
Software reliability is defined as the probability of failure-free software operation for a specified
period of time in a specified environment [4]. The software reliability field discusses ways of quantifying it and using it
for improvement and control of the software development process. Software reliability is operationally measured by the
number of field failures, or failures seen in development, along with a variety of ancillary information. The ancillary
information includes the time at which a failure was found, the part of the software in which it was found, the state of the
software at that time, and the nature of the failure. ISO 9000-3 [5] is the weakest amongst the recognized standards, in that it
specifies measurement of field failures as the only required quality metric.
In this paper, I take a narrower view and look only at the models used in software reliability, examining their
efficacy and adequacy without going into the details of the interplay between testing and software reliability models.
Software reliability measurement includes two types of model, static and dynamic reliability estimation, used typically in
the earlier and later stages of development respectively. These are discussed in the following two sections. One of the
main weaknesses of many of the models is that they do not take into account ancillary information, such as churn in the
system during testing; a model that does is described under reliability growth modeling. A key use of the reliability
models is in deciding when to stop testing; an economic formulation of that decision is discussed later.
Static Models:
One purpose of reliability models is to perform reliability prediction in an early stage of software development. This
activity determines future software reliability based upon available software metrics and measures. Particularly when field
failure data are not available (e.g. when the software is in the design or coding stage), the metrics obtained from the software
development process and the characteristics of the resulting product can be used to estimate the reliability of the software
upon testing or delivery. I discuss two prediction models: the phase-based model of Gaffney and Davis [10] and a
predictive development life cycle model from Telcordia Technologies by Dalal and Ho [11].
(a) Phase based Model:
Gaffney and Davis [10] proposed the phase-based model, which divides the software development cycle into different
phases (e.g. requirements review, design, implementation, unit test, software integration, system test, operation, etc.).
It assumes that code size estimates are available during the early phases and that the fault discovery rates, when
normalized by the lines of code, follow a Rayleigh density function across the phases. The idea is to place the stages of
development along a continuous time axis (e.g. t = 0-1 means requirements analysis, and so on) and overlay the Rayleigh
density function; its scale parameter, known as the fault discovery phase constant, is estimated by equating the area under
the curve over the earlier phases with the observed error rates normalized by the lines of code. This method gives an
estimate of the fault density for any later phase. The model also estimates the number of faults in a given phase by
multiplying the fault density by the number of lines of code.
This method is clearly motivated by the corresponding model used in hardware reliability, and the
predictions are hardwired into the model, driven by a single parameter. In spite of this criticism, the model was one of the
first to leverage information available in earlier development life cycle phases.
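
To make the mechanics concrete, the following Python sketch estimates the lifetime fault density and the fault discovery phase constant from observed early-phase fault densities and then projects fault content for the later phases. It is not taken from [10]; the phase layout, the observed densities and the least-squares estimation step are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def phase_density(t_upper, E, B):
    # Expected faults per KLOC discovered in the phase ending at t_upper:
    # the area under the Rayleigh-type curve between t_upper - 1 and t_upper.
    # E is the lifetime fault density (faults/KLOC); B is the fault
    # discovery phase constant.
    t_lower = t_upper - 1.0
    return E * (np.exp(-B * t_lower**2) - np.exp(-B * t_upper**2))

# Hypothetical fault densities (faults/KLOC) observed in the first three phases
# (requirements review, design, implementation), indexed t = 1, 2, 3.
observed_phases = np.array([1.0, 2.0, 3.0])
observed_density = np.array([9.1, 18.5, 14.2])

# Estimate E and B by matching areas under the curve to the observations.
(E_hat, B_hat), _ = curve_fit(phase_density, observed_phases, observed_density,
                              p0=(40.0, 0.3))

# Project fault density and fault counts for the remaining phases
# (unit test, integration, system test) of a hypothetical 25 KLOC module.
kloc = 25.0
for t in range(4, 7):
    d = phase_density(float(t), E_hat, B_hat)
    print(f"phase {t}: {d:.1f} faults/KLOC, ~{d * kloc:.0f} faults in {kloc:.0f} KLOC")

The two quantities recovered from the early phases are all the model needs to project fault content for every later phase, which is both its appeal and the source of the criticism above.
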
(b) Predictive Development Life Cycle Model:
In this model the development life cycle is divided into the same phases as in the phase-based method. However, it does
not postulate a fixed relationship (i.e. a Rayleigh distribution) between the numbers of faults discovered during different
phases. Instead, it leverages past releases of similar products to determine the relationships. The relationships are not
postulated beforehand, but are determined from data using only a few releases per product. Similarity is measured
using an empirical hierarchical Bayes framework. The number of releases used as data is kept minimal and, typically,
only the most recent one or two releases are used for prediction. The lack of data is made up for by using as many
products as possible that were being developed in the software organization at around the same time. In that sense it is
similar to meta-analysis [12], where a lack of longitudinal data is overcome by using cross-sectional data.
[Figure: 22 products and their releases versus observed (+) and predicted fault density, connected by dashed lines; solid vertical lines are 90% predictive intervals for fault density.]
Conceptually, the two basic assumptions behind this model are: (1) defect rates from different products in the same
product life cycle phase are samples from a statistical universe of products coming from that development organization;
and (2) different releases from a given product are samples from a statistical universe of releases for that product.
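
The borrowing of strength that these assumptions allow can be illustrated with a small sketch: a Normal-Normal shrinkage estimate in which each product's recent-release fault density is pulled toward the organization-wide mean. This is only a stylized stand-in for the empirical hierarchical Bayes framework of [11]; the products, the fault densities and the assumed release-to-release variance are hypothetical.

import numpy as np

# Hypothetical fault densities (faults/KLOC) per product, one value per
# recent release of that product.
releases = {
    "product_A": [4.2, 5.1],
    "product_B": [7.8],
    "product_C": [3.0, 2.6],
    "product_D": [6.1],
}

product_means = np.array([np.mean(v) for v in releases.values()])
n_releases = np.array([len(v) for v in releases.values()])

grand_mean = product_means.mean()        # prior mean across the organization
between_var = product_means.var(ddof=1)  # between-product variance (prior spread)
within_var = 1.0                         # assumed release-to-release variance

for name, obs_mean, n in zip(releases, product_means, n_releases):
    # Shrinkage weight: the more releases a product has, the more its own
    # data are trusted relative to the organization-wide mean.
    weight = between_var / (between_var + within_var / n)
    prediction = weight * obs_mean + (1 - weight) * grand_mean
    print(f"{name}: observed {obs_mean:.1f}, predicted next-release "
          f"fault density {prediction:.1f} faults/KLOC")

A product with only one prior release is pulled further toward the organization-wide mean than a product with two, which is exactly how the cross-sectional data compensate for a short release history.
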
Dynamic Models: Reliability Growth Models
Software reliability estimation determines the current software reliability by applying statistical inference techniques to
failure data obtained during system test or during system operation. Since reliability tends to improve over time during the
software testing and operation periods because of the removal of faults, the models are also called reliability growth models.
They model the underlying failure process of the software, and use the observed failure history as a guideline in order to
estimate the residual number of faults in the software and the test time required to detect them. This can be used to make
release and development decisions. Most current software reliability models fall into this category. Details of these
models can be found in Lyu [9], Musa et al. [8], Singpurwalla and Wilson [13], and Gokhale et al. [14].
Classes of Models:
I now describe a general class of models. In binomial models the total number of faults is some number N; the number
found by time t has a binomial distribution with mean µ(t) = N F(t), where F(t) is the probability of a particular fault
being found by time t. Thus, the number of faults found in any interval of time (including the interval (t, ∞)) is also
binomial. F(t) could be any arbitrary cumulative distribution function. A general class of reliability models is then
obtained by appropriate parameterization of µ(t) and N.
Letting N be Poisson (with some mean ν) gives the related Poisson model; now the number of
faults found in any interval is Poisson, and for disjoint intervals these numbers are independent. Denoting the derivative of
F by F′, the failure rate at time t is F′(t)/[1 − F(t)]. These models are Markovian but not strongly Markovian, except when F is
exponential; minor variations of this case were studied by Jelinski and Moranda [15], Shooman [16], Schneidewind [17],
Musa [18], Moranda [19], and Goel and Okumoto [20]. Schick and Wolverton [21] and Crow [22] made F a Weibull
distribution; Yamada et al. [23] made F a Gamma distribution; and Littlewood's model [24] is equivalent to assuming F
to be Pareto. Musa and Okumoto [25] assumed the hazard rate to be an inverse linear function of time; for this
"logarithmic Poisson" model the total number of failures is infinite. The success of a model is often judged by how well
it fits an estimated reliability curve µ(t) to the observed "number of faults versus time" function.
Let us examine a real example, plotted in the accompanying figure, from testing a large software system at a telecommunications research company. The system had been developed over many years, and new releases were created and tested by the same development and testing groups, respectively. In this figure, the elapsed testing time in staff days t is plotted against the cumulative number of faults found for one of the releases. It is not clear whether there is some “total number” of faults to be found, or whether the number found will continue to increase indefinitely. However, from data such as these, estimating the tail of the distribution with a reasonable degree of precision is not possible. I also fit a special case of the general reliability growth model described above, corresponding to N being Poisson and F being exponential.
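A minimal sketch of such a fit, assuming the Goel-Okumoto mean function µ(t) = υ(1 − e^{−bt}) and using made-up cumulative fault counts in place of the actual data (which are not reproduced here), is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected faults found by time t.
def mu(t, v, b):
    return v * (1.0 - np.exp(-b * t))

# Hypothetical cumulative fault counts at selected staff-day totals.
t_obs = np.array([100, 300, 500, 800, 1200, 1600, 2000], dtype=float)
faults_obs = np.array([120, 310, 450, 580, 690, 740, 770], dtype=float)

(v_hat, b_hat), _ = curve_fit(mu, t_obs, faults_obs, p0=[800.0, 0.001])
print("estimated eventual number of faults v:", v_hat)
print("estimated detection rate b:", b_hat)
print("estimated residual faults:", v_hat - mu(t_obs[-1], v_hat, b_hat))
```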
Reliability Growth Modeling:
We have so far discussed a number of different kinds of reliability model of varying degrees of plausibility, including phase-based models depending upon a Rayleigh curve, growth models like the Goel-Okumoto model, and so on. The growth models take as input either failure-time or failure-count data, and fit a stochastic process model to reflect reliability growth. The differences between the models lie principally in the assumptions made about the underlying stochastic process generating the data.
Most existing models assume that no explanatory variables are available. This assumption is assuredly simplistic when the models are used to describe the testing process of all but small systems with short development and life cycles. For large systems (e.g. greater than 100 KNCSL, i.e. thousands of non-commentary source lines) there are variables other than time that are very relevant. For example, it is typically assumed that the number of faults (found and unfound) in a system under test remains stable during testing. This implies that the code remains frozen during testing. However, this is rarely the case for large systems, since aggressive delivery cycles force the final phases of development to overlap with the initial stages of system test. Thus, the size of the code and, consequently, the number of faults in a large system can vary widely during testing. If these changes in code size are not treated as a covariate, one is, at best, likely to see an increase in variability and a loss in predictive performance; at worst, a poorly fitting model with unstable parameter estimates is likely. I briefly describe a general approach proposed by Dalal and McIntosh [28] for incorporating covariates, along with a case study dealing with reliability modeling during product testing while the code is changing.
Figure: growth of the system during testing, plotted against staff days; the top panel shows NCNCSL and the bottom panel shows cumulative faults together with the model fit.
As an example, consider a new release of a large telecommunications system with approximately 7 million NCSL and 300 KNCNCSL (i.e. thousands of non-commentary new or changed source lines). For a faster delivery cycle, the source code used for system test was updated every night throughout the test period. At the end of each of the 198 calendar days in the test cycle, the number of faults found, the NCNCSL, and the staff time spent on testing were collected. The figure above portrays the growth of the system in terms of NCNCSL and of faults against staff time. The corresponding numerical data are provided in Dalal and McIntosh [28].
Assume that the testing process is observed at times ti, i = 0, 1, . . ., h, and that at any given time the amount of time it takes to find a specific fault is exponential with rate m. At time ti, the total number of faults remaining in the system is Poisson with mean λi+1, and NCNCSL is increased by an amount Ci. This change adds a Poisson number of faults with mean proportional to Ci, say qCi. These assumptions lead to the mass balance equation, namely that the expected number of faults in the system at ti (after possible modification) is the expected number of faults in the system at ti−1, adjusted by the expected number found in the interval (ti−1, ti), plus the faults introduced by the changes made at ti:
λ_{i+1} = λ_i e^{-m(t_i - t_{i-1})} + q C_i
for i = 1, 2, . . ., h. Note that q represents the number of new faults entering the system per additional NCNCSL, and λ1 represents the number of faults in the code at the start of system test. These two parameters make it possible to differentiate between the new code added in the current release and the older code. For this example, the estimated parameters are q = 0.025, m = 0.002, and λ1 = 41. The fitted and the observed data are plotted against staff time in the figure above (bottom panel). The fit is evidently very good. Of course, assessing the model on independent or new data is required for proper validation.
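To show how the mass balance recursion is used in practice, the sketch below implements it with the estimated parameters reported above (q = 0.025, m = 0.002, λ1 = 41); the observation times and code-change amounts are made-up values standing in for the actual 198-day series in [28], and the units of Ci must of course match those assumed for q.

```python
import numpy as np

# Estimated parameters reported in the text.
q = 0.025     # expected new faults per unit of code change
m = 0.002     # per-fault detection rate (per staff day)
lam1 = 41.0   # expected faults in the code at the start of system test

# Hypothetical observation times (staff days) and code-change amounts C_i.
t = np.array([0.0, 100.0, 300.0, 600.0, 1000.0, 1500.0, 2000.0])
C = np.array([0.0, 3000.0, 5000.0, 4000.0, 2000.0, 1000.0, 0.0])

lam = lam1
expected_found = 0.0
for i in range(1, len(t)):
    decay = np.exp(-m * (t[i] - t[i - 1]))
    # Faults found in (t_{i-1}, t_i] have expectation lam * (1 - decay).
    expected_found += lam * (1.0 - decay)
    # Mass balance: remaining faults decay, then new code adds q * C_i faults.
    lam = lam * decay + q * C[i]

print("expected faults found during test:", round(expected_found, 1))
print("expected faults remaining at end of test:", round(lam, 1))
```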
Now, I examine the efficacy of creating a statistical model. The estimate of q in the example is highly significant, both statistically and practically, showing the need for incorporating changes in NCNCSL as a covariate. Its numerical value implies that for every additional 10000 NCNCSL added to the system, 25 faults are added as well. For these data, the predicted number of faults remaining at the end of the test period is Poisson distributed with mean 145. Dividing this quantity by the total NCNCSL gives 4.2 faults per 10000 NCNCSL as an estimated field fault density. These estimates of the incoming and outgoing quality are valuable in judging the efficacy of system testing and in deciding where resources should be allocated to improve quality. Here, for example, system testing was effective, in that it removed about 21 of every 25 faults. However, this raises another issue: 25 faults per 10000 NCNCSL entering system test may be too high, and a plan ought to be considered to improve the incoming quality.
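A quick arithmetic check of the incoming-versus-outgoing quality figures quoted above (treating 25 faults per 10000 NCNCSL as the incoming density and 4.2 per 10000 as the outgoing density) is sketched below.

```python
incoming = 25.0   # estimated faults per 10000 NCNCSL entering system test
outgoing = 4.2    # estimated field fault density per 10000 NCNCSL

removed_per_10000 = incoming - outgoing            # about 20.8
removed_fraction = removed_per_10000 / incoming    # about 0.83
print(f"faults removed per 25 entering: {removed_fraction * 25:.1f}")  # ~21
```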
None of the above conclusions could have been drawn without using a statistical model. These conclusions are valuable for controlling and improving the process.
Conclusion:
In this paper, I have described key software reliability models for the early stages as well as for the test and operational phases, and have given some examples of their uses. I have also proposed some new research directions useful to practitioners, which should lead to wider use of software reliability models.
References:
1. Mars Climate Orbiter Mishap Investigation Board Phase I Report, 1999, NASA.
2. Lee L. The day the phones stopped: how people get hurt when computers go wrong. New York: Donald I. Fine, Inc.;
1992
3. Dalal SR, Horgan JR, Kettenring JR. Reliable software and communication: software quality, reliability, and safety. IEEE J Sel Areas Commun 1993; 12: 33-9.
4. Institute of Electrical and Electronics Engineers. ANSI/IEEE standard glossary of software engineering
terminology, IEEE Std. 729-1991.
5. ISO 9000-3. Quality management and quality assurance standard- part 3: guidelines for the application of ISO 9001
to the development, supply and maintenance of software. Switzerland: ISO; 1991.
6. Paulk M, Curtis W, Chrissis M, Weber C. Capability maturity model for software, version 1.1, CMU/SEI-93-TR-24. Carnegie Mellon University, Software Engineering Institute, 1993.
7. El Emam K, Drouin JN, Melo W. SPICE: the theory and practice of software process improvement and capability determination. IEEE Computer Society Press; 1997.
8. Musa JD, Iannino A, Okumoto K. Software reliability: measurement, prediction, application. New York: McGraw-Hill; 1987.
9. Lyu MR, editor. Handbook of software reliability engineering. New York: McGraw-Hill; 1996.
10. Gaffney JD, Davis CF. An approach to estimating software errors and availability. SPC-TR-88-007, version
1.0,1988.
11. Dalal SR, and Ho YY. Predicting later phase faults knowing early stage data using hierarchical Bayes models.
Technical Report, Telcordia Technologies, 2000.
12. Cook TD, Cooper H, Cordray D, Hartmann H, Hedges L, Light R, Louis T, Mosteller F. Meta-analysis for explanation: a casebook. New York: Russell Sage Foundation; 1992.
13. Singpurwalla ND, Wilson SP. Software reliability modeling, Int Stat Rev 1994; 62 (3): 289-317.
14. Gokhale S, Marinos P, Trivedi K. Important milestones in software reliability modeling. In: Proceedings of Software Engineering and Knowledge Engineering (SEKE '96), 1996. p. 345-52.
15. Jelinski Z, Moranda PB. Software reliability research. In: Statistical computer performance evaluation. New York:
Academic Press; 1972. P.465-84.
16. Shooman ML. Probabilistic models for software reliability prediction. In: Statistical computer performance
evaluation. New York: Academic Press; 1972. P.485-502.
17. Schneidewind NF. Analysis of error processes in computer software. SIGPLAN Notices 1975; 10(6): 337-46.
18. Musa JD. A theory of software reliability and its application. IEEE Trans Software Eng 1975; SE-1(3): 312-27.
19. Moranda PB. Predictions of software reliability during debugging. In: Proceedings of the Annual Reliability and Maintainability Symposium, Washington, DC, 1975. p. 327-32.
20. Goel AL, Okumoto K. Time dependent error detection rate model for software and other performance measures. IEEE Trans Reliab 1979; R-28 (3): 206-11.
21. Schick GJ, Wolverton RW. Assessment of software reliability. In: Proceedings in Operations Research. Würzburg-Wien: Physica-Verlag; 1973. p. 395-422.
22. Crow LH. Reliability analysis for complex repairable systems. In: Proschan F, Serfling RJ, editors. Reliability and
biometry. Philadelphia: SIAM; 1974.p. 379-410.
23. Yamada S, Ohba M, Osaki S. S-shaped reliability growth modeling for software error detection. IEEE Trans Reliab 1983; R-32 (5): 475-8.
24. Littlewood B. Stochastic reliability growth: a model for fault removal in computer programs and hardware designs. IEEE Trans Reliab 1981; R-30 (4): 313-20.
25. Musa JD, Okumoto K. A logarithmic Poisson execution time model for software reliability measurement. In: Proceedings of the Seventh International Conference on Software Engineering, Orlando (FL), 1984. p. 230-8.
26. Miller D. Exponential order statistic models of software reliability growth. IEEE Trans software Eng 1986; SE-
12(1):12-24.
27. Gokhale S, Lyu M, Trivedi K. Software reliability analysis incorporating debugging activities. In: Proceedings of the International Symposium on Software Reliability Engineering (ISSRE '98), 1998. p. 202-11.
28. Dalal SR, McIntosh AM. When to stop testing for large software systems with changing code. IEEE Trans Software Eng 1994; 20: 318-23.