This document describes a simplified engineering approach to computing software reliability, developed for and applied to the Space Shuttle Primary Avionics Software System (PASS). The approach models the software as uniform layers representing releases and computes reliability estimates from failure data for each layer over time. It was found to predict reliability more accurately than a complex statistical model. Its key advantages are that it accounts for changes in reliability characteristics over time due to process improvements, and that it can estimate reliability before any failures occur, based on the relative size of software releases.
Software Reliability For Engineers - J.K.Orr 2015-09-23
1. Software Reliability For Engineers
James K. Orr
Independent Consultant
jkorr@gatech.edu
Copyright 2015 By James K. Orr, 9/23/2015
2. Introduction
• This presentation presents a very simplified approach to computing software reliability: an engineering approach as opposed to a complex statistical approach.
• This approach evolved from analysis of the Space Shuttle Primary Avionics Software System (PASS), the software that controlled the Space Shuttle from pre-launch, through ascent, on-orbit, and entry to landing.
• The approach may be limited to similar systems (large-scale critical software with relatively few users).
• If you would like assistance in applying this method, please contact me at jkorr@gatech.edu.
3. Contents
• Introduction
• Contents
• Evolution of Space Shuttle PASS Alternate Reliability Model in 1989
• Generalized Approach For Software Reliability With Examples, Equations, and Simulation
• Sample Results From Space Shuttle PASS Reliability Analysis
• References
4. Evolution of Space Shuttle PASS Alternate Reliability Model in 1989
5. Requirement: Reliability Prediction
• Following the loss of the Space Shuttle Challenger and crew in 1986, IBM Federal Systems Division – Houston, as the Space Shuttle Primary Avionics Software System developer, was assigned a “return to flight” action to model the software reliability of “loss of vehicle and crew” latent errors (defects).
• This was a two-step process. First, compute software reliability (time to next failure). Second, model the probability that a failure occurring during flight would be a “loss of vehicle and crew” latent error (defect).
• Discussion in this paper focuses on the first activity: computing software reliability (time to next failure).
6. “Professional” Approach
• IBM Federal Systems Division – Houston contacted multiple experts in software reliability. Ultimately, N. F. Schneidewind and his “SMERFS” software reliability estimation tool were selected to model the Space Shuttle PASS reliability.
• See Reference 1 for one paper that documents the results of this work. The link with Reference 1 also connects to a full list of papers, etc., by Dr. Schneidewind.
7. Motivating An “Engineering” Approach
• During this time (1986 – 1989), I was working as senior technical staff at IBM Federal Systems Division – Houston. Roles included:
– Project Coordination and Technical Leadership, 1984-1988. Led initiatives to support the high flight rate in the period leading to the loss of Space Shuttle Challenger in January 1986. Oversaw initiatives to implement mandatory changes to the On-Board Shuttle Software (PASS) prior to return to flight in September 1988. Earned IBM's highest technical achievement award, an outstanding achievement award for shuttle software engineering, development, and verification technical leadership.
– Member of the IBM/NASA Shuttle Flight Software (PASS) Discrepancy Review Board, 1981-1992. Maintained rigor of the Discrepancy Review Board process, ensuring identification and correction of process escapes, and identification and correction of similar errors due to prior process deficiencies found by audits.
• In these roles, I reviewed the results produced by IBM and the “SMERFS” software reliability estimation tool. A key part of the process was to separate software into “layers” based on the development cycle for each release of PASS flight software. In comparing the failures (Flight Software Discrepancies) by development cycle to data being processed by the IBM/NASA Shuttle Flight Software (PASS) Discrepancy Review Board, I observed significant differences in time to failure by release.
8. “Engineering” Approach
• FROM NOTES DATED 04/16/1990 (WITH ADDED HISTORICAL INSIGHT)
– Analysis was done by “eyeballing” the time between failures for recent releases. An “engineering judgment” rough estimate of the time to next failure was made for each release. This was compared to the values produced by “SMERFS” as well as the prototype for the Alternate Reliability Model.
• Data on the next page has been updated with actual time to next failure as of 03/14/2007.
9. Evaluation Of “SMERFS”

Operational Increment | Engineering Judgment, Time to Next Failure (Days) | “SMERFS” 09/30/89 MTTF (Days) | Alternate Reliability Model 12/01/89 MTTF (Days) | Next Failure After 12/89, Actual MTTF (Days), Added 03/14/2007
4  | 1500 | 167 | 970  | 2262
5  |  700 | 164 | 729  | 2746
6  |  600 | 146 | 539  |  203
7  |  800 | 291 | 864  |  327
7C | 1000 | 466 | 1458 | 4484
8A |  700 | 455 | 1461 | 2958
8B |  350 | 256 |  351 | 6393
8C |  180 | 420 |  143 |  185
Composite (Combine All Above) | 63 | 30 | 60 | 67

See Reference 2, page 16 to identify the time frame for Operational Increments. See pages 43 – 53 for significant process improvements applied for Operational Increments OI-8A, OI-8B, and OI-8C.
10. Evaluation Of “SMERFS”
• The Alternate model was developed to better match the engineering judgment values. In hindsight, looking back after 17-plus years (in 2007), the engineering judgment was most accurate (6% conservative), followed by the Alternate Reliability Model (10% conservative). SMERFS was conservative, but in error by 55% relative to the actual results.
• RATIONALE FOR “ALTERNATE RELIABILITY MODEL” (from notes dated 04/16/1990)
– First, subtle differences existed between the Predicted Time Between Failures using “SMERFS” (Statistical Modeling and Estimating of Reliability Functions for Software) and the actual data. The key difference that was unacceptable was the skew in probability of the next error occurring on older OIs (for example, OI-4) rather than on recent OIs (for example, OI-8C). Actual data showed the opposite trend.
– Second, “SMERFS” required significant historical data before producing accurate results, making it inappropriate for predicting in advance the reliability of unreleased systems.
11. Effects Of Process Improvement
• Candidate reasons for the mismatch between “SMERFS” and reality: see Reference 3, page 9. This shows a very large spike in product error rate for Operational Increments OI-1 and OI-2. See page 16 for tabular data.
• Continual process improvements through OI-8C may have accounted for the error in “SMERFS” predictions.
[Chart: product error rate by release, labeled OI-1 through OI-8C and OI-25, with flights STS-1, STS-2, and STS-5 marked; OI-25 reflects process issues during the transition from IBM to Loral.]
12. Summary of the method
• The Space Shuttle “Alternate Reliability Model” program was developed for the Space Shuttle Primary Avionics Software System, a human-flight-rated system of approximately 450,000 source lines of code (excluding comment lines). Operational Increment release development over 15 years demonstrated that the reliability characteristics per unit of changed code for each release are very consistent, with variations explainable by process deviations or other special causes of variation.
13. Summary of the method
• The Space Shuttle “Alternate Reliability Model” program computes software reliability estimates in complex software systems even as the reliability characteristics change over time.
• The method and tool work in two independent modes.
– First, when failure data is available, the tool estimates two model coefficients for each grouping of software being analyzed. These two model coefficients can then be used to calculate the software reliability characteristics of each grouping of software, and also the total software reliability for all groupings combined.
– Second, the two model coefficients are also normalized based on relative size. In appropriate circumstances (e.g., the software is produced with essentially the same equivalent quality process), estimates of software reliability can be made prior to any failures occurring, based on the relative size of the software.
• Once the two model coefficients are determined, reliability and failure information over a user-defined time interval can be computed.
14. Required Inputs, Mode #1
Mode #1 (use actual failure data to compute reliability)
• Define software as “uniform layers.” These layers represent functionally whatever characteristic is desired to be modeled. In the Space Shuttle Primary Avionics Software System context, each layer represents all new/changed software delivered by each release. In the Constellation context, layers could be broken down by function and criticality of the software.
• Relative size measure of each layer. In the Space Shuttle Primary Avionics Software System context, relative size is defined by new/changed source lines of code (SLOCs). In the Constellation context, relative size could be based on number of requirements, function points, or any other measure desired. Correlation between the Space Shuttle Primary Avionics Software System and Constellation could be performed by comparing the relative functional size of each software function to the PASS SLOCs and the Constellation size parameter of choice.
• Data on each failure:
– Date of failure
– “Layer” of software that was the source of the failure
15. Required Inputs, Mode #2
Mode #2 (use relative size and historical data to compute reliability)
• Define software as “uniform layers.” These layers could represent whatever is desired to be modeled.
• Relative size measure of each layer.
• Expected relative quality level compared to historical data (could be subjective).
All modes (to produce reliability calculations)
• Date or date range. Typically, this would correspond to (a) the date of flight at which you want a Mean Time To Failure, or (b) any range of dates over which you want to determine the expected number of failures (expressed as a scalar, which in some contexts would represent the likelihood of a failure in that interval).
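To make the two modes concrete, here is a minimal sketch of the inputs as data structures (Python). The names, fields, and the quality-factor scaling are my assumptions inferred from the mode descriptions above, not the original tool's interface; the slides mention two model coefficients per grouping, but only the K coefficient is shown here since the second is not defined in this transcript.

```python
# Illustrative input structures for the two modes; all names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    name: str                  # e.g., an Operational Increment such as "OI-8C"
    rel_size: float            # new/changed SLOCs, requirement count, etc.
    failure_days: List[float] = field(default_factory=list)  # Mode #1 only

@dataclass
class Mode2Estimate:
    historical_k_per_size: float  # K_layer / rel_size observed on past releases
    quality_factor: float = 1.0   # expected quality vs. history (subjective)

    def k_for(self, layer: Layer) -> float:
        # Scale the historical, size-normalized coefficient to the new layer.
        return self.historical_k_per_size * layer.rel_size * self.quality_factor
```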
16. MATHEMATICAL BASIS
“ALTERNATE RELIABILITY MODEL”
The expected number of failures at any time
• X = K_layer [ln(t) - ln(t_ref)] for t > t_ref
• Where X = expected number of software failures
• K_layer = a single constant that characterizes the grouping (“layer”) of software
• t_ref = a reference time in days shortly after release (Configuration Inspection date), typically on the order of 90 days. 90 days was selected as the time normally required to reconfigure a system for flight and begin its use.
– The 90 days makes operational sense in the Space Shuttle Program Primary Avionics Software System (PASS) environment.
– The 90 days avoids mathematical issues as t takes on small values, ultimately approaching 0.
• t = time in days after release (Configuration Inspection date). t varies from approximately 90 days up to approximately 10,000 days for Space Shuttle Primary Avionics Software System (PASS) data.
• For every pair of successive failures, a value of K can be computed. Values computed for successive pairs may vary by a factor on the order of 100.
– K for failures N to N+1 = 1 / (ln(t at failure N+1) - ln(t at failure N))
– In the Space Shuttle Primary Avionics Software System data, failures are sometimes reported on the same day. Mathematically, the above equation does not work for this situation (the denominator becomes zero). The approach adopted was to treat all failures occurring within 12 days (a window that evolved through multiple iterations) in one K calculation.
– If two failures fall within one 12-day interval:
• K for failures N to N+2 = 2 / (ln(t at failure N+2) - ln(t at failure N))
– If three failures fall within one 12-day interval:
• K for failures N to N+3 = 3 / (ln(t at failure N+3) - ln(t at failure N))
– Etc.
• The above calculations give a series of K terms, each associated with a time interval. A single K_layer value for each “layer” (set of released changes) is calculated by weighting each K by its associated delta time interval (see the sketch following this page). Note the weighting is simplified here to the case where all failures occur more than 12 days apart. Note also that the method assumes a failure at the current date for each layer as the calculations are performed, to ensure a conservative estimate is produced.
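A minimal sketch of the weighted K_layer calculation described above, assuming failure times are expressed in days after release (all greater than t_ref) and applying the 12-day grouping rule. The function and variable names are illustrative, and the conservative assumed failure at the current date is omitted for brevity:

```python
import math

def k_layer(failure_days, group_window=12.0):
    """Weighted K_layer from failure times (days after release, all > t_ref).

    Failures closer together than group_window days are grouped into a single
    K calculation, mirroring the PASS 12-day engineering rule.
    """
    times = sorted(failure_days)
    if len(times) < 2:
        raise ValueError("need at least two failures to compute K_layer")
    segments = []  # (K for this interval, delta-t used as its weight)
    i = 0
    while i < len(times) - 1:
        j = i + 1
        # Extend the interval while failures fall within the 12-day window.
        while j < len(times) - 1 and times[j] - times[i] <= group_window:
            j += 1
        n = j - i  # number of failures spanned by this interval
        k = n / (math.log(times[j]) - math.log(times[i]))
        segments.append((k, times[j] - times[i]))
        i = j
    total_dt = sum(dt for _, dt in segments)
    # Weight each K by its associated delta time interval.
    return sum(k * dt for k, dt in segments) / total_dt

# Example: five failures at 100, 250, 255, 900, and 3000 days after release.
# The failures at 250 and 255 days fall in one 12-day window and are grouped.
print(k_layer([100.0, 250.0, 255.0, 900.0, 3000.0]))
```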
17. MATHEMATICAL BASIS
“ALTERNATE RELIABILITY MODEL”
• Standard deviation is computed directly from
all of the computed F_factor values.
• Normalized standard deviation (SD_factor) is
computed by dividing the standard deviation
by the composite final F_factor (restated in
symbols below).
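Restating the two bullets above in symbols. The notation is mine, and the slides do not specify whether a population or sample standard deviation is used; a population form is assumed here:

```latex
\sigma_F = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(F_i - \bar{F}\right)^{2}},
\qquad
\mathrm{SD\_factor} = \frac{\sigma_F}{F_{\mathrm{final}}}
```

where F_1, ..., F_n are the computed F_factor values, F̄ is their mean, and F_final is the composite final F_factor.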
22. Summary Noise Calculations
• Ideal Data uses integer dates chosen as closely as
possible to produce exactly even integer failures
using the model equations.
• Noise 1 Data uses random variance in dates to
produce a Standard Deviation in F_factor of
about 17%.
• Noise 2 Data uses random variance in dates to
produce a Standard Deviation in F_factor of
about 28%.
• The above are simply samples; they have no other
significance.
24. Key Issues
• Failures during development must be separated from failures in post-release
operations.
• Ideally, separate post-release operational failures by the completion date of each
release's content.
• Selection of t_ref is critical in that it must not be near 0. Zero time is normally when
verification testing is completed.
– Based on Space Shuttle PASS experience, a value of 90 days is recommended.
This was the time from when verification on an Operational Increment was
completed until a flight-specific reconfigured release was available to field
users (crew training, Software Avionics Integration Laboratory testing).
– Alternatively, the time at which the first failure occurs could also be used (if
significantly greater than 0).
• The method does not work when treating single failures occurring on the same day
or very close together.
– The Space Shuttle PASS engineering solution was to group all failures occurring
within 12 days into a single calculation with N failures between the
two time points.
25. Test If This Approach Is Valid
• This model may or may not work for any
specific system and set of failure data.
• The most direct test is to plot failures versus
time from release verification completion, with
time on a logarithmic scale (see the sketch below).
– If this plot is approximately linear, then this
approach (PASS “Alternate Reliability Model”) is
valid.
– If there are failures at delta times very near 0,
these should be ignored for modeling purposes.
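A minimal sketch of this validity check, using illustrative failure dates (not PASS data). If the points fall roughly on a straight line, the logarithmic model is a reasonable fit:

```python
# Plot cumulative failures against time on a log scale; approximate linearity
# supports applying the "Alternate Reliability Model" to this data set.
import matplotlib.pyplot as plt

failure_days = [100, 250, 400, 900, 1800, 3600, 7000]  # days after verification
cumulative = list(range(1, len(failure_days) + 1))

plt.semilogx(failure_days, cumulative, marker="o")
plt.xlabel("Days after release verification completion (log scale)")
plt.ylabel("Cumulative failures")
plt.title("Validity check: approximately linear implies the model applies")
plt.show()
```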
26. Effect Of Not Isolating Each Release
27. Extreme Samples Of Model Equations
• Sample 1 has random variation in dates for failures,
plus assumes two failures at the second failure point to
demonstrate multiple failures within a short time
period (typically within 12 days). It uses the same dates
as the Noise 2 Sample for the first four failures only.
• Sample 2 has random variation in dates for failures. It
uses the same dates as the Noise 2 Sample for the first
four failures only.
• Sample 3 has random variation in dates for failures. It
uses the same dates as the Noise 1 Sample for the first
four failures only.
38. Sample Results From Space Shuttle PASS Reliability Analysis
39. Estimate K_factor
• The following four charts illustrate how K_factor can be predicted from other
software metrics such as Product Error Rate (Product Errors per 1000
new/changed source lines of code).
• Data is shown from OI-20 (released in 1990) to OI-30 (released in 2003).
These were large releases with 7 to 20 years of service life. A relatively stable
software development and verification process was used, except for OI-25 (see
Reference 2 for more information).
• Page 40 shows Product Error Rate data from Reference 3. Page 41 shows PASS
K_factor per 1000 new/changed source lines of code from my personal notes.
• Page 42 tabulates key information. Page 43 plots the relationship between
K_factor per KSLOC and Product Error Rate (Product Errors per KSLOC).
• This relationship could be used to estimate the reliability of a future system if an
estimate of Product Error Rate is known based on prior process performance
(see the sketch after this list).
– K_factor = (K_factor per KSLOC as a function of Product Error Rate) ×
KSLOC of system
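A minimal sketch of this estimation step, assuming a simple linear fit of K_factor per KSLOC against Product Error Rate. The slope and intercept below are placeholders, not values fitted from the PASS charts:

```python
def estimate_k_factor(ksloc, product_error_rate, slope=0.5, intercept=0.1):
    # K_factor per KSLOC modeled as a linear function of Product Error Rate
    # (errors per KSLOC); coefficients are illustrative placeholders only.
    k_per_ksloc = slope * product_error_rate + intercept
    return k_per_ksloc * ksloc

# Example: a hypothetical 100 KSLOC system with 0.4 product errors per KSLOC.
print(estimate_k_factor(ksloc=100.0, product_error_rate=0.4))
```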
40. Reference 3, Page 16
[Figure: Product Error Rate data reproduced from Reference 3, page 16, focusing on the AP-101S (upgraded General Purpose Computer) major releases.]
41. PASS “Alternate Reliability Model” K_factor
[Figure: K_factor per 1000 new/changed source lines of code by Operational Increment, OI-20 through OI-30 (including OI-26B).]
44. Discussion Of Results For PASS
• Alternate Reliability Model coefficients were computed for OI-3 and later systems using
post-flight failures. For OI-30, OI-32, OI-33, and OI-34, the calculated values were adjusted
because the assumption of an additional failure on the day of the analysis gave unrealistically
high values. The Alternate Reliability Model coefficients were adjusted to a value per unit of
size (1000 uncommented new/changed source lines of HAL/S code, or KSLOC) that was
consistent with other similar recent OIs.
• For Release 16 (STS-1) through OI-2, failure data exists only for the combined releases, not
separately. Alternate Reliability Model coefficients were computed by comparing failures per
year for the combined releases to Alternate Reliability Model output for assumed coefficient
values. The coefficients were derived based on a constant value per unit of size (KSLOC).
Additional unique analysis produced the Alternate Reliability Model coefficient showing the
variability of the predicted failures per year.
• Analysis focused on flown systems. Data from Operational Increments that were not flown
was combined with the next flown increment. As an example, failures and KSLOCs from
OI-7C and OI-8A are included in the calculation of Alternate Reliability Model coefficients for
OI-8B. For simplicity, data from OI-8F was combined under OI-20 even though OI-8F
supported flights, due to OI-8F's small size and unique nature. OI-8F made operating system
changes to support the AP-101S General Purpose Computer upgrade.
48. References
1. Schneidewind, N. F. and Keller, T. W., "Application of Reliability Models to the
Space Shuttle," IEEE Software, July 1992, pp. 28-33.
– See the list of papers by N. F. Schneidewind at
• http://faculty.nps.edu/vitae/cgi-bin/vita.cgi?p=display_more&id=1023567911&field=pubs
2. Orr, James K. and Peltier, Daryl, Space Shuttle Program Primary Avionics Software System
(PASS) Success Legacy - Major Accomplishments and Lessons Learned Detail Historical
Timeline Analysis, August 24, 2010, NASA JSC-CN-21350, presented at the NASA-Contractors
Chief Engineers Council 3-day meeting, August 24-26, 2010, Montreal, Canada.
– Free at
• http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100028293.pdf
3. Orr, James K. and Peltier, Daryl, Space Shuttle Program Primary Avionics Software System
(PASS) Success Legacy - Quality & Reliability Data, August 24, 2010, NASA JSC-CN-21317,
presented at the NASA-Contractors Chief Engineers Council 3-day meeting, August 24-26,
2010, Montreal, Canada.
– Free at
• http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20100029536.pdf