International Journal of Mathematics and Statistics Invention (IJMSI)
E-ISSN: 2321-4767, P-ISSN: 2321-4759
www.ijmsi.org | Volume 5, Issue 3 | March 2017 | PP 31-36
Job Failure Analysis in Mainframes Production Support
Ranjani K V¹, R. Roseline Mary²
¹(Department of Computer Science, Christ University, Bangalore, India)
²(Department of Computer Science, Christ University, Bangalore, India)
Abstract: A major part of batch processing on mainframe computers consists of several thousand batch jobs that run every day. This network of jobs runs daily to update day-to-day transactions. Frequent failures can cause long delays in the batch and degrade the performance and efficiency of the application. Applying a permanent solution to frequently occurring job failures avoids batch delays and improves the performance and efficiency of the application. In this paper, we analyze the frequently occurring batch job failures recorded in the Known Error Database (KEDB) over the past year across different categories. The frequently failed jobs are categorized by application, failure type, job runs and resolution. Results for each category are obtained with the Weka tool. From the various results obtained, it can be concluded that the frequent failures occur in the MSD application. Further analysis of these frequently failed jobs showed that data and network issues cause the majority of job failures, and that most of the failing jobs were daily processing jobs. To fix a failure, the job was resolved by restarting it with overrides or by restarting it from the top.
Keywords: Batch jobs, Failure analysis, Known Error Database (KEDB), Resolution, Weka
I. INTRODUCTION
Batch processing on mainframe computers consists of several thousand batch jobs that run every day. This network of jobs handles the day-to-day business transactions that are updated overnight, with interrelations that require scheduling and prioritizing the jobs so that all batch jobs run in the scheduled order within the Service Level Agreement (SLA). The job scheduler identifies the times at which a job runs on specific days, and the dependencies of each batch job are also visible in the scheduler. Running scheduled jobs on specific days and at different times ensures that updates happen on business holidays as well, without any loss of data. The execution of some jobs depends on other jobs, because the output data of the first job is used as input to the second job. This data dependency is also associated with various upstream systems and vendors. The scheduler also identifies the status of a job: under execution, waiting for other jobs to complete, waiting for data or a file to arrive from an upstream system or vendor, or in error if the job has failed. A dependent job in a failed state delays or postpones the job run, which further delays the application batch and the downstream jobs that depend on the failed job. The resolution of a failed job is found by referring to the Known Error Database (KEDB). The KEDB contains details of previously failed jobs, including the job name, application, return code, error message, resolution and the person who resolved it. If the job details are not present in the KEDB, it is a first-time failure for that particular job and a record is added for future reference. Based on errors that previously occurred in production, job failures are categorized by input arrival times and by the type of failure that occurred. Based on the failures stored or updated in the KEDB, a failure is categorized into types such as technical issue, network issue, contention, space issue, data issue, cancellation, new job, developer mistake or incorrect scheduling. From these categorizations, the analysis determines how many times a job failed due to the same error and what resolution was applied to fix it.
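As a rough sketch of this lookup-and-record workflow (an illustration, not code from the paper), the KEDB can be pictured as a keyed store of failure records; the field names follow the description above, while the key choice, class layout and sample values are assumptions:

import java.util.HashMap;
import java.util.Map;

public class KnownErrorDatabase {
    // Fields mirror the KEDB record described above.
    record KnownError(String jobName, String application, String returnCode,
                      String errorMessage, String resolution, String resolvedBy) {}

    // Assumed key: a job name plus return code identifies a known failure.
    private final Map<String, KnownError> records = new HashMap<>();

    /** Returns the recorded resolution details, or null for a first-time failure. */
    public KnownError lookup(String jobName, String returnCode) {
        return records.get(jobName + "/" + returnCode);
    }

    /** A first-time failure is recorded for future reference. */
    public void add(KnownError e) {
        records.put(e.jobName() + "/" + e.returnCode(), e);
    }

    public static void main(String[] args) {
        KnownErrorDatabase kedb = new KnownErrorDatabase();
        // Hypothetical job name, return code and resolution, for illustration only.
        if (kedb.lookup("MSDDLY01", "S0C7") == null) {
            kedb.add(new KnownError("MSDDLY01", "MSD", "S0C7",
                    "Data exception in input file", "Restart from top", "analyst"));
        }
    }
}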
To analyze the pattern of the job failures, the KEDB entries for jobs that failed over the past year were obtained. With these, the frequent job failures are analyzed along the following categories:
- The type of job that failed (e.g. processing, transmission, database).
- The job failure type (e.g. data issue, network issue, database, deadlock).
- The action taken to fix the job failure at the time (e.g. the job was restarted from the top, restarted from the failed step, or marked as complete).
- The job names.
Based on this categorization, analysis is done to improve performance and avoid recurring failures of the system by implementing or suggesting a permanent fix for the specific job. The analysis is done with the help of the Weka tool: the job failure dataset for the past year is loaded into Weka, and the results obtained are tabulated in this research paper.
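As a concrete illustration of this step (a sketch under assumptions, not the paper's actual script), a KEDB export could be saved as a Weka ARFF file and summarized with Weka's Java API; the file name and attribute names below are hypothetical:

import weka.core.AttributeStats;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class KedbFrequency {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF export of one year of KEDB failures, with nominal
        // attributes such as Application, FailureType, JobType, Resolution.
        Instances data = DataSource.read("kedb_failures_1yr.arff");

        // Frequency of each failure type, mirroring the categorized counts
        // the paper tabulates from Weka.
        int idx = data.attribute("FailureType").index();
        AttributeStats stats = data.attributeStats(idx);
        for (int i = 0; i < data.attribute(idx).numValues(); i++) {
            System.out.printf("%-10s %d%n",
                    data.attribute(idx).value(i), stats.nominalCounts[i]);
        }
    }
}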
II. PROBLEM DESCRIPTION
Batch jobs on the mainframe run every day to update day-to-day transactions, and there are frequent failures. These failures can cause long delays in the batch as well as degrade the performance and efficiency of the application. To avoid batch delays and the degradation of the application's performance and efficiency, a permanent solution can be applied to frequently occurring job failures. This is done by analyzing the pattern of the job failures and the resolution steps taken to fix them. Once the frequency pattern of a failure and its resolution steps are analyzed, the failure can be avoided in future by fixing it permanently, comparing the pattern of the job failure against the historical failures recorded by the project's support team, which keeps a record of failed jobs.
III. LITERATURE REVIEW
In [1], the paper "Job Failure Analysis and its Implications in a Large-scale Production Grid" presents an analysis of job failures in a large-scale, data-intensive grid. Job failures in large-scale heterogeneous environments have a variety of possible causes, including system problems due to node, disk or network issues. Errors also occurred at different levels as the software stack grew more and more complex. The job failures are grouped into three periods of production, and the inter-arrival times and life spans of failed jobs are characterized. Different failure types are distinguished and analyzed. Based on the failure pattern, historical failures are taken into account in decision making. The analysis also briefly addresses cooperation and accountability issues and evaluates effectiveness and feasibility.
In a COBOL banking application, critical outages were traced to common causes. "Towards Assuring Non-Recurrence of Faults Leading to Transaction Outages – An Experiment with a Stable Business Application" [2] observes that reducing the cost and effort of maintaining a legacy business application is a challenge, and that capturing faults at an early stage of the software lifecycle is known to prevent defects in production. Structural analysis and control-flow analysis of the COBOL code were performed to detect these common causes, automatically locating all occurrences of the faults that could potentially lead to multiple failures in production.
In [3], "Towards a Training-Orientated Adaptive Decision Guidance and Support System" determines that strategic approaches are needed to troubleshoot system failures, first identifying the failing component in order to solve the problem. The paper addresses the domain of DB2 administration on z/OS mainframe systems. The framework dynamically extracts knowledge from various correlated data sources containing system-related data and from the problem-solving procedures of human experts. It applies text and data mining techniques for knowledge extraction, a rule-based system for knowledge representation and problem categorization, and a case-based system for providing decision support.
Based on error-code categorization of job failures, "Getting Back Up: Understanding How Enterprise Data Backups Fail" [4] reports that the jobs running on each system are monitored and checked for successful completion. Error characterization is performed across the production, development and test environments, by the number of unique error codes and by the most frequent error codes. Error causes include misconfigurations, system errors and, unusually, informational messages. These characterizations are made with historical data, and the analysis is used for decision support.
IV. METHODS AND MATERIALS
To analyze the pattern of the batch job failures on mainframe computers, the KEDB records of the jobs that failed during the past one year are obtained. The most frequent job failures were recorded and analyzed with the Weka tool under the following categorizations (a tabulation sketch follows the list):
Application
Failure-type
Resolution
Job Type
Job runs
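As a rough stand-in for the class-distribution counts that Weka reports for each of these categories, the sketch below tabulates how often each value occurs in a chosen column and converts the counts to percentages. The sample records and the distribution helper are hypothetical illustrations.

```python
from collections import Counter

# Hypothetical failure records: (application, failure_type, job_runs).
records = [
    ("MSD", "Data", "Daily"),
    ("MSD", "Network", "Daily"),
    ("IMS", "Data", "Weekly"),
]

def distribution(records, index):
    """Percentage of records per value of the chosen category column."""
    counts = Counter(r[index] for r in records)
    total = sum(counts.values())
    return {value: round(100 * n / total, 2) for value, n in counts.items()}

print(distribution(records, 1))  # e.g. {'Data': 66.67, 'Network': 33.33}
```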
1. Application: In this research paper the failed jobs are categorized by application, based on the different servers on which the jobs run.
2. Failure-Type: The job failures are classified as Data, Network, Delay and Deadlock.
a) Data: The failure is classified as a data issue when there is any discrepancy in the file received. Data issues can arise for the following reasons (a validation sketch follows the list):
Incorrect file received from the upstream or the vendor.
Junk values in the file which have caused data inconsistency.
The format of the file is not as expected; for example, the dates or the values in the file are not in the expected format.
Missing values in the file, or the file is empty.
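These data-issue checks lend themselves to a simple validation pass before a job consumes its input file. The sketch below is a minimal, assumed example: the record layout (a date followed by a numeric amount) and the validate_feed helper are hypothetical.

```python
import re

# Assumed layout for illustration: each record is "YYYY-MM-DD,<numeric amount>".
RECORD_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2},\d+(\.\d+)?$")

def validate_feed(lines):
    """Flag the data issues described above: empty file, missing values, bad format."""
    if not lines:
        return ["file is empty"]
    issues = []
    for lineno, line in enumerate(lines, start=1):
        if not line.strip():
            issues.append(f"record {lineno}: missing values")
        elif not RECORD_PATTERN.match(line.strip()):
            issues.append(f"record {lineno}: unexpected format or junk values")
    return issues

print(validate_feed(["2024-01-31,100.50", "31/01/2024,abc"]))
```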
b) Network: Batch processing interacts with other systems mainly through a network, transmitting or receiving files either from one server to another or through DB2 tables. The failure is classified as a network issue when there is any problem in this interaction. Network issues can arise for the following reasons:
Server unavailability while extracting or uploading the file during the job execution.
Resource unavailability, which could be because the file or the table is being used by other jobs.
The system or the application being down while the user is trying to access the data.
c) Delay: When there is any feed delay from the upstream or the vendor, the batch jobs go into a failed status. The delay can happen for the following reasons:
The file is very large (that is, it contains more records than expected).
A scheduled release activity, either in our application or in the upstream.
The jobs going into contention, waiting for files which are in use by other jobs.
d) Deadlock: When the batch executes concurrently, a deadlock can happen when one job trying to access a file or database is waiting on another job to release its lock on that file or database; a generic illustration of this lock-ordering hazard follows.
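The lock-ordering pattern behind such a deadlock is generic, so the Python sketch below illustrates it with two threads acquiring two locks in opposite order; the threads stand in for batch jobs and the locks for the file and database locks. This is only an illustration of the hazard, not the mainframe locking mechanism, and a timeout is used so the demonstration terminates.

```python
import threading

file_lock, db_lock = threading.Lock(), threading.Lock()

def job(first, second, name):
    # Each "job" grabs one resource, then waits for the other -- the
    # classic circular wait when two jobs take the locks in opposite order.
    with first:
        # The timeout stands in for the scheduler eventually flagging the job;
        # depending on thread interleaving, the run may also complete cleanly.
        if second.acquire(timeout=1):
            second.release()
            print(name, "completed")
        else:
            print(name, "deadlocked: waiting on a lock held by the other job")

a = threading.Thread(target=job, args=(file_lock, db_lock, "JOB-A"))
b = threading.Thread(target=job, args=(db_lock, file_lock, "JOB-B"))
a.start(); b.start(); a.join(); b.join()
```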
3. Job-Type: The job type specifies the kind of job in which the failure happened while the batch was running. The failed jobs recorded in the past year are categorized into the following:
a) Processing: The mainframe batch jobs that run during the night update the data or the transactions that happened during the day. The processing jobs have failed for the following reasons:
Insufficient storage while updating the day-to-day transactions into a dataset or DB2 tables.
Null values, when an empty file or dataset is received or data is missing in a specific column.
Incorrect data formats, when the data received is not in the usual format, for example wrong date formats or characters in place of numeric values.
Load balancing, when the jobs are automatically submitted on different CPUs on which the specific application jobs do not have access to run.
b) FTP Transmission: File Transfer Protocol (FTP) transmission is the process of transmitting files between servers.
c) NDM: Network Data Mover (NDM) transmission is the process of transmitting files between servers over a direct connection, with the server details installed at both ends before any details are transmitted between the servers. This type of transmission is much faster and more secure than FTP transmission.
The FTP transmission and NDM transmission jobs have failed for the following reasons (a transfer sketch with basic error handling follows the list):
Connectivity/access issue: when the server details are not installed at both ends, the jobs fail with an access issue while accessing the servers.
File unavailability: when the file is not available because the file generation at the upstream is still in progress.
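For the FTP case, Python's standard-library ftplib can illustrate how these two failure modes surface as exceptions. The host, credentials and file names below are placeholders, and the sketch is only an analogy: the real transmissions run under the mainframe scheduler, not in Python.

```python
import ftplib

def fetch_file(host, user, password, remote_name, local_name):
    """Fetch a file over FTP, surfacing the two failure modes above."""
    try:
        with ftplib.FTP(host) as ftp:
            ftp.login(user, password)            # fails on an access issue
            with open(local_name, "wb") as out:  # error_perm if the remote
                ftp.retrbinary(f"RETR {remote_name}", out.write)  # file is absent
    except ftplib.error_perm as exc:
        print("access issue or file unavailable:", exc)
    except OSError as exc:
        print("connectivity issue:", exc)

# Placeholder usage; host, credentials and file names are hypothetical:
# fetch_file("ftp.example.com", "user", "secret", "feed.dat", "feed.dat")
```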
d) Database: The database jobs append the latest data into the database and take a backup of those databases into datasets for future reference. These jobs have failed for the following reasons:
Resource/dataset unavailability: when the specific database or table is in use by other batch jobs.
Space issues: while updating the day-to-day transactions into a dataset or DB2 tables.
4. Job-Runs: The jobs are categorized as Daily, Weekly or Monthly based on the pattern in which they run; a heuristic classification sketch follows the list.
a) Daily: A job is categorized as daily when it runs every business day, that is Monday to Friday or Tuesday to Saturday.
b) Weekly: A job is categorized as weekly when it runs on any one day of the week.
c) Monthly: A job is categorized as monthly when it runs once a month.
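One way to derive these Job-Runs labels automatically from the run history is a simple counting heuristic over the recorded runs per month; the thresholds in the sketch below are assumptions chosen for illustration, not values from the paper.

```python
def classify_job_runs(runs_per_month):
    """Heuristic: label a job from its average number of runs per month."""
    if runs_per_month >= 20:       # roughly every business day
        return "Daily"
    if runs_per_month >= 4:        # roughly once a week
        return "Weekly"
    return "Monthly"

print(classify_job_runs(22), classify_job_runs(4), classify_job_runs(1))
```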
5. Resolution: The resolution steps taken to resolve a job failure are categorized as follows (a rule-lookup sketch follows the list):
a) Restarted-the-job-from-top: The failed jobs are restarted from the top when they have failed before executing, either due to an access issue or a network connectivity issue.
b) Restarted-the-job-from-failed-step: The failed jobs are restarted from the failed step when the processing of the job is interrupted due to resource unavailability, a network issue, an unexpected return code from the job, etc.
c) Marked-the-job-as-complete: The failed jobs are marked as complete when all the processing has completed and the job has returned an acceptable return code.
d) Restarted-the-job-with-the-overrides: The failed jobs are restarted with the overrides when the job has failed due to a syntax error, a space issue, a file unavailable due to delay, or any other reason for which a manual modification is needed to complete the job successfully.
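These resolution categories amount to a rule table keyed by failure cause, so a recurring failure can be matched to its usual fix. The sketch below encodes such a lookup; the cause labels are illustrative paraphrases of the descriptions above, not codes from the actual KEDB.

```python
# Illustrative mapping from failure cause to the usual resolution,
# following the four categories described in this section.
RESOLUTION_RULES = {
    "access issue":            "Restarted-the-job-from-top",
    "network connectivity":    "Restarted-the-job-from-top",
    "resource unavailable":    "Restarted-the-job-from-failed-step",
    "unexpected return code":  "Restarted-the-job-from-failed-step",
    "acceptable return code":  "Marked-the-job-as-complete",
    "syntax error":            "Restarted-the-job-with-the-overrides",
    "space issue":             "Restarted-the-job-with-the-overrides",
    "file delayed":            "Restarted-the-job-with-the-overrides",
}

def suggest_resolution(cause):
    """Return the usual fix, or escalate when the cause is not yet known."""
    return RESOLUTION_RULES.get(cause, "escalate for first-time analysis")

print(suggest_resolution("space issue"))
```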
V. RESULTS AND DISCUSSION
The results obtained with the Weka tool for the above categorizations are shown below, together with the respective observations.
1. Application: Based on Application, Fig. 1 below shows the classification for the different categories. The MSD application had more job failures than the IMS application over the past one year: the jobs failed in the MSD application 78.61% of the time and in the IMS application 21.38% of the time.
Fig. 1: Application classification for the different categories
2. Failure-Type: Based on Failure-Type, Fig. 2 below shows the Failure-Type classification. The jobs failed due to a data issue 47.48% of the time, due to a network issue 43.08% of the time, due to delay 5.74% of the time and due to deadlock 3.45% of the time.
Fig. 2: Failure-Type classification for the different categories
3. Resolution: Based on Resolution, Fig. 3 below shows the Resolution classification. The failed jobs were resolved by restarting the job from the top 30.81% of the time, by restarting the job from the failed step 32.38% of the time, by marking the job as complete 13.20% of the time and by restarting the job with the overrides 23.58% of the time.
Fig. 3: Resolution classification for the different categories
4. Job-Type: Based on Job-Type, Fig. 4 shows the Job-Type classification. The Processing jobs failed 59.74% of the time, the FTP Transmission jobs 29.87% of the time, the Database jobs 5.34% of the time and the NDM Transmission jobs 5.03% of the time.
Fig. 4: Job-Type classification for the different categories
5. Job-Runs: Fig. 5 below shows the Job-Runs classification. Of the job failures recorded for the past one year, the daily jobs failed 93.08% of the time, the weekly jobs 3.45% of the time and the monthly jobs 3.45% of the time.
Fig. 5: Job-Runs classification for the different categories