All About Testing ERP Software

Bangalore: The implementation of an ERP software system involves changing some important and complex business processes, apart from requiring a huge expenditure of time and effort. The success of this implementation depends on the co-operation of the employees in the organization. Before implementing ERP software, it has to be tested. Terry Low on dvalde.com explains the types of testing for ERP and the need for extensive ERP testing.

Types of testing:

With regard to testing ERP software, there are different approaches.

1. Performance Testing
Performance testing determines how the various components of a system will perform in a particular situation. It does not aim at finding defects in the system; rather, it aims at establishing the benchmark behaviour of an application.

2. Functional Testing
The main intention when companies adopt ERP software is to obtain solutions to problems. Functional tests on ERP software verify whether the software will provide the expected solution.

3. Integration Testing
Integration testing verifies whether all the units work together as a single application. Through integration testing, one can visualize how the software will fit into the ERP system. Under this type of test, testers deploy the software in real organizational settings. The main aim is to see how well the software fits into the organization and how the employees use it.

4. Automated Testing
This refers to the automation of the testing procedure. It promises faster and more reliable testing. However, in most cases, the results of manual tests are compared with the results of automated tests to get a clear picture.

Extensive ERP Testing, is it needed?

From the above description of the types of testing involved in ERP implementation, we can conclude that testing is needed to verify whether the software fits into the organization. It is only through testing that defects and bugs can be detected.
If the ERP software has been tested, it becomes easier for companies to find possible solutions to problems.

Early Performance Testing, Top Obstacle for Testers

A study conducted by Shunra, the industry-recognized authority in network virtualization and application performance engineering, revealed a stark difference between what application testers believe is important to achieve optimal application performance and what they are actually doing to achieve peak performance.

Of the 246 application testing professionals surveyed, 64 percent responded that “Adding performance testing very early in the software development lifecycle” was either “Most or Very Important” in developing a performance-minded culture. However, only 30 percent actually performance test “early and often throughout the development lifecycle”, and 60 percent stated that their company is not more than halfway to achieving a performance-minded culture.

“This data demonstrates the challenges performance engineers face when trying to implement a culture of performance. It is not an easy task given the complexity of mobile, Cloud and composite applications, and the institutional barriers that must be overcome. The fact that DevOps ranked second in importance for developing a performance-minded culture shows that it takes whole organizations, not individuals or single silos, to achieve the economic benefits of engineering in performance across the software development lifecycle,” stated Bill Varga, Shunra COO.
The study also revealed a lack of priority for measuring the financial impact of post-production failures. This ranked as the second least important factor in developing a performance-minded culture; only 28 percent ranked it “Most or Very Important.”

“This was very surprising given that 80 percent of the total cost of an application is a result of performance remediation efforts post deployment. Performance engineers are advocating for greater performance as they see how poor performance affects back-office costs. We expect executives to be involved in a top-down performance push as the deep front-office financial impact of poor performance on revenue, productivity, brand reputation, and customer retention becomes more apparent,” said Varga.

Other key findings from the study include:
• 57 percent allocate budget for designing applications for performance prior to deployment.
• Analytical skills rated as the most important trait of effective performance engineers.
• 5 percent still primarily rely on end users for performance testing.

Differences between Unit Testing and Integration Testing

Normally a software application is not developed by a single developer; rather, it is divided into modules that are allocated to different development teams. When a single developed module or unit of the application is tested, this is known as unit testing. Once all the modules are developed, they are integrated into one application. Testing of the entire application is known as integration testing.

Characteristics of Unit Testing
1. Repetition: Unit testing can be performed again and again, as many times as you want.
2. Consistent: Unit test results are very reliable, as you can reproduce the results.
3. Quick: Unit tests usually take less than half a second.
4. Single Issue: Unit tests concentrate on just one issue at a time.
5. No external implementation dependencies: Databases, networks, file systems, and other resources not in memory are not required.

Characteristics of Integration Testing
1. Slow, Difficult and Not Reproducible: Unlike unit tests, integration tests are time-consuming. Re-running a test is not only difficult but may also not give consistent results; sometimes even getting the test results is not easy.
2. External Source Requirements: Integration tests need access to external systems as well as the local machine.
3. Multiple Issues: Integration tests target multiple aspects in the same test. Database integrity, protocols, configurations and system logic are some examples of issues checked by a single integration test.
4. System-Dependent Values: Unlike unit tests, integration tests take up values such as date, time, machine name, etc. from the system.
5. Uncontrollable Elements: Integration tests involve elements such as threads and random number generators that are not under their control.

Some Major Differences

Both unit testing and integration testing are used to check the internal quality of the application, whereas acceptance testing verifies the external, or business, quality. Unit testing checks a single component of an application, whereas integration testing checks whether different units work together as a single application. Therefore, while writing a unit test, shorter code is used that targets just a single class; writing integration tests, on the other hand, is a lengthy process. Unit tests help the developer to put individual tasks of an application into different classes, each executing just one aspect of functionality, which helps in developing the design. Integration tests are not useful for design. Since unit tests cover only a small part of functionality, their results pinpoint the exact part of the application that needs to be fixed. Unit tests only target the proper functioning of individual units and are not helpful for verifying the smooth functioning of the entire application; therefore integration tests are required to check the entire application.
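To make the unit-testing characteristics above concrete, here is a minimal sketch using Python's standard unittest framework. The Cart class and its methods are invented for illustration; the point is that each test is quick, repeatable, focused on a single issue, and needs no database, network, or file system.

```python
import unittest

# A hypothetical class under test (invented for illustration): a tiny
# shopping cart whose logic lives entirely in memory.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartUnitTest(unittest.TestCase):
    """Unit tests: quick, repeatable, one issue each, no external resources."""

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("pen", 2.0)
        cart.add("book", 8.0)
        self.assertEqual(cart.total(), 10.0)  # reproducible on every run

    def test_negative_price_is_rejected(self):
        cart = Cart()
        with self.assertRaises(ValueError):
            cart.add("pen", -1.0)
```

These tests run with `python -m unittest`. An integration test of the same feature would instead exercise the cart against a real database or payment service, which is exactly why it is slower, harder to reproduce, and checks several issues at once.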
When an application does not pass the integration test, the unit test results for that specific part of functionality will help in fixing the issue. Thus both unit testing and integration testing are required: unit tests target the implementation of individual components, while integration tests target the working of the application as a whole.

What Should a Beginner in Testing Know?

Anirudh Jain
Senior Software Testing Engineer, Samin TekMindz

Anirudh Jain is a Senior Software Testing Engineer at Samin Tekmindz India. He has around five years of experience in the IT industry, with emphasis on Software Quality Assurance using Black Box Testing. His testing experience spans various O.S. environments, viz. Windows 98/2000/XP/Vista. He has extensive working experience in developing test cases, System Testing, Functional Testing, GUI Testing, Regression Testing, Integration Testing, and Security Testing; experience in manual testing of web-based and client-server applications; experience in manual testing of Open ERP applications; and experience in mobile application testing on various platforms (Java, Symbian, BlackBerry, iPhone, Windows Mobile, etc.). He has expertise in Black Box testing, exposure to test plan development, and in-depth knowledge of quality systems and processes (ISO 9001:2008, ISMS 27001, CMMi L3), and has participated in audits for the same.

Usage

I realized that most of us are so engulfed in our daily work routine that we do not get time to enhance our knowledge. The world of testing has moved ahead, and everyone needs to catch up. Hence I am sharing a series of facts. The idea is to start with basic knowledge about testing and then gradually move towards more advanced topics.

What is Testing?

Testing involves operating a system or application under controlled conditions and evaluating the results (e.g., if the user is in interface A of the application while using hardware B, and does C, then D should happen).
1. The controlled conditions should include both normal and abnormal conditions.
2. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't, or things don't happen when they should.
It is oriented towards detection.
3. Testing is nothing but finding the deviation between actual and expected behaviour.

What is Software Quality Assurance?

Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented towards prevention.

What is the difference between verification and validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed.

What is Software Quality?

Quality software is reasonably bug-free (no software can claim to be absolutely bug-free), delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term; it depends on who the customer is and their overall influence in the scheme of things. A wide-angle view of the customers of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/sales people, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of customer will have their own slant on quality: the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.
Qualities of a Good Test Engineer

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developer's point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

What's a Test Plan?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.
The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software, including version/release numbers
• Revision history of document, including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline: a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment: hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis: differences between the test and production systems and their impact on test validity
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation: justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution: tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues
• Open issues
• Appendix: glossary, acronyms, etc.
What's a Test Case?

A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly. A test case should contain particulars such as:
- test case identifier,
- test case name,
- objective,
- test conditions/setup,
- input data requirements,
- steps, and
- expected results.

Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

What if the Software is so Buggy it can't Really be Tested at All?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How Can it be Known When to Stop Testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Some of the common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

What if There isn't Enough Time for Thorough Testing?

Use risk analysis to determine where testing should be focused. Since it is rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
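The last consideration above can be made concrete: if the team attaches rough estimates of risk coverage and time cost to each candidate test, the tests can be ranked by their coverage-to-time ratio. A minimal sketch in Python follows; the test names and scores are invented for illustration, and in practice the scores are subjective team estimates.

```python
# Rank candidate tests by risk-coverage-to-time ratio, per the final
# risk-analysis consideration above.
def prioritize(tests):
    """tests: list of (name, risk_coverage, hours_required) tuples.
    Returns the names sorted so the best coverage-per-hour comes first."""
    return [name for name, coverage, hours in
            sorted(tests, key=lambda t: t[1] / t[2], reverse=True)]

# Hypothetical candidates with estimated coverage points and hours needed.
candidates = [
    ("login_smoke_test",      8, 1.0),   # high risk coverage, cheap to run
    ("full_regression_suite", 9, 40.0),  # thorough but very expensive
    ("report_layout_check",   2, 2.0),   # low risk, mostly cosmetic
]

print(prioritize(candidates))
# Prints: ['login_smoke_test', 'report_layout_check', 'full_regression_suite']
```

When time runs out partway through the schedule, a ranking like this ensures the tests already executed were the ones buying the most risk coverage per hour spent.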