Performance, responsiveness, and scalability are make-or-break qualities for software; nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre-Examination Process Automation System (PEPAS), a project implemented in Java technology: the challenges faced during the project's life cycle and the mitigation actions performed. It compares three Java technologies and shows, through statistical analysis of the application's response time, how improvements were made. The paper concludes with an analysis of the results.
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation demands ever more performance from its algorithms. New massively parallel architectures suit these algorithms well: they are known for offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by feeding back to the high-level models specific execution data from a profiling tool, enhanced with advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. Hence, this work keeps model and code coherent without giving up the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes quickly, with new servers, services, connections, and ports added often, sometimes daily, and with an uncontrolled influx of laptops, storage media, and wireless networks. With the growing number of vulnerabilities and exploits, coupled with the recurrent evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. This paper proposes a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter, unified NVA), derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is the discipline of software engineering concerned with the identification and handling of user and system requirements. Aspect-Oriented Requirements Engineering (AORE) extends existing requirements engineering approaches to cope with the tangling and scattering that result from crosscutting concerns. Crosscutting concerns are considered potential aspects and can lead to the phenomenon known as the "tyranny of the dominant decomposition". Requirements-level aspects are responsible for scattered and tangled descriptions of requirements in the requirements document. Validating requirements artefacts is an essential task in software development: it ensures that requirements are correct and valid in terms of completeness and consistency, thereby reducing development and maintenance cost and establishing an approximately correct estimate of the project's effort and completion time. In this paper, we present a framework to validate the aspectual requirements and the crosscutting relationships of concerns that result from the requirements engineering phase. The proposed framework comprises high-level and low-level validation applied to the software requirements specification (SRS). The high-level validation validates the concerns with stakeholders, whereas the low-level validation validates the aspectual requirements by requirements engineers and analysts using a checklist. The approach has been evaluated in an experimental study on two AORE approaches: the viewpoint-based AORE with ArCaDe, and the lexical-analysis-based Theme/Doc approach. The results of the study demonstrate that the proposed framework is an effective validation model for AORE artefacts.
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality in agile software development is extremely important and useful, not only for improving the knowledge and decision-making of project managers, product owners, and quality assurance leaders, but also for supporting communication between teams. In this context, quality needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or reject each iteration within the software life cycle. This article introduces a novel solution, the Product Quality Evaluation Method (PQEM), which can be used to evaluate a set of quality characteristics for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a single multidimensional value that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even though a single value is not the usual way of measuring quality, we believe it can help to easily understand the quality level of each iteration. An illustrative example of the PQEM method was carried out with two iterations of a web and mobile application in the healthcare domain. A single measure makes it possible to observe how the quality level evolves as the product evolves through its iterations.
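The abstract does not give PQEM's actual aggregation formula, so the following sketch only illustrates the general idea of collapsing per-characteristic quality coverage into a single iteration-level value. The characteristic names, the weights, and the weighted-average aggregation are all assumptions for illustration, not the method's definition.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QualityAggregator {
    // Returns a single aggregated quality value in [0, 1] as a weighted
    // average of per-characteristic coverage values (each also in [0, 1]).
    static double aggregate(Map<String, double[]> coverageAndWeight) {
        double weighted = 0.0, totalWeight = 0.0;
        for (double[] cw : coverageAndWeight.values()) {
            weighted += cw[0] * cw[1];   // coverage * weight
            totalWeight += cw[1];
        }
        return totalWeight == 0.0 ? 0.0 : weighted / totalWeight;
    }

    public static void main(String[] args) {
        // Hypothetical coverage per ISO/IEC 25010 characteristic: {coverage, weight}
        Map<String, double[]> iteration = new LinkedHashMap<>();
        iteration.put("Functional suitability", new double[]{0.90, 2.0});
        iteration.put("Reliability",            new double[]{0.80, 1.0});
        iteration.put("Usability",              new double[]{0.70, 1.0});
        System.out.printf("Aggregated quality: %.3f%n", aggregate(iteration));
    }
}
```

Comparing this one number across iterations is what makes the accept/reject decision visible at a glance, which is the point the abstract emphasizes.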
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA... (ijseajournal)
During the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some are half developed, some are mostly developed, some are only slightly developed, and some are not yet developed at all. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget, while rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activities that helps software project managers accept or reject changes during the development phase. This paper extends our previous work on a software requirement change effort estimation model prototype tool for the software development phase. The tool's significant achievements are demonstrated through extensive experimental validation using several case studies. The experimental analysis shows improvement in estimation accuracy over current change effort estimation models.
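The abstract does not state the estimation model itself, but the core intuition it describes (a change to a more fully developed class costs more, because completed code must be reworked) can be sketched as follows. The scaling rule and the numbers are purely hypothetical, not the paper's model.

```java
public class ChangeEffortSketch {
    // Illustrative only: scales a base effort by how far each affected
    // class has progressed (more completed code means more rework).
    // completionFractions[i] is in [0, 1]: 0 = not developed, 1 = fully developed.
    static double estimate(double baseEffortHours, double[] completionFractions) {
        double total = 0.0;
        for (double c : completionFractions) {
            total += baseEffortHours * (1.0 + c); // rework grows with completion
        }
        return total;
    }

    public static void main(String[] args) {
        // Three affected classes: not started, half developed, fully developed
        System.out.println(estimate(4.0, new double[]{0.0, 0.5, 1.0}));
    }
}
```

A manager comparing this kind of estimate against the cost of customer dissatisfaction is the accept/reject trade-off the abstract describes.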
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluating non-functional requirements such as performance. Since establishing the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed to evaluate them. Moreover, if software performance has been specified in performance models, it may be evaluated at the early stages of the development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed in the early stages of development and prior to implementation, is very effective.
We propose an approach for evaluating the performance of software systems based on the blackboard technique at the software architecture level. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity, and component diagrams. The UML model is then transformed into an executable model based on timed colored Petri nets (TCPN). Finally, by executing this model and analyzing its results, non-functional requirements including performance (such as response time) may be evaluated at the software architecture level.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORK (ijseajournal)
Most currently available students' portal frameworks are tightly coupled. Recent research by the authors of this paper discussed how to distribute the concepts of the traditional students' portal framework and produced a distributed, interoperable framework. This paper realizes that distributed interoperable students' portal framework by developing a prototype based on Service-Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility testing.
JELINSKI-MORANDA SOFTWARE RELIABILITY GROWTH MODEL: A BRIEF LITERATURE AND MO... (ijseajournal)
The reliability of software can be analyzed at various phases during its development. Software reliability growth models (SRGMs) assess, predict, and control software reliability based on data obtained from the testing phase. This paper gives a literature review of the first and well-known Jelinski and Moranda (J-M) (1972) SRGM. A modification to the Jelinski-Moranda model is also given; the Jelinski-Moranda and the Schick and Wolverton (S-W) (1978) SRGMs are two special cases of our newly suggested general SRGM. Our proposed general SRGM, along with our survey, will open doors for much more useful research in the field of reliability modeling.
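For reference, the hazard (failure-rate) functions of the two models named above can be written as follows, where N is the initial number of faults, phi a proportionality constant, and t_i the time elapsed since the (i-1)th failure. The paper's general SRGM is not given in this abstract; one common way to see both as special cases, shown last, is our illustration rather than necessarily the authors' formulation.

```latex
% Jelinski-Moranda (J-M): hazard rate is proportional to remaining faults
Z(t_i) = \phi \, [N - (i - 1)]

% Schick-Wolverton (S-W): additionally scales with the elapsed time t_i
Z(t_i) = \phi \, [N - (i - 1)] \, t_i

% One illustrative generalization subsuming both (a = 0 gives J-M, a = 1 gives S-W)
Z(t_i) = \phi \, [N - (i - 1)] \, t_i^{\,a}
```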
Program analysis is useful for debugging, testing, and maintaining software systems because it yields information about the structure and relationships of a program's modules. In general, program analysis is performed on either a control flow graph or a dependence graph. However, in the case of aspect-oriented programming (AOP), a control flow graph (CFG) or dependence graph (DG) alone is not enough to model the properties of aspect-oriented (AO) programs. Although AOP supports modular representation of crosscutting concerns, a suitable model is required to gather information on an AO program's structure so as to minimize maintenance effort. In this paper, the Aspect-Oriented Dependence Flow Graph (AODFG) is proposed as an intermediate representation for the structure of aspect-oriented programs. The AODFG is formed by merging the CFG and the DG, so that more information is gathered about the dependencies between join points, advice, aspects, and their associated constructs, together with the flow of control from one statement to another. We discuss the performance of AODFG by analyzing example AspectJ programs taken from the AspectJ Development Tools (AJDT).
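The merge of a CFG and a DG into one structure can be pictured as a single graph whose edges carry a kind label. The tiny sketch below is our own minimal data structure for that idea, with a hypothetical API and node names; it is not the paper's AODFG implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AodfgSketch {
    // One node set, two labelled edge kinds: control-flow edges from the
    // CFG and dependence edges from the DG, merged into a single graph.
    enum EdgeKind { CONTROL_FLOW, DEPENDENCE }

    final Map<String, Map<String, EdgeKind>> edges = new HashMap<>();

    void addEdge(String from, String to, EdgeKind kind) {
        edges.computeIfAbsent(from, k -> new LinkedHashMap<>()).put(to, kind);
    }

    // Successors reachable from a node via edges of the given kind
    List<String> successors(String node, EdgeKind kind) {
        List<String> out = new ArrayList<>();
        edges.getOrDefault(node, Map.of()).forEach((to, k) -> {
            if (k == kind) out.add(to);
        });
        return out;
    }

    public static void main(String[] args) {
        AodfgSketch g = new AodfgSketch();
        // A join point whose advice follows in control flow, while the
        // advice itself depends on a field set elsewhere in the aspect
        g.addEdge("callJoinPoint", "beforeAdvice", EdgeKind.CONTROL_FLOW);
        g.addEdge("beforeAdvice", "aspectField", EdgeKind.DEPENDENCE);
        System.out.println(g.successors("callJoinPoint", EdgeKind.CONTROL_FLOW));
    }
}
```

Keeping both edge kinds in one graph is what lets an analysis walk control flow and dependencies together, which a CFG or DG alone cannot do.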
Selenium - A Trending Automation Testing Tool (ijtsrd)
Selenium is an important testing tool for software quality assurance. The number of websites is increasing rapidly, and it has become essential to test them against various quality factors to make sure they meet the expected quality goals. Many companies spend heavily on testing tools, while Selenium is available completely free. The open-source tool is well known for its wide capabilities and reach, and it stands out from the crowd in this respect. Anyone can visit the Selenium website, download the latest version, and use it. It is not only open source but also highly modifiable: testers can make changes based on their needs and requirements. Manav Kundra, "Selenium - A Trending Automation Testing Tool", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31202.pdf, article page: https://www.ijtsrd.com/engineering/software-engineering/31202/selenium-%E2%80%93-a-trending-automation-testing-tool/manav-kundra
Improving Consistency of UML Diagrams and Its Implementation Using Reverse En... (journalBEEI)
Software development involves change and evolution that cannot be avoided, because development processes are largely incremental and iterative. In Model-Driven Engineering, inconsistency between a model and its implementation has a huge impact on the software development process in terms of added cost, time, and effort; the later inconsistencies are found, the more they cost the project. This paper therefore describes the development of a tool that improves consistency between Unified Modeling Language (UML) design models and their C# implementation using a reverse engineering approach. A list of consistency rules is defined to check vertical and horizontal consistency between structural (class diagram) and behavioral (use case diagram and sequence diagram) UML diagrams and the implemented C# source code. The inconsistencies found between the UML diagrams and the source code are presented as textual descriptions and visualized in a tree view structure.
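One illustrative vertical-consistency rule of the kind the abstract describes is "every class in the UML class diagram must appear as a class declaration in the source". The sketch below checks that rule with a naive regex over a source string; the rule choice, the regex, and the names are our assumptions, not the paper's rule set (a real tool would parse the C# syntax tree rather than use a regex).

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConsistencyCheck {
    // Collect class names declared in a source string (naive regex scan)
    static Set<String> classesInSource(String source) {
        Set<String> found = new TreeSet<>();
        Matcher m = Pattern.compile("\\bclass\\s+(\\w+)").matcher(source);
        while (m.find()) found.add(m.group(1));
        return found;
    }

    // Model classes with no matching declaration: an inconsistency report
    static Set<String> missingFromSource(Set<String> modelClasses, String source) {
        Set<String> missing = new TreeSet<>(modelClasses);
        missing.removeAll(classesInSource(source));
        return missing;
    }

    public static void main(String[] args) {
        String csharp = "public class Order { } public class Invoice { }";
        Set<String> model = Set.of("Order", "Invoice", "Customer");
        System.out.println(missingFromSource(model, csharp)); // Customer is missing
    }
}
```

Each rule in such a tool produces a report like this one, which can then be rendered as the textual description and tree view the abstract mentions.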
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDY (ijseajournal)
Software product line engineering is a paradigm for developing a family of software products from a repository of reusable assets rather than developing each individual product from scratch. In feature-oriented software product line engineering, the common and variable characteristics of the products are expressed in terms of features. Using the software product line engineering approach, software products are produced en masse through two engineering phases: (i) domain engineering and (ii) application engineering. In the domain engineering phase, reusable assets are developed with variation points at which variant features may be bound for each of the diverse products. In the application engineering phase, individual, customized products are developed from the reusable assets. Ideally, the reusable assets should be adaptable with little effort to support additional variations (features) that were not planned beforehand, in order to widen the usage context of the SPL as markets expand or as a new usage context of the software product line emerges. This paper presents exploratory research investigating the properties of features in code-assets implemented in an object-oriented programming style. In the exploration, we observed that program elements of disparate features formed unions as well as intersections that may affect the modifiability of the code-assets. The implication for practice is that an unstable product line with a tendency toward emerging variations should adopt techniques that limit the number of intersections between program elements of different features. The implication for research is that subsequent investigations using multiple case studies in different software domains and programming styles are needed to improve the understanding of these findings.
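The "intersections" the study observes can be made concrete as plain set intersections over the program elements (methods, fields) that implement each feature. The sketch below computes that overlap for two hypothetical features; the element names are invented for illustration, not taken from the study.

```java
import java.util.Set;
import java.util.TreeSet;

public class FeatureOverlap {
    // Program elements shared by two features; per the study, fewer such
    // intersections should make the code-assets easier to modify.
    static Set<String> intersection(Set<String> a, Set<String> b) {
        Set<String> out = new TreeSet<>(a);
        out.retainAll(b);
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical element sets (methods/fields) of two features
        Set<String> featurePay  = Set.of("Cart.total", "Pay.charge", "Log.write");
        Set<String> featureShip = Set.of("Cart.total", "Ship.track", "Log.write");
        System.out.println(intersection(featurePay, featureShip));
    }
}
```

A change to any element in the intersection touches both features at once, which is exactly why the study recommends limiting these overlaps in unstable product lines.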
Performance Comparison on Java Technologies: A Practical Approach (csandit)
Performance, responsiveness, and scalability are make-or-break qualities for software; nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during a project implemented in Java technologies: the challenges faced during the project's life cycle and the mitigation actions performed. It compares three Java technologies and shows, through statistical analysis of the application's response time, how improvements were made. The paper concludes with an analysis of the results.
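A statistical comparison of response times across technologies typically starts from summary statistics over timed samples. The sketch below computes a mean and a simple nearest-rank 95th percentile over hypothetical millisecond samples; the workload and numbers are assumptions, not the paper's measurements.

```java
import java.util.Arrays;

public class ResponseTimeStats {
    // Mean of response-time samples (milliseconds)
    static double mean(double[] samples) {
        return Arrays.stream(samples).average().orElse(0.0);
    }

    // Nearest-rank percentile: p in (0, 100]
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        // Hypothetical samples; note one slow outlier dominating the tail
        double[] ms = {120, 95, 130, 110, 480, 105, 115, 100, 125, 90};
        System.out.printf("mean=%.1f ms, p95=%.1f ms%n", mean(ms), percentile(ms, 95));
    }
}
```

Reporting a tail percentile alongside the mean matters in comparisons like this: a single slow outlier barely moves the mean but shows up clearly at p95.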
PERFORMANCE COMPARISON ON JAVA TECHNOLOGIES - A PRACTICAL APPROACH (cscpconf)
Performance, responsiveness, and scalability are make-or-break qualities for software; nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during a project implemented in Java technologies: the challenges faced during the project's life cycle and the mitigation actions performed. It compares three Java technologies and shows, through statistical analysis of the application's response time, how improvements were made. The paper concludes with an analysis of the results.
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA...ijseajournal
In software development phase software artifacts are not in consistent states such as: some of the class artifacts are fully developed some are half developed, some are major developed, some are minor developed and some are not developed yet. At this stage allowing too many software requirement changes may possibly delay in project delivery and increase development budget of the software. On the other hand rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activity that helps software project managers in accepting or rejecting changes during software development phase. This paper extends our previous works on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows improvement in the estimation accuracy over current change effort estimation models.
Performance Evaluation using Blackboard Technique in Software ArchitectureEditor IJCATR
Validation of software systems is very useful at the primary stages of their development cycle. Evaluation of functional
requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluation of non-functional
requirements (such as performance). Whereas establishing the non-functional requirements have significant effect on success of
software systems, therefore considerable necessities are needed for evaluation of non-functional requirements. Also, if the software
performance has been specified based on performance models, may be evaluated at the primary stages of software development cycle.
Therefore, modeling and evaluation of non-functional requirements in software architecture level, that are designed at the primary
stages of software systems development cycle and prior to implementation, will be very effective.
We propose an approach for evaluate the performance of software systems, based on black board technique in software architecture
level. In this approach, at first, software architecture using blackboard technique is described by UML use case, activity and
component diagrams. then UML model is transformed to an executable model based on timed colored petri nets(TCPN)
Consequently, upon execution of an executive model and analysis of its results, non-functional requirements including performance
(such as response time) may be evaluated in software architecture level.
REALIZING A LOOSELY-COUPLED STUDENTS PORTAL FRAMEWORKijseajournal
Most of the currently available students' portal frameworks are tightly-coupled frameworks. A recent
research done by the authors of this paper has discussed how to distribute the concepts of the traditional
students' portal framework and came out with a distributed interoperable framework. This paper realizes
the distributed interoperable students' portal framework by developing a prototype. This prototype is based
on Service Oriented Architecture (SOA). The prototype is tested using web service testing and compatibility
testing.
JELINSKI-MORANDA SOFTWARE RELIABILITY GROWTH MODEL: A BRIEF LITERATURE AND MO...ijseajournal
Analyzing the reliability of a software can be done at various phases during the development of
engineering software. Software reliability growth models (SRGMs) assess, predict, and controlthe software
reliability based on data obtained from testing phase.This paper gives a literaturereview of the first and
wellknownJelinski and Moranda(J-M) (1972)SRGM.Also a modification to Jelinski and Morandamodel is
given, Jelinski and Moranda and Schick and Wolverton (S-W) (1978)SRGMsare two special cases of our
new suggested general SRGM. Our proposed general SRGMalong with our Survey will open doors for
much more useful researches to be done in the field of reliability modeling.
Program analysis is useful for debugging, testing and maintenance of software systems due to information
about the structure and relationship of the program’s modules . In general, program analysis is performed
either based on control flow graph or dependence graph. However, in the case of aspect-oriented
programming (AOP), control flow graph (CFG) or dependence graph (DG) are not enough to model the
properties of Aspect-oriented (AO) programs. With respect to AO programs, although AOP is good for
modular representation and crosscutting concern, suitable model for program analysis is required to
gather information on its structure for the purpose of minimizing maintenance effort. In this paper Aspect
Oriented Dependence Flow Graph (AODFG) as an intermediate representation model is proposed to
represent the structure of aspect-oriented programs. AODFG is formed by merging the CFG and DG, thus
more information about dependencies between the join points, advice, aspects and their associated
construct with the flow of control from one statement to another are gathered. We discussthe performance
of AODFG by analysing some examples of AspectJ program taken from AspectJ Development Tools
(AJDT).
Selenium - A Trending Automation Testing Toolijtsrd
Selenium is an important testing tool for software quality assurance. In recent days number of websites are increasing rapidly and it becomes essential to test the websites against various quality factors to make sure it meets the expected quality goals. Several companies are spending a lot of bucks for the testing tool while Selenium is available completely free for the performance test. The open source tool is well known for its unlimited capabilities and unlimited reach. Selenium stands out from the crowd in this aspect. Anyone could visit the Selenium website and download the latest version and use it. It is not only an open source but also highly modifiable. Testers could make changes based upon their needs and requirements. Manav Kundra "Selenium - A Trending Automation Testing Tool" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-4 , June 2020, URL: https://www.ijtsrd.com/papers/ijtsrd31202.pdf Paper Url :https://www.ijtsrd.com/engineering/software-engineering/31202/selenium-%E2%80%93-a-trending-automation-testing-tool/manav-kundra
Improving Consistency of UML Diagrams and Its Implementation Using Reverse En...journalBEEI
Software development deals with various changes and evolution that cannot be avoided due to the development processes which are vastly incremental and iterative. In Model Driven Engineering, inconsistency between model and its implementation has huge impact on the software development process in terms of added cost, time and effort. The later the inconsistencies are found, it could add more cost to the software project. Thus, this paper aims to describe the development of a tool that could improve the consistency between Unified Modeling Language (UML) design models and its C# implementation using reverse engineering approach. A list of consistency rules is defined to check vertical and horizontal consistencies between structural (class diagram) and behavioral (use case diagram and sequence diagram) UML diagrams against the implemented C# source code. The inconsistencies found between UML diagrams and source code are presented in a textual description and visualized in a tree view structure.
PROPERTIES OF A FEATURE IN CODE-ASSETS: AN EXPLORATORY STUDYijseajournal
Software product line engineering is a paradigm for developing a family of software products from a
repository of reusable assets rather than developing each individual product from scratch. In
feature-oriented software product line engineering, the common and variable characteristics of the products
are expressed in terms of features. Using the software product line engineering approach, software products
are produced en masse by means of two engineering phases: (i) Domain Engineering and (ii) Application
Engineering. In the domain engineering phase, reusable assets are developed with variation points where
variant features may be bound for each of the diverse products. In the application engineering phase,
individual and customized products are developed from the reusable assets. Ideally, the reusable assets
should be adaptable with little effort to support additional variations (features) that were not planned
beforehand, in order to widen the usage context of the SPL as markets expand or when a new
usage context of the software product line emerges. This paper presents an exploratory study of
the properties of features in code-assets implemented in an object-oriented programming style. In the
exploration, we observed that program elements of disparate features formed unions as well as
intersections that may affect the modifiability of the code-assets. The implication of this research for practice is
that an unstable product line with a tendency toward emerging variations should adopt techniques that
limit the number of intersections between program elements of different features. The implication
for research is that subsequent investigations should use multiple case studies
in different software domains and programming styles to improve the understanding of these findings.
Performance comparison on java technologies a practical approachcsandit
Performance (responsiveness and scalability) is a make-or-break quality for software. Nearly
everyone runs into performance problems at one time or another. This paper discusses
performance issues faced during a project implemented in Java technologies, the
challenges faced during the life cycle of the project, and the mitigation actions performed. It
compares three Java technologies and shows, through statistical analysis of the application's
response time, how improvements were made. The paper concludes with an analysis of the results.
PERFORMANCE COMPARISON ON JAVA TECHNOLOGIES - A PRACTICAL APPROACHcscpconf
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in software testing. Since then, software testing has gone through evolutions that have
driven standards and tools, accompanying the growing complexity and variety of software
deployment platforms. Migration to the cloud has brought benefits such as scalability, agility, and better
return on investment, but cloud computing also demands deeper involvement of software testing to ensure
that services work as expected. In addition to the testing of cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge surveyed here, we trace the evolution of
software testing, characterizing its fundamental points, and compose a synthesis of the body of
knowledge in software testing as expanded by the cloud computing paradigm.
Automatic model transformation on multi-platform system development with mode...CSITiaesprime
Several difficulties commonly arise during the software development process, among them the lengthy technical process of developing a system, the limited number and technical capabilities of human resources, the possibility of bugs and errors during the testing and implementation phases, dynamic and frequently changing user requirements, and the need for a system that supports multiple platforms. Rapid application development (RAD) is a software development life cycle (SDLC) that emphasizes producing a prototype in a short amount of time (30-90 days). This study found that incorporating a model-driven architecture (MDA) approach into the RAD method can accelerate the model design and prototyping stages, with the goal of accelerating the SDLC process as a whole. It took roughly five weeks to construct the system by applying all of the RAD stages; this time frame does not include iteration and the cutover procedure. During the prototype test, there were no errors in the create, read, update, and delete (CRUD) procedures. This demonstrated that automatic transformation in MDA can shorten the RAD phases for designing the model and developing an early prototype, reduce code errors in standard processes like CRUD, and produce a system that supports multiple platforms.
Mvc architecture driven design and agile implementation of a web based softwa...ijseajournal
This paper reports the design and implementation of a web based software system for storing and managing information related to time management and productivity of employees working on a project. The system has been designed and implemented with best principles from model view controller and agile development. Such a system has practical use for any organization in terms of ease of use, efficiency, and cost savings. The manuscript describes the design of the system as well as its database and user interface. Detailed snapshots of the working system are provided too.
EMBEDDING PERFORMANCE TESTING IN AGILE SOFTWARE MODELijseajournal
In the last couple of decades, the software development process has evolved drastically, from Big
Bang to Waterfall to Agile. The primary driver for this evolution has been the "speed of
delivery" of the software product, which has accelerated significantly from months to less than weeks or
days. For IT (Information Technology) organizations to be successful, they inevitably need a strong
technology presence to roll out new software and features as quickly as possible to their customer base.
The current user generation tends to use technology to its maximum potential and is always striving to keep
up with new trends. The challenge for organizations is to have their speed-of-delivery
strategy ready, adapting to technology modernization initiatives like CI/CD (Continuous Integration and
Continuous Deployment), Agile, DevOps, and Cloud, so that there is negligible customer friction and no
risk to their market share. The aim of this paper is to compare performance testing at every stage of
the agile model with traditional end-of-cycle testing. The results of the corresponding testing phases are presented
in this paper.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally testing in DevOps. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON
International Journal of Software Engineering & Applications (IJSEA), Vol.4, No.4, July 2013
DOI: 10.5121/ijsea.2013.4407
Priyanka Dutta, Vasudha Gupta, Sunit Rana
Centre for Development of Advanced Computing, C-56/1, Anusandhan Bhawan, Sector-62, Noida
priyankadutta@cdac.in, vasudhagupta@cdac.in, sunitrana@cdac.in
ABSTRACT
Performance (responsiveness and scalability) is a make-or-break quality for software. Nearly everyone runs
into performance problems at one time or another. This paper discusses performance issues faced
during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, the
challenges faced during the life cycle of the project, and the mitigation actions performed. It compares
three Java technologies and shows, through statistical analysis of the application's response time, how
improvements were made. The paper concludes with an analysis of the results.
KEYWORDS
Decision Analysis and Resolution (DAR), Causal Analysis and Resolution (CAR), Groovy Grails, Adobe
Flex, JMeter
1. INTRODUCTION
Today's software development organizations are being asked to do more with fewer resources. In
many cases, this means upgrading legacy applications to new web-based applications with quick
response times and high throughput. Nearly everyone runs into performance problems at one time or
another. Focusing on the architecture provides more, and potentially greater, options for
performance tuning and improvement [1].
CDAC is involved in the design, development, maintenance and hosting of the Computer Based
Technical System for Online Acceptance of Applications and Examination Management System.
The project was developed to help the client perform their tasks easily and effectively in a
computerized environment, bringing transparency to the system. The system performs tasks such as:
i. Receiving online application forms - The function of this module is to receive
applications online. Fees paid through Demand Draft (DD) or National Electronic Funds
Transfer (NEFT) transactions are also processed through it. A facility for uploading photos
and signatures of the candidates is also provided.
ii. Generation of admit cards - The purpose of this module is to generate the admit cards
for application forms received online and offline. The process of admit card generation
was followed by centre allocation and roll number generation for each applicant.
iii. MIS reports for conducting exams - Various reports are generated by the system and
the data is saved securely on the CDAC server [2].
In the Pre Examination Process Automation System (PEPAS) project we also faced such
performance issues. A typical examination system involves a wide range of functionalities, deals
with the public at large, and needs an efficient communication and feedback mechanism to the utmost
satisfaction of all users. An error-free, speedy interface is vital for successful functioning of
the system [2]. There were many challenges while building the application. This paper compares
various Java technologies which could be adopted for an efficient application and for
performance improvement. Section 2 describes how the technology was chosen. Section 3 deals
with the various technologies available in Java, and section 4 explains the performance comparison of the
technologies used, with conclusions.
2. TECHNOLOGY SELECTION
There were many options available for technology selection, keeping the requirements in view.
Decision Analysis and Resolution (DAR), one of the project management techniques, was used to
decide on the appropriate technology for the project. This technique is intended to ensure that
critical decisions are made in a scientific and systematic way. The DAR process is a formal method
of evaluating key program decisions and proposed solutions to these issues [3]. Table 1 shows the
DAR sheet through which the technology to develop the system was chosen.
Table 1. DAR Table made for PEPAS

Issue 1: Presentation tier technology
- Alternatives: JSP, Java Applet, Adobe Flex
- Evaluation method: brainstorming; comparison/performance
- Evaluation criteria: ease of building the interface; richness of user experience; ease of offloading logic to the client side without explicit installation of software at the client side; browser independence
- Remarks: Adobe Flex provides interface building with easy-to-understand tutorials, compared with JSP; ease of interaction with the J2EE middle tier (Adobe Flex), which is not possible in JSP
- Result: Adobe Flex

Issue 1 (reporting tool)
- Alternatives: iReport, Crystal Report
- Evaluation method: brainstorming; comparison
- Evaluation criteria: easy to build reports; rich features; open source; ease of interaction with the J2EE middle tier
- Result: iReport

Issue 2: Technology for middle tier
- Alternatives: EJB 2, Spring, Grails
- Evaluation method: brainstorming; comparison
- Evaluation criteria: functionality; interoperability; advanced technology use
- Remarks: EJB 2 - less functionality than Grails; Spring - interoperability is complex; Grails - advanced technology, Groovy GFS (Grails Flex Scaffolding) with integration of Adobe Flex
- Result: Grails
After careful evaluation of the technologies, the development bed was chosen: Grails and Adobe
Flex 3 were the technologies selected as the outcome of the DAR.
Groovy is a dynamic language for the JVM that offers a flexible Java-like syntax that any Java
developer can learn in a matter of hours. Grails is an advanced and innovative web-application
framework based on Groovy that enables a developer to establish fast development cycles
through agile methodologies and to deliver a quality product in a reduced amount of time [4].
Grails Flex Scaffold (GFS) is a plug-in that handles Flex code generation through a scaffolding
methodology [5]. For an attractive user interface, Adobe Flex, a powerful open-source
application framework, allowed us to easily build the GUI of the application [6].
The base model was developed and accepted by the client. After acceptance, the client placed
different work orders. For every work order, the base model had to be customized as
per the requirements. Timelines were set for each activity, including requirements
gathering, development and testing. A work plan was prepared at the beginning of the project,
in which constraints were highlighted. To control the project, actual performance must be
compared with planned performance, and corrective action taken when there are significant
differences. By monitoring and measuring progress regularly, identifying variances from the plan,
and taking corrective action when required, project control ensures that project objectives are met.
3. CAR (CAUSAL ANALYSIS AND RESOLUTION)
Causal Analysis and Resolution (CAR) was carried out to identify how to increase the effectiveness
of code review and make it a capable process. For the CAR we used the 5 Whys technique. The 5
Whys is a question-asking method used to explore the cause/effect relationships underlying a
particular problem, with the goal of determining the root cause of a defect or problem.
Problem: Code review effectiveness (CRE) of 39% is not manageable.
Why? Code review (CR) was not planned effectively.
Why? No formal CR was done; in some places it was only a code walkthrough.
Why? There was no checklist for review, and not all processes were checked thoroughly.
Why? The review was done by peers from another team.
Conclusion: Code review was informal; reviews were done without checklists; review by peers from another team was not effective.
Action: Cover all processes in the code review; use a checklist to review the code; adopt pair programming.
Table 2. CAR Table
4. THE CASE ANALYSIS
Performance (responsiveness and scalability) is a make-or-break quality for software. Two work
orders were completed on time, but the following problems persisted:
1. System performance was slow
2. Stale object exceptions were thrown
3. The garbage collector overhead limit was exceeded
The problems were discussed and analyzed in a brainstorming session, where extensive code
reviews were proposed. Using the CAR (Causal Analysis and Resolution) project management
technique, drawbacks in the existing code reviews were identified and corrective actions were
planned. Revisiting and re-evaluating the design and architecture of the system was also
discussed. The outcomes of the CAR included changes such as:
a) Change in the mail sending process - Earlier, mail was sent synchronously after saving the
registration data. Asynchronous mechanisms for sending mail, such as installing and using the
Grails Asynchronous Mail plug-in, bring down the response time required for registration.
Another advantage of the plug-in is that sending mail may be scheduled or retried after a
certain amount of time; this feature is useful when the mail server is heavily loaded or down.
Implementing the plug-in requires minimal code changes in the current application.
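The pattern behind this change can be sketched in plain Java. This is not the plug-in's actual API: the MailSender interface, the retry count, and the scheduling below are illustrative assumptions, meant only to show why the registration thread no longer waits on the mail server.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: registration returns immediately while mail is sent
// (and retried on failure) on a background thread, mirroring the
// asynchronous-mail idea described above.
public class AsyncMailSketch {
    private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

    // Hypothetical mail-sending hook; a real system would talk to the mail server here.
    public interface MailSender { void send(String to) throws Exception; }

    public void sendAsync(MailSender sender, String to, int maxRetries, long retryDelayMs) {
        AtomicInteger attempts = new AtomicInteger();
        Runnable task = new Runnable() {
            public void run() {
                try {
                    sender.send(to);                 // try to deliver once
                } catch (Exception e) {
                    if (attempts.incrementAndGet() < maxRetries) {
                        // mail server busy or down: re-schedule instead of blocking the user
                        pool.schedule(this, retryDelayMs, TimeUnit.MILLISECONDS);
                    }
                }
            }
        };
        pool.execute(task);                          // the registration thread returns at once
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The key point is that saving the registration data and delivering the mail are decoupled: a slow or unreachable mail server delays only the background task, not the user's response.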
b) JVM options to tune the JVM heap size - Java has a couple of options that help control how
much memory it uses:
a. -Xmx sets the maximum memory heap size
b. -Xms sets the minimum memory heap size
For a server with a small amount of memory, we recommend that -Xms be kept as small as
possible, e.g. -Xms16m. Some set it higher, but that can lead to issues: for example, the command
that restarts Tomcat runs a Java process, and that process picks up the same -Xms setting as the
actual Tomcat process, so you will effectively be using two times -Xms during a restart. If
you set -Xms too high, you may run out of memory.
When choosing the -Xmx setting you should consider a few things. -Xmx has to be large enough
to run your application; if it is set too low you may get Java OutOfMemory exceptions
(even when there is sufficient spare memory on the server). If you have spare memory, then
increasing the -Xmx setting is often a good idea. Just note that the more you allocate to Java, the
less will be available to your database server or other applications, and the less Linux will have to
cache your disk reads.
Note that Java can end up using (a lot) more than the -Xmx value worth of memory, since it
allocates extra, separate memory for the Java classes it uses. So the more classes are involved in
your application, the more memory the Java process will require.
The PermGen space is used for things that do not change (or change rarely), e.g. Java classes.
Large, complex applications therefore often need a lot of PermGen space. Similarly, if you are
doing frequent war/ear/jar deployments to running servers like Tomcat or JBoss, you may need
to issue a server restart after a few deployments, or increase your PermGen space. To increase the
PermGen space use something like -XX:MaxPermSize=128m; the default is 64 MB. (Note that
-Xmx is separate from the PermGen space, so increasing -Xmx will not help with PermGen
errors.)
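A quick way to sanity-check what a given -Xms/-Xmx pair actually gives a process is to ask the running JVM for its own heap limits. The class below is our illustrative sketch (the name and printed format are not from the paper); the values it reports depend on the flags the JVM was started with.

```java
// Prints the heap limits the JVM is running with. Launch with e.g.
//   java -Xms16m -Xmx256m HeapReport
// to observe the effect of the flags discussed above.
public class HeapReport {
    static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();   // roughly the -Xmx ceiling
    }
    static long currentHeapBytes() {
        return Runtime.getRuntime().totalMemory(); // heap currently reserved (at least -Xms)
    }
    public static void main(String[] args) {
        System.out.printf("max heap:     %d MB%n", maxHeapBytes() / (1024 * 1024));
        System.out.printf("current heap: %d MB%n", currentHeapBytes() / (1024 * 1024));
        System.out.printf("free in heap: %d MB%n",
                Runtime.getRuntime().freeMemory() / (1024 * 1024));
    }
}
```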
Since our technology selection had been made using DAR, we revisited our DAR and formed two
teams. One team worked with Java/servlet technology and the other worked with the Spring
Framework. For team one, with Java/servlet technology, an Oracle database server was chosen
in place of MySQL. A comparative study of query execution time was carried out, and we found
that the query executed on MySQL ran 30 times faster than the same query executed
on Oracle. So we finally decided that, for our application and for the kind of data we
record, the MySQL database gives better performance.
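The query comparison above amounts to wall-clock timing of the same operation on each back end. A minimal harness for that kind of measurement might look like this; the Callable is a stand-in for the real JDBC query (which the paper does not show), and QueryTimer is our illustrative name.

```java
import java.util.concurrent.Callable;

// Simple wall-clock timer used to compare how long the same piece of work
// (e.g. the same SQL query run against MySQL vs. Oracle) takes.
public class QueryTimer {
    public static <T> long timeMillis(Callable<T> work) throws Exception {
        long start = System.nanoTime();
        work.call();                                   // run the operation once
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in workload; a real comparison would call statement.executeQuery(...)
        long elapsed = timeMillis(() -> { Thread.sleep(50); return null; });
        System.out.println("elapsed: " + elapsed + " ms");
    }
}
```

In practice each query would be run many times and averaged, since single-shot timings are noisy; the 30x figure quoted above is only as trustworthy as the repetition behind it.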
5. PERFORMANCE COMPARISON
One of the most critical aspects of the quality of a software system is its performance, and hence
we set our goal to improve the performance of the system. Performance tests were run with
JMeter. This was done iteratively after tuning the application, and each time we obtained better
results from the performance tests. JMeter is a Java tool used for load testing client/server
applications: it tests system performance by automatically simulating a number of users. It is also
important to select the right kind of parameter, based on your application, for analyzing the test
results. Since our application receives a large number of hits, we chose response time as the
evaluation parameter. Response time is the elapsed time from the moment a given request is sent
to the server until the moment the last bit of information has returned to the client.
We carried out the performance test on our base application developed with Grails and Flex,
giving inputs of 1 sec, 5 sec, 10 sec and 60 sec with sample sets ranging from 20 to 700 users.
Table 3 shows the first JMeter report of performance testing on the base version of Grails & Flex.
It can be seen that the average response time for the 10-second sample set with 20 users comes to
333 milliseconds with a standard deviation of 51.3, which is very high, while the throughput is
2.03 requests/sec, which is very low. Looking at these statistics, it was evident that the system
was unstable.
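The figures quoted from the JMeter reports (average response time, standard deviation, throughput) are straightforward to reproduce from raw sample times. The sketch below shows that arithmetic; the sample values in main are made up for illustration and are not taken from the tables.

```java
import java.util.Arrays;

// Computes the three statistics quoted from the JMeter reports:
// average response time, standard deviation, and throughput (requests/sec).
public class ResponseStats {
    static double average(long[] millis) {
        return Arrays.stream(millis).average().orElse(0);
    }
    static double stdDev(long[] millis) {
        double mean = average(millis);
        double sumSq = Arrays.stream(millis)
                .mapToDouble(m -> (m - mean) * (m - mean)).sum();
        return Math.sqrt(sumSq / millis.length);   // population form of the std. dev.
    }
    static double throughputPerSec(int samples, long testDurationMillis) {
        return samples * 1000.0 / testDurationMillis;
    }
    public static void main(String[] args) {
        long[] samples = {294, 330, 333, 530};     // illustrative values only
        System.out.printf("avg %.1f ms, stddev %.1f, %.2f req/s%n",
                average(samples), stdDev(samples),
                throughputPerSec(samples.length, 2000));
    }
}
```

A large standard deviation relative to the mean, as in the first Grails report, is exactly what signals an unstable system: individual requests vary wildly around the average.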
Samples | Average (msec) | Min (msec) | Max (msec) | Std. Dev. | Error % | Throughput (/sec) | Avg. Bytes
1 sec
20 | 2722 | 1710 | 3472 | 499.36 | 0 | 4.88 | 327
100 | 31214 | 297 | 52071 | 19425.21 | 0.24 | 1.89 | 1357.64
5 sec
20 | 315 | 260 | 505 | 64.47 | 0 | 3.97 | 327
50 | 3629 | 699 | 5387 | 1254.05 | 0.12 | 5.76 | 920.4
100 | 6098 | 638 | 11230 | 2818.44 | 0.43 | 7.95 | 2453.35
10 sec
20 | 333 | 294 | 530 | 51.3 | 0 | 2.03 | 327
100 | 6177 | 320 | 9990 | 2431.46 | 0.04 | 5.98 | 524.8
60 sec
300 | 494 | 267 | 3820 | 615 | 0 | 4.89 | 327
400 | 6248 | 333 | 15355 | 3947.63 | 0.1 | 6.03 | 833.86
500 | 37292 | 789 | 113203 | 30205.56 | 0.46 | 3.41 | 2195.23
700 | 68532 | 337 | 170815 | 43343.26 | 0.68 | 3.39 | 3088.47

Table 3. 1st JMeter report, with Grails
Performance tests were also carried out on the Java/servlet application, which showed that it was
nowhere near the Grails version in terms of application performance. For 10 sec with 20 users,
the average response time is 23640 msec, far more than 333 msec (the average response time
for the 10-second sample with 20 users in the case of Grails and Flex).
Samples | Average (msec) | Min (msec) | Max (msec) | Std. Dev. | Error % | Throughput (/sec) | Avg. Bytes
1 sec
1 | 2870 | 2870 | 2870 | 0 | 0 | 0.348432 | 51283
2 | 4296 | 3565 | 5027 | 731 | 0 | 0.361598 | 51283
5 | 8745 | 4510 | 12829 | 3063.947 | 0 | 0.36673 | 51283
10 | 15295 | 3597 | 26190 | 7458.177 | 0 | 0.369072 | 51283
20 | 27417 | 4175 | 50226 | 14277.99 | 0 | 0.390678 | 51283
50 | 65343 | 4730 | 125804 | 36155.33 | 0 | 0.394185 | 51283
100 | 96242 | 4240 | 129667 | 39525.4 | 0.49 | 0.764 | 26154.33
5 sec
1 | 2795 | 2795 | 2795 | 0 | 0 | 0.357782 | 51283
2 | 3091 | 3016 | 3167 | 75.5 | 0 | 0.352734 | 51283
5 | 6767 | 4497 | 8903 | 1697.582 | 0 | 0.387447 | 51283
10 | 12644 | 3863 | 20973 | 5624.403 | 0 | 0.39248 | 51283
50 | 62592 | 4064 | 119374 | 34217.76 | 0.02 | 0.401858 | 50257.34
100 | 94558 | 4128 | 128009 | 39101.14 | 0.48 | 0.753551 | 26667.16
10 sec
1 | 2734 | 2734 | 2734 | 0 | 0 | 0.365764 | 51283
2 | 2885 | 2787 | 2984 | 98.5 | 0 | 0.25047 | 51283
5 | 4424 | 3436 | 5284 | 692.505 | 0 | 0.384734 | 51283
10 | 10351 | 3982 | 17581 | 4326.702 | 0 | 0.393902 | 51283
20 | 23640 | 4082 | 44047 | 12011.69 | 0 | 0.386473 | 51283
50 | 61088 | 3460 | 117988 | 33411.58 | 0.02 | 0.397599 | 50257.34
100 | 59016 | 4325 | 100826 | 19939.48 | 0.57 | 0.91943 | 22051.69

Table 4. JMeter report with Java/Servlet
The Spring Framework provides integration with Hibernate in terms of resource management,
DAO implementation support, and transaction strategies. When we ran the performance test on
this version, we found the following statistics:
Samples | Average (msec) | Min (msec) | Max (msec) | Std. Dev. | Error % | Throughput (/sec) | Avg. Bytes
1 sec
1 | 189 | 189 | 189 | 0 | 0 | 5.291005 | 102
2 | 191 | 191 | 191 | 0 | 0 | 2.828854 | 102
5 | 193 | 190 | 196 | 2.227106 | 0 | 4.911591 | 102
10 | 878 | 200 | 1243 | 273.9256 | 0 | 5.099439 | 102
20 | 1841 | 620 | 2997 | 753.7219 | 0 | 5.042864 | 102
50 | 5490 | 495 | 9441 | 2512.129 | 0 | 4.949515 | 102
100 | 11706 | 497 | 22748 | 6436.252 | 0 | 4.332568 | 102
5 sec
1 | 207 | 207 | 207 | 0 | 0 | 4.830918 | 102
2 | 218 | 217 | 219 | 1 | 0 | 0.732064 | 102
5 | 208 | 206 | 214 | 2.939388 | 0 | 1.188213 | 102
10 | 207 | 200 | 216 | 4.019950 | 0 | 2.118195 | 102
20 | 215 | 208 | 230 | 5.064336 | 0 | 4.031445 | 102
50 | 2811 | 280 | 6004 | 1486.374 | 0 | 4.936321 | 102
100 | 8869 | 311 | 17970 | 5450.867 | 0 | 4.577078 | 102
10 sec
1 | 219 | 219 | 219 | 0 | 0 | 4.56621 | 102
2 | 233 | 233 | 233 | 0 | 0 | 0.381025 | 102
5 | 227 | 224 | 229 | 1.854724 | 0 | 0.606796 | 102
10 | 225 | 220 | 230 | 2.98161 | 0 | 1.084599 | 102
20 | 223 | 219 | 235 | 4.130375 | 0 | 2.054232 | 102
50 | 888 | 331 | 2299 | 506.5207 | 0 | 4.483903 | 102
100 | 7162 | 373 | 17162 | 656.5207 | 0 | 4.445432 | 102

Table 5. JMeter report with Spring Framework
In the meantime, while exploring and testing other technologies, we also optimized the Grails
version, taking the following major corrective measures:
a) Removed bidirectional mappings, which helped us eliminate the unnecessary tables created
by the mapping to store the values.
b) Explicitly clearing the Hibernate Session (1st-level) cache increases performance. The
Hibernate 1st-level cache is a transaction-level cache of persistent data, and we observed
that clearing it improved the write performance of the system. Initial load testing showed
roughly a 5x improvement in insertion time: 3000 inserts without clearing the 1st-level
cache took around 240 sec, while the same operation took 45 sec after the cache was
cleared periodically.
c) By default, Hibernate uses an optimistic locking strategy based on versioning. The system
experiences a high number of concurrent reads and writes, and in such situations optimistic
locking fails, throwing a stale-object exception. To avoid this error, the default behaviour
of Hibernate was changed from optimistic to pessimistic locking. The change required
minimal code modification; it may, however, carry some performance penalty, as reads now
take row-level locks.
d) For implementing row-level locking, the storage engine of MySQL was changed from
MyISAM (which is tuned for high performance but is not ACID compliant) to InnoDB
(which supports transactions and is fully ACID compliant).
e) Worked on query optimization and removal of unnecessary queries that were putting
load on MySQL.
f) Initially the images were stored in the database along with their path, and also on the
application server, which made the application bulky and caused problems while retrieving
the images for generating the application form or admit card. The images are now stored
in a separate folder outside webapps, which helped improve the performance of the
application.
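The cache-clearing measure in (b) follows the standard Hibernate batch-insert idiom of flushing and clearing the Session every N entities. The sketch below simulates that pattern with a plain list standing in for the Session's 1st-level cache, so it runs without Hibernate; the batch size of 50 is an assumed value, not taken from the PEPAS configuration.

```java
// Illustrative sketch, not the project's actual code: the Hibernate
// batch-insert idiom (session.save(...); every N inserts do
// session.flush(); session.clear();), simulated with a List standing
// in for the Session's 1st-level cache so the pattern is runnable.
import java.util.ArrayList;
import java.util.List;

public class BatchInsertSketch {

    // Returns the largest number of entities ever held in the
    // session-level cache while inserting 'total' records.
    static int maxCacheSize(int total, int batchSize) {
        List<Object> sessionCache = new ArrayList<>(); // stand-in for the 1st-level cache
        int max = 0;
        for (int i = 1; i <= total; i++) {
            sessionCache.add(new Object());            // stands in for session.save(entity)
            max = Math.max(max, sessionCache.size());
            if (i % batchSize == 0) {
                // real code: session.flush(); session.clear();
                sessionCache.clear();                  // keeps cache memory bounded
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // 3000 inserts (as in the load test above) with an assumed batch of 50:
        // the cache never holds more than one batch of entities.
        System.out.println(maxCacheSize(3000, 50)); // prints 50
    }
}
```

Without the periodic clear, all 3000 entities would accumulate in the Session for the life of the transaction, which is consistent with the slowdown observed in the load test above.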
After making all the above changes, the application was tested with Jmeter; the performance
results are depicted below:
Samples  Average (msec)  Min (msec)  Max (msec)  Std. Dev.  Error %  Throughput (/sec)  Avg. Bytes
1 sec
1 47 47 47 0 0 21.2766 683
2 47 47 48 0.5 0 3.656307 683
5 48 47 52 1.939072 0 5.820722 683
10 49 46 54 3.257299 0 10.41667 683
20 50 46 59 4.093898 0 19.32367 683
50 791 146 1199 289.9754 0 23.57379 683
100 1973 104 3858 1051.758 0 23.94063 683
5 sec
1 47 47 47 0 0 21.2766 683
2 48 47 50 1.5 0 0.784314 683
5 49 46 54 3.03315 0 1.230921 683
10 55 52 63 2.712932 0 2.194908 683
20 57 53 75 5.634714 0 4.148517 683
50 48 46 62 3.589763 0 10.006 683.38
100 51 46 62 4.430564 0 19.73165 683.19
10 sec
1 48 48 48 0 0 20.83333 683
2 56 56 57 0.5 0 0.395491 683
5 54 51 60 3.544009 0 0.620887 683
10 58 52 79 8.971622 0 1.101443 683
20 56 50 76 7.031358 0 2.093145 683
50 49 46 68 4.422669 0 5.071508 683
100 49 46 77 4.272985 0 10.0311 683.19
60 sec
300 53 47 106 6.930367 0 4.990435 683.19
400 51 47 114 5.434344 0 6.640327 683.19
500 50 46 96 5.398905 0 8.287201 683.19
700 50 45 178 8.207943 0 11.55726 683.19
Table 6. After optimization Jmeter report of Groovy Grails Version
The results showed that, for 1 sec with 1 user, Grails improved response time by 75% compared
with the Spring version (from 189 msec to 47 msec), and throughput increased roughly four-fold,
from 5.2 requests/sec to 21.2 requests/sec. A comparative analysis of the three technologies
with respect to response time and standard deviation is shown below. The results show that
Grails gives the best performance and was therefore the right choice of technology.
Response time should remain low irrespective of the number of users. As can be seen from the
graph, the line for Grails stays close to 0, and moving from Grails to Spring the line moves
away from 0. For the Java/Servlet version, the average response time increases with the number
of users.
Figure 1. Graph showing comparative analysis of Table3, Table4 and Table 5 in average response time
with number of users in 1 sec
Standard deviation is a quantity calculated to indicate the extent of deviation for a group as a
whole, and it should stay near 0 irrespective of the number of users. It is clear from the graph
that the standard deviation for Grails is very close to 0, confirming it as the right choice of
technology.
Figure 2. Graph showing comparative analysis of Table3, Table4 and Table 5 in standard deviation with
number of users in 1 sec
When we compared this report with the first test report of the Grails version, we found that for
10 sec with 20 samples there was an overall improvement in both response time and standard
deviation: response time improved by 83% (from 333 msec to 56 msec) and standard deviation by
86% (from 51.3 to 7.03), which is remarkable in itself.
Samples  Avg (msec)  Min (msec)  Max (msec)  Std. Dev.  Error %  Throughput (/sec)  KB/sec  Avg. Bytes
10 sec
20 333 294 530 51.3 0 2.03 0.65 327
20 56 50 76 7.031358 0 2.093145 1.396111 683
Table 7. Comparison of 1st v/s Optimized Groovy Grails Report
It is also to be noted that, for 60 sec with 700 users, the response time is now 50 milliseconds,
the standard deviation is 8.2, and the throughput is 11.5 requests/sec.
6. CONCLUSION
Comparison of the three technologies, namely Java/Servlet, the Spring Framework, and Grails,
led us to the conclusion that Grails is the better performing platform for the project we
undertook. Features such as RSS feeds and domain modeling allow faster development of the
application while keeping the focus on functional code. Through the various optimizations, the
system has shown an overall improvement of 84% in response time and 93% in standard
deviation. In the latest work order the system did not show any performance issues, and the
servers functioned smoothly without any downtime. After the improvements, we did not face any
memory-leak issues. Through continuous improvement of the application, we have been able to
gain customer satisfaction.
ACKNOWLEDGEMENT
The authors would like to thank Mrs. R.T.Sundari for her continuous guidance in writing the
paper. We would also like to thank the EdCIL PEPAS team for their continuous effort in
improving the application.
About Authors
Priyanka Dutta has 8 years of experience and is currently working as a Senior Technical
Officer with CDAC. She has worked on various web-based projects, namely EdCIL Pre
Examination Process Automation, Hospital Information Management System, and Development
of a Robust Document Analysis and Recognition System for Printed Indian Scripts. She has
made significant contributions to projects such as Provident Fund Automation for CDAC, OCR
Tool Development, Payroll Generation for PGI Chandigarh, and the Digital Library for Vigyan
Prasar. She has exposure to a wide range of technologies and platforms.
Vasudha Gupta has 4.5 years of experience and is currently working as a Technical Officer
with CDAC. Earlier she worked with Oracle Financial Services Software Limited for 1.5 years
on the renowned banking product "Flexcube". She has worked on various web-based projects,
namely EdCIL Pre Examination Process Automation, ONES, etc.
Sunit Kumar Rana has 7 years of experience and is currently working as a Project Engineer - II
with CDAC. He has worked on various web-based projects, namely EdCIL Pre Examination
Process Automation, DPIMS, HIMS, DWH, and FMS. He has exposure to a wide range of
technologies and is SCJP certified.