In recent years, many software vendors have started delivering projects incrementally with very short release
cycles. The best-known examples of this approach are the Ubuntu operating system, with its six-month release
cycle, and popular web browsers such as Google Chrome, Opera and Mozilla Firefox. However, project managers
have very little knowledge available to validate the chosen release cycle length. We propose a decision support
system that helps validate and estimate release cycle length in the early development phase, assuming that
release cycle length is directly affected by three factors: (i) choosing the right requirements for the current
cycle, (ii) estimating the approximate time for each requirement, and (iii) requirement-wise feedback from the
last iteration based on product reception, model accuracy and failed requirements. We have adapted the EVOLVE
technique proposed by G. Ruhe to select the best requirements for the current cycle and map them to the time
domain using UCP (Use Case Points) based estimation and feedback factors. The model has been evaluated on both
in-house and industry projects.
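For readers unfamiliar with Use Case Points, the sketch below shows the standard UCP calculation that the mapping to the time domain relies on. The weights and the hours-per-UCP figure are the textbook defaults from Karner's method, not values taken from the proposed model.

```python
# Minimal Use Case Points (UCP) estimate; weights follow Karner's original method.
def ucp_effort(simple_uc, average_uc, complex_uc,
               simple_actors, average_actors, complex_actors,
               tcf=1.0, ecf=1.0, hours_per_ucp=20.0):
    """Return (UCP, effort in person-hours)."""
    uucw = 5 * simple_uc + 10 * average_uc + 15 * complex_uc            # unadjusted use case weight
    uaw = 1 * simple_actors + 2 * average_actors + 3 * complex_actors   # unadjusted actor weight
    ucp = (uucw + uaw) * tcf * ecf                                      # adjusted use case points
    return ucp, ucp * hours_per_ucp                                     # 20 h/UCP is a common default

if __name__ == "__main__":
    ucp, hours = ucp_effort(simple_uc=4, average_uc=6, complex_uc=2,
                            simple_actors=2, average_actors=1, complex_actors=1)
    print(f"UCP = {ucp:.1f}, estimated effort = {hours:.0f} person-hours")
```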
SRGM Analyzers Tool of SDLC for Improving Software Quality (IJERA Editor)
Software Reliability Growth Models (SRGM) have been developed to estimate software reliability measures such as
the software failure rate, the number of remaining faults and software reliability. In this paper, a software
analyzer tool is proposed for deriving several software reliability growth models based on the Enhanced
Non-homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The proposed
models are initially formulated for the case where there is no differentiation between the failure observation
and fault removal testing processes, and are then extended to the case where there is a clear differentiation
between the two. Many SRGMs have been developed to describe software failures as a random process and can be used
to measure the development status during testing. With SRGMs, software consultants can easily measure (or
evaluate) software reliability (or quality) and plot software reliability growth charts.
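As background, the sketch below fits the classic Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts. The data are invented, and the ENHPP models discussed in the paper add coverage, imperfect-debugging and error-generation terms on top of this basic form.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected cumulative failures by time t.
def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Illustrative test data: week index and cumulative failures observed.
weeks = np.arange(1, 11, dtype=float)
cum_failures = np.array([12, 21, 29, 35, 40, 43, 46, 48, 49, 50], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=[60.0, 0.1])
remaining = a_hat - mean_value(weeks[-1], a_hat, b_hat)
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"estimated remaining faults after week {int(weeks[-1])}: {remaining:.1f}")
```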
Previous research concludes that testing plays a vital role in the development of a software product. Since software testing is the principal approach to assuring software quality, most development effort is put into it. But software testing is an expensive process and consumes a lot of time, so testing should start as early as possible in development to control cost and schedule problems. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in developing a software product. Software testing is a trade-off between budget, time and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance and usability; hence, software testing faces a collection of challenges.
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system mainly depends on how much time is spent on testing, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by the developers, and the type of testing environment, all subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but the testing cost also increases. On the contrary, if the testing time is too short, the software cost can be reduced, provided customers accept the risk of buying unreliable software. However, this increases the cost during the operational phase, since it is more expensive to fix an error during operation than during testing. It is therefore essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software based on a three-tier client-server architecture, which facilitates software developers in ensuring on-time delivery of a software product that meets the criteria of achieving a predefined level of reliability and minimizing cost. A numerical example illustrates the experimental results, showing significant improvements over conventional statistical models based on NHPP.
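The cost-based stopping rule described above usually takes the following form; the notation below is the standard NHPP release-time cost model, shown here only for orientation, and the risk factor and three-tier split introduced in the paper extend it.

```latex
C(T) = c_1\, m(T) \;+\; c_2\,\bigl[m(T_{LC}) - m(T)\bigr] \;+\; c_3\, T
```

Here m(T) is the expected number of faults detected by testing time T, c_1 and c_2 are the costs of fixing a fault during testing and during operation (with c_2 > c_1), c_3 is the testing cost per unit time, and T_LC is the software life-cycle length. The release instant T* is the T that minimizes C(T), optionally subject to a reliability requirement R(x | T*) >= R_0.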
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors in order to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at various phases of the object-oriented software development life cycle. The aim is to find the best metrics suite for software quality improvement through software testability support. The ultimate objective is to establish the groundwork for finding ways to reduce testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS (IJCSEA Journal)
Software testing is a process performed continuously by the development team during the life cycle of the software, with the motive of detecting faults as early as possible. Regression testing is the most suitable technique for this, in which we rerun a number of test cases. As the number of test cases can be very large, it is always preferable to prioritize them based upon certain criteria. In this paper a prioritization strategy is proposed that prioritizes test cases based on requirements analysis, as sketched below. With regression testing, if the requirements vary in the future, the software can be modified in such a manner that the remaining parts of the software are not affected. The proposed system improves the testing process and its efficiency in achieving goals regarding quality, cost and effort as well as user satisfaction, and the results of the proposed method are evaluated with the help of a performance evaluation metric.
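A minimal sketch of one common requirement-based prioritization scheme follows; the factor names (customer priority, volatility) and the weights are illustrative, not the paper's exact metric.

```python
# Score each test case by the requirements it covers, then run highest-scoring first.
requirements = {
    # req id: (customer_priority 1-10, volatility 1-10)
    "R1": (9, 3),
    "R2": (5, 8),
    "R3": (2, 2),
}

test_cases = {
    "TC1": ["R1"],
    "TC2": ["R2", "R3"],
    "TC3": ["R1", "R2"],
}

def score(req_ids, w_priority=0.6, w_volatility=0.4):
    return sum(w_priority * requirements[r][0] + w_volatility * requirements[r][1]
               for r in req_ids)

ordered = sorted(test_cases, key=lambda tc: score(test_cases[tc]), reverse=True)
print(ordered)   # ['TC3', 'TC2', 'TC1'] for these illustrative values
```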
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS (ijcsa)
Software metrics are directly linked to measurement in software engineering. Correct measurement is a precondition in every engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection of software becomes a harder task. Most software engineers worry about the quality of software and about how to measure and enhance it. The overall objective of this study was to assess and analyse the software metrics used to measure the software product and process.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand software metrics. The study identifies software quality as a measure of how software is designed and how well the software conforms to that design. Some of the attributes we look for in software quality are correctness, product quality, scalability, completeness and absence of bugs. However, the quality standards used by one organization differ from those of others; for this reason it is better to apply software metrics to measure the quality of software, together with the most common current software metrics tools, to reduce subjectivity during the assessment of software quality. The central contribution of this study is an overview of software metrics that illustrates the development of this area, and a critical analysis of the main metrics found in the literature.
In this talk, I explore what productivity means to software developers, how we might track the value that is delivered in software produced by developers and how we might begin to think about measuring the productive delivery of effective software.
Keynote at International Conference on Performance Engineering (ICPE) 2020.
Although there has been extensive study of delivering, increasing and maintaining software quality, there has been little work on rating a software product's quality. This study surveys the literature to date and outlines the scope of, and need for, the evolution of a software quality rating system in the future.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early
development phases, design complexity metrics are considered useful indicators of software testing effort and of
some quality attributes. Although many studies investigate the relationship between design complexity and cost
and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a
systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman
correlation coefficients from 59 different data sets from 57 primary studies using a tailored meta-analysis
approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The
Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality
attribute indicators. Moreover, the impact of these metrics is not different in proprietary and open source
projects. The results provide some implications for building quality models across project types.
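For context, a standard way to pool correlation coefficients across data sets is the Fisher z-transformation with sample-size weighting, sketched below; the paper's tailored meta-analysis procedure may differ in its details, and the values shown are invented.

```python
import numpy as np

def pooled_correlation(r_values, sample_sizes):
    """Pool correlation coefficients via Fisher z with (n - 3) weighting."""
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    z = np.arctanh(r)                      # Fisher z-transform
    w = n - 3.0                            # inverse-variance weights
    z_bar = np.sum(w * z) / np.sum(w)
    return np.tanh(z_bar)                  # back-transform to a pooled r

# Illustrative values: correlations between a complexity metric and fault counts.
print(pooled_correlation([0.45, 0.30, 0.60], [50, 120, 80]))
```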
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A HYBRID APPROACH COMBINING RULE-BASED AND ANOMALY-BASED DETECTION AGAINST DD... (IJNSA Journal)
We have designed a hybrid approach combining rule-based and anomaly-based detection against DDoS attacks. In this
approach, the rule-based detection establishes a set of rules, and the anomaly-based detection uses a one-way
ANOVA test to detect possible attacks. We adopt TFN2K (Tribe Flood Network 2K) as an attack traffic generator and
monitor the victim's system resources, such as throughput, memory utilization and CPU utilization consumed by
attack traffic. The target users of the proposed scheme are data center administrators. The types of attack
traffic have been analysed and, based on this, we develop a defense scheme. The experiment demonstrates that the
proposed scheme can effectively detect the attack traffic.
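The anomaly-detection side can be illustrated with SciPy's one-way ANOVA test comparing traffic samples from different observation windows; the window layout, feature (packets per second) and significance threshold below are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.stats import f_oneway

# Packets-per-second samples from three observation windows (illustrative data).
baseline_window = np.array([110, 98, 105, 102, 99, 108])
recent_window_1 = np.array([104, 101, 97, 111, 106, 100])
recent_window_2 = np.array([480, 455, 510, 470, 495, 502])   # surge consistent with flooding

f_stat, p_value = f_oneway(baseline_window, recent_window_1, recent_window_2)
ALPHA = 0.01   # significance level for raising an alert
if p_value < ALPHA:
    print(f"possible DDoS: window means differ significantly (F={f_stat:.1f}, p={p_value:.2e})")
else:
    print("traffic looks consistent with the baseline")
```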
AN APPROACH FOR SCRIPT IDENTIFICATION IN PRINTED TRILINGUAL DOCUMENTS USING T... (ijaia)
In this work, we review the effectiveness of texture features for script classification. A Rectangular White
Space analysis algorithm is used to analyze and identify heterogeneous layouts of document images. The texture
features, namely colour texture moments, the Local Binary Pattern (LBP) and the responses of Gabor, LM-filter,
S-filter and R-filter banks, are extracted, and combinations of these are considered in the classification. In
this work, a probabilistic neural network and a Nearest Neighbour classifier are used for classification. To
corroborate the adequacy of the proposed strategy, an experiment was conducted on our own data set. To study the
effect on classification accuracy, we vary the database sizes, and the results show that the combination of
multiple features vastly improves the performance.
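A minimal sketch of the LBP-plus-nearest-neighbour portion of such a pipeline is shown below using scikit-image and scikit-learn. The images are random stand-ins, the script labels and parameters are illustrative, and the combination with Gabor and filter-bank responses used in the paper is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray_image, points=8, radius=1.0):
    """Uniform LBP histogram used as a texture feature vector."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2                      # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Illustrative stand-ins for script blocks (random noise instead of real document crops).
rng = np.random.default_rng(0)
blocks = [rng.random((64, 64)) for _ in range(6)]
labels = ["script_a", "script_b", "script_c", "script_a", "script_b", "script_c"]

X = np.array([lbp_histogram(b) for b in blocks])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([lbp_histogram(rng.random((64, 64)))]))
```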
MODEL DRIVEN WEB APPLICATION DEVELOPMENT WITH AGILE PRACTICES (ijseajournal)
Model driven development is an effective method owing to benefits such as code transformation, increased
productivity and fewer human-based errors. Meanwhile, agile software development increases software flexibility
and customer satisfaction by using an iterative method. Can these two development approaches be combined to
develop web applications efficiently? What are the challenges and what are the benefits of this approach? In this
paper, we answer these two crucial questions: combining model driven development and agile software development
results not only in fast development and ease of user interface design but also in efficient job tracking. We
also define an agile model-based approach for web applications, and an implementation study has been carried out
to support the answers we give to these two questions.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMS (IJNSA Journal)
This paper proposes a novel color image encryption scheme based on multiple chaotic systems. The ergodicity
property of chaotic systems is utilized to perform the permutation process, and a substitution operation is
applied to achieve the diffusion effect. In the permutation stage, the 3D color plain-image matrix is converted
to a 2D image matrix, and two generalized Arnold maps are then employed to generate hybrid chaotic sequences that
depend on the plain-image's content. The generated chaotic sequences are applied to perform the permutation
process. The encryption key streams depend not only on the cipher keys but also on the plain-image, and the
scheme can therefore resist chosen-plaintext as well as known-plaintext attacks. In the diffusion stage, four
pseudo-random gray value sequences are generated by another generalized Arnold map. The gray value sequences are
applied to perform the diffusion process by a bit-XOR operation with the permuted image, row by row or column by
column, to improve the encryption rate. Security and performance analyses have been performed, including key
space analysis, histogram analysis, correlation analysis, information entropy analysis, key sensitivity analysis
and differential analysis. The experimental results show that the proposed image encryption scheme is highly
secure thanks to its large key space and efficient permutation-substitution operation, and it is therefore
suitable for practical image and video encryption.
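To make the permutation stage concrete, the sketch below scrambles pixel coordinates with a generalized Arnold (cat) map. The control parameters and iteration count are illustrative; in the scheme described above they are derived from the plain-image, and the substitution/diffusion stage is not shown.

```python
import numpy as np

def arnold_permute(img, a=1, b=2, rounds=3):
    """Permute an N x N image with the generalized Arnold map (illustrative parameters)."""
    assert img.shape[0] == img.shape[1], "square image expected"
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # Unimodular map: determinant 1, so it is a bijection on the pixel grid.
                nx = (x + a * y) % n
                ny = (b * x + (a * b + 1) * y) % n
                scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(arnold_permute(img))
```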
This paper presents a review and a comparative evaluation of several well-known machine learning algorithms in
terms of their suitability and code performance on a given data set of any size. In this paper, we describe our
Machine Learning ToolBox, which we have built using the Python programming language. The algorithms in the
toolbox consist of supervised classification algorithms such as Naïve Bayes, Decision Trees, SVM, K-nearest
Neighbors and Neural Networks (backpropagation). The algorithms are tested on the iris and diabetes datasets and
are compared on the basis of their accuracy under different conditions. Using our tool, however, one can apply
any of the implemented ML algorithms to any dataset of any size. The main goal of building the toolbox is to
provide users with a platform to test their datasets on different machine learning algorithms and use the
accuracy results to determine which algorithm fits the data best. The toolbox allows users to choose a dataset of
their choice, either in structured or unstructured form, and then select the features they want to use for
training. We give our concluding remarks on the performance of the implemented algorithms based on experimental
analysis.
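The toolbox itself is not reproduced here; the sketch below shows the general comparison pattern on the iris data set with scikit-learn, using the same families of classifiers the abstract lists. The hyperparameters are illustrative and the toolbox's implementation may differ.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Neural Net (backprop)": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)      # 5-fold cross-validated accuracy
    print(f"{name:22s} mean accuracy = {scores.mean():.3f}")
```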
A FRAMEWORK FOR INTEGRATING USABILITY PRACTICES INTO SMALL-SIZED SOFTWARE DEV... (ijseajournal)
Usability now appears to be a highly important attribute of software quality; it is a critical factor that
needs to be considered by every software-development organization when developing software, in order to improve
customer satisfaction and increase competitiveness in the market. There is a lack of a reference model or
framework for small-sized software-development organizations to indicate which usability practices should
be implemented, and where in the system-development life cycle they need to be considered. We offer
developers who have the objective of integrating usability practices into their development life cycle a
framework that characterizes 10 selected user-centered design (UCD) methods in relation to five relevant
criteria based on some ISO factors that have an effect on the selection of methods (ISO/TR16982). The
selection of the methods for inclusion in the framework responds to these organizations’ needs; and we
selected basic methods that are recommended, cost-effective, simple to plan and apply, and easy to learn by
developers; and which can be applied when time, resources, skills, and expertise are limited. We favor
methods that are generally applicable across a wide range of development environments. The selected
methods are organized in the framework according to the stages in the development process where they
might be applied. The only requirement for the existing development life cycle is that it be based on an
iterative approach.
EVALUATION OF A NEW INCREMENTAL CLASSIFICATION TREE ALGORITHM FOR MINING HIGH... (mlaij)
A new model for the online machine learning of high-speed data streams is proposed, to overcome the severe
restrictions associated with existing machine learning algorithms. Most of the existing models have three
principal steps. In the first step, the system creates a model incrementally. In the second step, the time taken
by the examples to complete a prescribed procedure at their arrival speed is computed. In the third and final
step, the size of the memory required for computation is predicted in advance. To overcome these restrictions we
propose a new data stream classification algorithm, in which the data can be partitioned into a stream of trees.
In this algorithm, the existing tree can be updated with new data. This algorithm, called the incremental
classification tree algorithm, is shown to be an excellent solution for processing large data streams. In this
paper, we present the experimental results of our new algorithm and show that our method eliminates the problems
of the existing methods.
MODELING, IMPLEMENTATION AND PERFORMANCE ANALYSIS OF MOBILITY LOAD BALANCING ... (IJCNCJournal)
We propose in this paper a simulation implementation of Self-Organizing Network (SON) optimization related to
mobility load balancing (MLB) for LTE systems using ns-3 [1]. The implementation covers two MLB algorithms that
dynamically adjust handover (HO) parameters based on Reference Signal Received Power (RSRP) measurements. Such
adjustments are made with respect to the loads of an overloaded cell and of those neighbouring cells that have
enough available resources to achieve load balancing. Numerical investigations through selected key performance
indicators (KPIs), comparing the proposed MLB algorithms with another HO algorithm (already implemented in ns-3)
based on the A3 event [2], highlight the significant MLB gains provided in terms of global network throughput,
packet loss rate and the number of successful HOs, without incurring significant overhead.
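The core of an MLB algorithm of this kind can be sketched as a periodic adjustment of handover offsets toward less-loaded neighbours. The load thresholds, step size and cap below are illustrative, not the values used in the ns-3 implementation.

```python
# Illustrative mobility load balancing step: shift handover offsets toward lighter neighbours.
LOAD_HIGH = 0.8        # utilisation above which a cell is considered overloaded
LOAD_HEADROOM = 0.5    # neighbour utilisation below which it may absorb traffic
CIO_STEP_DB = 1.0      # offset adjustment per MLB period
CIO_MAX_DB = 6.0

def mlb_step(cell_load, neighbour_loads, cio):
    """Return updated per-neighbour offsets (dB) for one potentially overloaded cell."""
    new_cio = dict(cio)
    if cell_load <= LOAD_HIGH:
        return new_cio                       # nothing to do, keep current offsets
    for ncell, load in neighbour_loads.items():
        if load < LOAD_HEADROOM:             # neighbour has spare capacity
            new_cio[ncell] = min(new_cio.get(ncell, 0.0) + CIO_STEP_DB, CIO_MAX_DB)
    return new_cio

print(mlb_step(0.9, {"cellB": 0.3, "cellC": 0.7}, {"cellB": 0.0, "cellC": 0.0}))
# -> {'cellB': 1.0, 'cellC': 0.0}: handovers toward cellB are triggered earlier
```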
FLEXIBLE VIRTUAL ROUTING FUNCTION DEPLOYMENT IN NFV-BASED NETWORK WITH MINIMU... (IJCNCJournal)
In a conventional network, most network devices, such as routers, are dedicated devices that do not vary much
in capacity. In recent years, a new concept, Network Functions Virtualisation (NFV), has come into use. The
intention is to implement a variety of network functions in software on general-purpose servers, which allows the
network operator to select the capabilities and locations of network functions without physical constraints.
This paper focuses on the deployment of NFV-based routing functions, which are among the critical virtual
network functions, and presents an algorithm for virtual routing function allocation that minimizes the total
network cost. In addition, this paper presents a useful allocation policy for virtual routing functions, based on
an evaluation with a ladder-shaped network model. This policy takes into consideration the ratio of the cost of a
routing function to that of a circuit, and the traffic distribution in the network. Furthermore, this paper shows
that there are cases where the use of NFV-based routing functions makes it possible to reduce the total network
cost dramatically, in comparison with a conventional network, in which it is not economically viable to
distribute small-capacity routing functions.
PREDICTING STUDENT ACADEMIC PERFORMANCE IN BLENDED LEARNING USING ARTIFICIAL ... (ijaia)
Along with the spread of online education, the importance of actively supporting students involved in online
learning processes has grown. The application of artificial intelligence in education allows instructors to
analyze data extracted from university servers, identify patterns of student behavior and develop interventions
for struggling students. This study used student data stored in a Moodle server and predicted student success in
a course based on four learning activities: communication via emails, collaborative content creation with a
wiki, content interaction measured by files viewed, and self-evaluation through online quizzes. A model based on
the Multi-Layer Perceptron neural network was then trained to predict student performance in a blended learning
course environment. The model predicted the performance of students with a correct classification rate (CCR) of
98.3%.
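A minimal sketch of the prediction step with the four activity counts as inputs is shown below using scikit-learn's multi-layer perceptron. The feature values, network size and labels are illustrative, not data extracted from the Moodle server used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Features per student: [emails sent, wiki edits, files viewed, quizzes taken] (illustrative).
X = np.array([[12, 5, 40, 8], [1, 0, 3, 1], [7, 2, 25, 5],
              [0, 1, 6, 0], [15, 9, 55, 10], [3, 0, 10, 2]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = passed the course, 0 = failed

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[10, 4, 35, 7]]))   # predicted outcome for a new student
```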
GAME THEORY BASED INTERFERENCE CONTROL AND POWER CONTROL FOR D2D COMMUNICATIO... (IJCNCJournal)
With the current development of mobile communication services, people need personal communication with high
speed, excellent service, high quality and low latency; however, limited spectrum resources have become the most
important factor hampering the improvement of cellular systems. As large amounts of data traffic cause greater
local consumption of spectrum resources, future networks are required to have appropriate techniques to better
support such forms of communication. D2D (Device-to-Device) communication technology in a cellular network makes
full use of the underlaid spectrum resources, reduces the load on the base station, and minimizes the transmit
power of terminals and base stations, thereby enhancing the overall throughput of the network. The interference
caused by D2D UEs (user equipment) multiplexing cellular resources and spectrum, and by the sharing of resources
between adjacent cells, has become a major factor affecting the coexistence of cellular subscribers and D2D
users. When D2D communication multiplexes the uplink resources, the base stations are easily disturbed; when the
downlink resources are multiplexed, the downlink users are susceptible to interference. In order to build a
highly efficient mobile network, we can meet the QoS requirements by controlling the power so as to suppress the
interference between the base station and a terminal user.
OPTIMIZATION OF NEURAL NETWORK ARCHITECTURE FOR BIOMECHANIC CLASSIFICATION TA... (ijaia)
Electromyogram (EMG) signals contain valuable information that can be used in man-machine interfacing between
human users and myoelectric prosthetic devices. However, EMG signals are complicated and prove difficult to
analyze due to physiological noise and other issues. Computational intelligence and machine learning techniques,
such as artificial neural networks (ANNs), serve as powerful tools for analyzing EMG signals and creating optimal
myoelectric control schemes for prostheses. This research examines the performance of four different neural
network architectures (feedforward, recurrent, counterpropagation and self-organizing map) that were tasked with
classifying walking speed when given EMG inputs from 14 different leg muscles. Experiments conducted on the data
set suggest that self-organizing map neural networks are capable of classifying walking speed with greater than
99% accuracy.
IMAGE BASED RECOGNITION - RECENT CHALLENGES AND SOLUTIONS ILLUSTRATED ON APPL... (mlaij)
In this paper, problems and solutions for the automatic recognition of miscellaneous materials, especially
bulk materials are discussed. The fact that many materials, especially natural materials, have a strong
phenotypic variability resulting in high intra-class and low inter-class variability of the calculated features
poses a complex recognition problem. The recognition of components of a wheat sample or the
classification of mineral aggregates serves as an example to demonstrate different aspects in segmentation,
feature extraction, classifier design and complexity assessment. We present a technique for the
segmentation of highly overlapping and touching objects into single object images, a proposal for feature
selection and classifier parameter optimization, as well as a method to visualise the complexity of a
high-dimensional recognition problem in a three-dimensional space. Every step of the pattern recognition
process needs to be optimized carefully with special attention to the risk of overfitting. Modern processors
and the application of field-programmable gate arrays as well as the outsourcing of processing steps to the
graphic processing unit speed up the calculation and make real-time computation possible also for highly
complex recognition problems such as the quality assurance of bulk materials.
ENERGY CONSUMPTION IMPROVEMENT OF TRADITIONAL CLUSTERING METHOD IN WIRELESS S... (IJCNCJournal)
Among traditional clustering routing protocols for wireless sensor networks, the LEACH (Low Energy Adaptive
Clustering Hierarchy) protocol is considered to have many outstanding advantages: it implements a hierarchy of
low-energy adaptive clusters to collect data and deliver it to the base station. The main objectives of LEACH are
to prolong the lifetime of the network and to reduce the energy consumed by each node, using data aggregation to
reduce the number of messages in the network. However, in a large network the distances from the nodes to the
base station vary widely. The energy consumed by a node acting as cluster-head therefore varies greatly, yet
LEACH does not consider the remaining energy when choosing the cluster-head; it relies only on the number of
times a node has become cluster-head in previous rounds. This makes the nodes far away from the base station lose
power sooner.
In this paper, we propose a new routing protocol based on LEACH that improves the operating time of the sensor
network by considering energy and distance when selecting the cluster-head (CH), so that nodes with high energy
that are near the base station (BS) have a greater probability of becoming the cluster-head than those that are
far away and have lower energy.
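The sketch below shows the standard LEACH cluster-head election threshold and one illustrative way to weight it by residual energy and distance to the base station. The exact weighting used by the proposed protocol is not reproduced here.

```python
import random

P = 0.05   # desired fraction of cluster-heads per round

def leach_threshold(round_no):
    """Classic LEACH threshold T(n) for nodes that have not been CH in the current epoch."""
    return P / (1.0 - P * (round_no % int(1.0 / P)))

def modified_threshold(round_no, residual_energy, max_energy, dist_to_bs, max_dist):
    """Illustrative energy- and distance-aware variant: favours high-energy, nearby nodes."""
    return leach_threshold(round_no) * (residual_energy / max_energy) * (1.0 - dist_to_bs / max_dist)

def becomes_cluster_head(round_no, residual_energy, max_energy, dist_to_bs, max_dist):
    return random.random() < modified_threshold(round_no, residual_energy, max_energy,
                                                dist_to_bs, max_dist)

print(becomes_cluster_head(3, residual_energy=0.8, max_energy=1.0,
                           dist_to_bs=30.0, max_dist=100.0))
```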
PERFORMANCE EVALUATION OF MOBILE WIMAX IEEE 802.16E FOR HARD HANDOVER (IJCNCJournal)
Seamless handover in wireless networks must guarantee both service continuity and service quality. In WiMAX,
providing scalability and quality of service for multimedia services during handover is a major challenge because
of high latency and packet loss. In this paper, we created four scenarios using the Qualnet 5.2 network simulator
to analyze the hard handover functionality of WiMAX under different conditions. The scenarios considered are Flag
mobility with 5- and 10-second UCD and DCD interval values, a random mobility scenario, and a DEM scenario using
6 WiMAX cells. The study is performed over the real urban area of JNU: the JNU map is used for Scenarios 1, 2 and
3, while the JNU terrain data is used for Scenario 4. Further, each BS of the 6 WiMAX cells is connected to four
nodes. All nodes in each scenario are fixed except Node 1, which moves and performs handovers between the
different BSs while sending and receiving real-time traffic. The Flag mobility model is used in Scenarios 1, 2
and 4 to model the movement of Node 1, while the random mobility model is used in Scenario 3. A 5-second time
interval is used for Scenarios 1, 3 and 4, and a 10-second interval for Scenario 2, to study the effect of
management-message load on handover. Finally, statistical measures of WiMAX handover performance, in terms of the
number of handovers performed, throughput, end-to-end delay, jitter and packets dropped, are observed and
evaluated.
Big data is a prominent term that characterizes the growth and availability of data in all three formats:
structured, unstructured and semi-structured. Structured data is located in fixed fields of a record or file and
is found in relational databases and spreadsheets, whereas an unstructured data file includes text and multimedia
content. The primary objective of the big data concept is to describe extremely large data sets, both structured
and unstructured. It is further defined by three "V" dimensions, namely Volume, Velocity and Variety, with two
more Vs added, namely Value and Veracity. Volume denotes the size of the data, Velocity refers to the speed of
data processing, Variety describes the types of data, Value is the business value derived, and Veracity describes
the quality and understandability of the data. Nowadays, big data has become a distinct and preferred research
area in the field of computer science. Many open research problems exist in big data, and good solutions have
been proposed by researchers, although there is still a need to develop new techniques and algorithms for big
data analysis in order to obtain optimal solutions. In this paper, a detailed study of big data, its basic
concepts, history, applications, techniques, research issues and tools is presented.
A NOVEL METHOD FOR REDUCING TESTING TIME IN SCRUM AGILE PROCESS (ijseajournal)
Recently, software development in industry has been moving towards agile due to the advantages provided by the
agile development process. The main advantages of agile software development are delivering high quality software
in shorter intervals and embracing change. Testing is a vital activity for delivering a high quality software
product, and it often accounts for more project effort and time than any other software development activity.
Testing strategies for conventional process models are well established, but these strategies are not directly
applicable to agile testing without modifications and changes. In this paper, a novel method for agile testing in
the Scrum software development environment is proposed and presented. The sprint and testing activities that form
the context for the proposed testing method are presented. The proposed method is applied to two case studies.
The results indicate that testing time can be reduced considerably by applying the proposed method.
ENHANCE RFID SECURITY AGAINST BRUTE FORCE ATTACK BASED ON PASSWORD STRENGTH A... (IJNSA Journal)
RFID systems are among the important techniques used in modern technologies; these systems rely heavily on
default and random passwords. Due to the increasing use of RFID in various industries, security and privacy
issues should be addressed carefully, as there is no efficient way to achieve security in this technology. Some
active tags are low cost, and basic tags cannot use standard cryptographic operations, since the use of such
techniques increases the cost of the cards. This paper sheds light on the weaknesses of RFID systems and
identifies the threats and countermeasures of possible attacks. An algorithm was designed to ensure and measure
the strength of the passwords used in the authentication process between tag and reader, to enhance the security
of their communication and defend against brute-force attacks. Our algorithm is designed using modern techniques
based on entropy, password length, cardinality, Markov models and fuzzy logic.
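One of the ingredients named above, password entropy based on length and character-set cardinality, can be sketched as follows. The scoring band is illustrative, and the Markov-model and fuzzy-logic components the paper combines it with are not shown.

```python
import math
import string

def password_entropy_bits(password):
    """Estimate entropy as length * log2(cardinality of the character classes used)."""
    cardinality = 0
    if any(c in string.ascii_lowercase for c in password):
        cardinality += 26
    if any(c in string.ascii_uppercase for c in password):
        cardinality += 26
    if any(c in string.digits for c in password):
        cardinality += 10
    if any(c in string.punctuation for c in password):
        cardinality += len(string.punctuation)
    return len(password) * math.log2(cardinality) if cardinality else 0.0

for pw in ["1234", "Tag9!xQ2", "correct horse battery staple"]:
    bits = password_entropy_bits(pw)
    print(f"{pw!r}: {bits:.1f} bits -> {'weak' if bits < 40 else 'acceptable'}")
```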
MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION (IJDKP)
Developing predictive modelling solutions for risk estimation is extremely challenging in health-care
informatics. Risk estimation involves the integration of heterogeneous clinical sources with different
representations from different health-care providers, making the task increasingly complex. Such sources are
typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel
computing tools, collectively termed big data tools, are needed, which can synthesize the data and assist the
physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture,
a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We
demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study.
Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the
best-model approach. By modelling the error of the predictive models we are able to choose a subset of models
that yields accurate results. More information was modelled into the system by multi-level mining, which resulted
in enhanced predictive accuracy.
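The multi-model idea can be illustrated with a simple soft-voting ensemble over heterogeneous base learners. The base models, features and data below are placeholders: the Framingham features and the error-based model selection described above are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for clinical risk factors and a heart-failure label.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6, random_state=0)

base_models = [
    ("logreg", LogisticRegression(max_iter=2000)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=base_models, voting="soft")   # combine predicted probabilities

for name, model in base_models + [("multi-model", ensemble)]:
    print(f"{name:12s} accuracy = {cross_val_score(model, X, y, cv=5).mean():.3f}")
```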
ASSESSING THE PERFORMANCE AND ENERGY USAGE OF MULTI-CPUS, MULTI-CORE AND MANY... (ijdpsjournal)
This paper studies the performance and energy consumption of several multi-core, multi-CPU and many-core
hardware platforms and software stacks for parallel programming. It uses the Multimedia Multiscale
Parser (MMP), a computationally demanding image encoder application, which was ported to several
hardware and software parallel environments as a benchmark. Hardware-wise, the study assesses
NVIDIA's Jetson TK1 development board, the Raspberry Pi 2, and a dual Intel Xeon E5-2620/v2 server, as
well as NVIDIA's discrete GPUs GTX 680, Titan Black Edition and GTX 750 Ti. The assessed parallel
programming paradigms are OpenMP, Pthreads and CUDA, and a single-thread sequential version, all
running in a Linux environment. While the CUDA-based implementation delivered the fastest execution, the
Jetson TK1 proved to be the most energy efficient platform, regardless of the used parallel software stack.
Although it has the lowest power demand, the Raspberry Pi 2 energy efficiency is hindered by its lengthy
execution times, effectively consuming more energy than the Jetson TK1. Surprisingly, OpenMP delivered
twice the performance of the Pthreads-based implementation, proving the maturity of the tools and
libraries supporting OpenMP.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I... (ijseajournal)
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD),
a well-tested and widely used open source Java-based graphics framework developed with best software engineering
practice, was selected as the test suite. Six versions of this software were profiled and data were collected
dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, and (3)-(4) COCOMO effort
and duration, were used to analyze software degradation and maturity level, and the obtained results were used as
input to time series analysis in order to predict the effort and duration that may be needed for the development
of future versions. The novel idea is that historical evolution data are used to project, predict and forecast
resource requirements for future development. The technique presented in this paper will empower software
development decision makers with a viable tool for planning and decision making.
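For reference, the basic COCOMO effort and duration equations, two of the four metrics named above, take the form below. The coefficients are the standard organic-mode values; whichever COCOMO variant and coefficients the study actually used may differ.

```python
# Basic COCOMO, organic mode: effort in person-months, duration in months.
A, B = 2.4, 1.05      # effort coefficients (organic mode)
C, D = 2.5, 0.38      # duration coefficients (organic mode)

def cocomo_basic(kloc):
    effort = A * kloc ** B
    duration = C * effort ** D
    return effort, duration

for kloc in (32, 48, 64):          # illustrative sizes of successive versions
    e, d = cocomo_basic(kloc)
    print(f"{kloc} KLOC -> effort {e:.1f} person-months, duration {d:.1f} months")
```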
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA... (ijseajournal)
During the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some are half developed, some are largely developed, some are only slightly developed, and some are not developed yet. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget, while rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activities that help software project managers accept or reject changes during the software development phase. This paper extends our previous work on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows improvement in estimation accuracy over current change effort estimation models.
PROPOSING AUTOMATED REGRESSION SUITE USING OPEN SOURCE TOOLS FOR A HEALTH CAR... (ijseajournal)
Regression testing is very important for the delivery of a high quality product. It helps to run a suite of critical test cases periodically and to identify whether the introduction of new features or any source code change has adversely affected software quality or functionality. As a result, regression testing cannot be excluded from the software testing life cycle (STLC). But regression testing alone is not fully beneficial unless it is accompanied by test automation. An automated regression suite not only saves time and cost by re-running test scripts again and again, it also provides confidence that all the critical test cases have been covered, giving more confidence in the quality of the product and increasing the ability to meet schedules. It can exercise the whole software product every day without requiring much manual effort. The software in question is under continuous development, which requires testing again and again to check whether a new feature implementation has affected existing functionality. In addition, the team faces issues in validating installations at client sites and needs testers to be available to check the critical functionality of the software manually. This paper presents a solution: creating an automated regression suite for the software. The research provides guidelines to future researchers on how to create an automated regression suite for any web application using open source tools.
Tiara Ramadhani - S1 Information Systems Study Programme - Faculty of Science and Tech... (Tiara Ramadhani)
This assignment was prepared to fulfil one of the course requirements of the S1 Information Systems study programme.
By:
Name: Tiara Ramadhani
NIM: 11453201723
SIF VII E
UIN SUSKA RIAU
Defect Management Practices and Problems in Free/Open Source Software Projects (Waqas Tariq)
With the advent of Free/Open Source Software (F/OSS) paradigm, a large number of projects are evolving that make use of F/OSS infrastructure and development practices. Defect Management System is an important component in F/OSS infrastructure which maintains defect records as well as tracks their status. The defect data comprising more than 60,000 defect reports from 20 F/OSS Projects is analyzed from various perspectives, with special focus on evaluating the efficiency and effectiveness in resolving defects and determining responsiveness towards users. Major problems and inefficiencies encountered in Defect Management among F/OSS Projects have been identified. A process is proposed to distribute roles and responsibilities among F/OSS participants which can help F/OSS Projects to improve the effectiveness and efficiency of Defect Management and hence assure better quality of F/OSS Projects.
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING (ijseajournal)
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...IJCSES Journal
With the sharp rise in software dependability and failure cost, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity and are developed under tight constraints, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software that has no serious faults. In this case, object-oriented (OO) products, being the de facto standard of software development, may with their unique features contain faults that are hard to find, or make it hard to pinpoint the impact of changes. The earlier faults are identified, found and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used. Many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is which metrics are related to class FP and what activities are performed. Therefore, this study brings together the state-of-the-art in prediction of FP that utilizes CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles. The results obtained are analysed and presented. They indicate that 29 relevant empirical studies exist, and measures such as complexity, coupling and size were found to be strongly related to FP.
Testing is not a stand-alone activity. It has its place within a software development life cycle model and therefore the life cycle applied will largely determine how testing is organized.
A Complexity Based Regression Test Selection StrategyCSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world.
Therefore quality assurance, and in particular, software testing is a crucial step in the software
development cycle. This paper presents an effective test selection strategy that uses a Spectrum of
Complexity Metrics (SCM). Our aim in this paper is to increase the efficiency of the testing process by
significantly reducing the number of test cases without having a significant drop in test effectiveness. The
strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class,
method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our
approach to detect mutants as well as the seeded errors.
A DECISION SUPPORT SYSTEM TO CHOOSE OPTIMAL RELEASE CYCLE LENGTH IN INCREMENTAL SOFTWARE DEVELOPMENT ENVIRONMENTS
International Journal of Software Engineering & Applications (IJSEA), Vol.7, No.5, September 2016. DOI: 10.5121/ijsea.2016.7504
A DECISION SUPPORT SYSTEM TO CHOOSE OPTIMAL RELEASE CYCLE LENGTH IN INCREMENTAL SOFTWARE DEVELOPMENT ENVIRONMENTS
Avnish Chandra Suman¹, Saraswati Mishra¹, Abhinav Anand²
¹ Centre for Development of Telematics, New Delhi, India
² Intel Technologies, Bengaluru, India
ABSTRACT
In the last few years it has been seen that many software vendors have started delivering projects
incrementally with very short release cycles. Best examples of success of this approach has been Ubuntu
Operating system that has a 6 months release cycle and popular web browsers such as Google Chrome,
Opera, Mozilla Firefox. However there is very little knowledge available to the project managers to
validate the chosen release cycle length. We propose a decision support system that helps to validate and
estimate release cycle length in the early development phase by assuming that release cycle length is
directly affected by three factors, (i) choosing right requirements for current cycle, (ii) estimating proximal
time for each requirement, (iii) requirement wise feedback from last iteration based on product reception,
model accuracy and failed requirements. We have altered and used the EVOLVE technique proposed by G.
Ruhe to select best requirements for current cycle and map it to time domain using UCP (Use Case Points)
based estimation and feedback factors. The model has been evaluated on both in-house as well as industry
projects.
KEYWORDS
EVOLVE, Use Case Points, Feedback Factor, SRGMs, Genetic Algorithms
1. INTRODUCTION
Software Release Planning has been a classical problem. Rather than making optimal release policies, vendors now lean towards getting the best out of pre-enforced release times [2]. However, the practice of short release cycles is still relatively new, and it has affected the open source software market more than the proprietary software market. The results first became visible when Canonical started its own version of the Debian operating system with a 6-month release cycle instead of the older average 4-5 year cycles of most operating systems. The results were very promising; Ubuntu soon emerged as the third most used desktop OS with the highest growth rate. The same path has been followed by software such as Mozilla Firefox and Chromium. A study shows that Mozilla has not been able to keep up the overall quality even though functionality has been improving noticeably [2]. On the other hand, a drastic downfall was observed when Banshee shortened its cycle; the company reverted to its old release cycle. These varying results still leave unanswered the questions of how, and through which external factors, a shorter release cycle affects quality, and how exactly cycle time is related to the readiness of the software.
There is very little literature available on this scenario. Two noticeable papers, "Do Faster Releases Improve Software Quality? An Empirical Case Study of Mozilla Firefox" by Foutse Khomh, Tejinder Dhaliwal, Ying Zou and Bram Adams [2], and "Software release planning: an evolutionary and iterative approach" by D. Greer and G. Ruhe [1], may help us understand the current scenario. The first is a case study of Firefox and deals with quality estimation; the second tries to relate the incremental strategy to release decisions.
The problem of Software Release Planning dates back to the early 80's. The early solutions to this problem were fail-proof as they insisted on limiting bugs to zero. One such fail-proof decision technique by Brettschneider R. [4] specifies a condition such that no test failures are permitted to be found within a specified time limit before a release. Such a solution, however, no longer proves practical in today's business context. In the late 80's the focus shifted to Software Quality, which narrowed down to Reliability. Various popular SRGMs (Software Reliability Growth Models) were proposed, such as the Jelinski-Moranda model [5], NHPP models, the Exponential (Goel-Okumoto) model [6], the Modified Exponential model, etc.
These models were heavily used in software release time estimation in terms of the saturation of a reliability factor. A sample work by W.Y. Yun and D.S. Bai used all these models for release estimation [7].
In the 90's software release planning became more business-oriented and qualitative than ever, yet the available knowledge remained poor. A few new approaches were used to model release planning policies rather than to estimate the time itself [8]. In the next decade release planning met fields such as Data Mining and Soft Computing to solidify predictions. The most explanatory work of this era was "The Art and Science of Software Release Planning" [9], which tried to understand the problem with both qualitative and quantitative heads as well as human intuition.
Most of the works done so far estimate time using the data present in the testing phase. However, our aim is to estimate time during the requirement analysis phase of incremental development. The only decision-making data that may be present in this phase is the feedback from the previous phase as well as human intuition. We chose two popular works, EVOLVE [1] and Use Case Points [3], which were directly in context with our problem and did not require any testing data.
Since using an SRGM (Software Reliability Growth Model) is not possible in the planning phase, we have developed our own feedback mechanism and used it to modify the EVOLVE approach.
It is very probable that in the coming years more and more software will adhere to faster release cycles to cope with technology and competition. The trend is gaining popularity and needs to be thoroughly researched.
1.1 Evolve
EVOLVE is a proven evolutionary and iterative approach that optimally allocates requirements to increments and aims at continuous planning during incremental software development. We use EVOLVE to select the requirements to be satisfied in the current iteration only. According to EVOLVE [1], requirements are assigned to increments so as to maximize an objective function built from stakeholder-assigned priorities and values (a linear combination of benefit and penalty), subject to precedence, coupling and effort constraints; the detailed formulation (referred to below as constraints 1-3 and objective function (6)) is given in [1].
Since we deal with all possible combinations of requirements, a huge solution set is available and hence genetic algorithms can be applied successfully. A genetic algorithm is applied to maximize the objective function (6). Chromosomes satisfying constraints 1-3 are the only valid solutions and hence are filtered and considered suitable for the genetic algorithm. Ruhe suggests a crossover and mutation rate of 0.5. The output is an assignment of all the requirements to increments.
In the proposed model, only the requirements assigned to the current increment (release cycle) are of primary concern. The inputs, the stakeholder-determined requirement priorities and requirement values, will be modified using a feedback mechanism discussed ahead. The effort constraint will be replaced with a time constraint. The next section describes the altered version of EVOLVE used in the proposed approach.
1.2 Altered EVOLVE
The EVOLVE model was primarily developed for the requirements domain and does not deal in any way with the time domain. Hence we needed to alter the model to make it suitable for the time domain. We alter the EVOLVE method in two places to fit it into the time domain.
1. The effort constraint is replaced by a time constraint such that the total estimated time of the selected requirements does not exceed the deadline, i.e. Σ t(r) ≤ T_limit over the selected requirements r. Here t(r) represents the estimated time of a selected requirement and T_limit represents the deadline limit.
2. The Prio and Value matrices are altered by multiplying the perceived values of all those requirements that are being re-implemented (including the requirements generated as a consequence of previous requirement failures, e.g. major bugs) by the inverse of the feedback factor, i.e. 1/FF, which will be introduced in further sections. In short, the feedback factor is an overall evaluation of the model on a 0-1 scale. The significance of the feedback is discussed in the feedback factor section. A small illustrative sketch of both alterations follows.
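The following minimal Python sketch illustrates the two alterations under the assumptions stated above; all identifiers (times, deadline, prio, value, reimplemented, ff) are hypothetical names introduced here for illustration and are not taken from the paper.

# Illustrative sketch of the two EVOLVE alterations; all names are assumptions.

def satisfies_time_constraint(selected, times, deadline):
    # Alteration 1: the effort constraint becomes a time constraint, i.e. the
    # summed estimated times of the selected requirements must not exceed the
    # deadline limit.
    return sum(times[r] for r in selected) <= deadline

def apply_feedback(prio, value, reimplemented, ff):
    # Alteration 2: for requirements being re-implemented, the stakeholder
    # priority and value entries are multiplied by 1/ff (0 < ff <= 1), so work
    # that went badly in the previous cycle carries more weight this cycle.
    for stakeholder in prio:
        for r in reimplemented:
            prio[stakeholder][r] *= 1.0 / ff
            value[stakeholder][r] *= 1.0 / ff
    return prio, value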
The time required for a pre-determined project can best be calculated in the planning phase using Use Case Points. The time for individual requirements is then calculated using a weighted version of UCP, discussed ahead.
1.3 Use Case Points [3]
Use Case Points (UCP) is a widely-accepted use-case based software estimation approach. This
technique was developed in 1993 by Gustav Karner primarily for object oriented systems and
takes multiple technical and environmental considerations into account.
The equation is composed of four variables:
1. Technical Complexity Factor (TCF).
2. Environment Complexity Factor (ECF).
3. Unadjusted Use Case Points (UUCP).
4. Productivity Factor (PF).
Each variable is defined and computed separately, using perceived values and various constants.
The complete equation is: UCP = TCF * ECF * UUCP * PF
The UCP hence calculated is the estimated time for the entire project, assuming that all the requirements will be implemented in a single increment. A solution for estimating the time of each individual requirement is explained in the next section.
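As a quick illustration of the equation above, the short Python sketch below computes UCP; the helper formulas for TCF and ECF are the standard Use Case Points ones, which are also the formulas used in the case-study tables later in this paper, and the numeric inputs in the example are placeholders rather than project data.

# Minimal UCP sketch; the example values are placeholders only.

def tcf(total_technical_factor):
    # Technical Complexity Factor from the summed weighted technical factors.
    return 0.6 + 0.01 * total_technical_factor

def ecf(total_environmental_factor):
    # Environment Complexity Factor from the summed weighted environmental factors.
    return 1.4 - 0.03 * total_environmental_factor

def use_case_points(tcf_value, ecf_value, uucp, pf):
    # UCP = TCF * ECF * UUCP * PF; with PF expressed in hours per point the
    # result is the estimated effort in hours for the whole project.
    return tcf_value * ecf_value * uucp * pf

print(use_case_points(tcf(28), ecf(18), uucp=80, pf=20))  # about 1211 hours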
1.4 Weighted (Extended) Use Case Points analysis
Consider r(1) to r(n) to be all the candidate requirements that can be chosen for the current release cycle. In a practical development scenario, we consider the requirements to be highly unique and specific, so that each can be mapped to a single use case. We consider a situation where all such requirements need to be implemented and apply the traditional UCP approach to determine a time T. If the number of requirements is n, then divide the n requirements into three clusters based on the time needed (small, medium, big), and assign proportional weights a, b, c respectively such that:
• The value a/b, represents the approx ratio of time taken by small-size requirement to
a medium-size requirement.
• The value b/c, represents the approx ratio of time taken by medium-size requirement
to a big-size requirement.
• The value c/a, represents the approx ratio of time taken by big-size requirement to a
small-size requirement.
Now let W = a*i + b*j + c*k, where i, j and k are the respective numbers of requirements in the small, medium and big clusters. The approximate time of a requirement is then a*T/W for a small requirement, b*T/W for a medium requirement and c*T/W for a big requirement.
The Weights a, b, c can be conventionally assigned values 1, 2, 3 if a relative weight can’t be
estimated. Estimation can be further improved by using more than three clusters.
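Because the per-requirement formula was garbled in this transcription, the short Python sketch below shows one reconstruction that is consistent with the cluster times reported in the first case study (Table 5); the function and variable names are illustrative assumptions.

# Splits the project-level UCP estimate across the requirement clusters.
def cluster_times(total_time, weights, counts):
    # weights = (a, b, c): relative sizes of small/medium/big requirements
    # counts  = (i, j, k): number of requirements in each cluster
    a, b, c = weights
    i, j, k = counts
    unit = total_time / (a * i + b * j + c * k)
    return a * unit, b * unit, c * unit

# Reproducing sample project 1 (T = 1325 h, weights 1, 2, 3; three small,
# one medium and three big requirements):
print(cluster_times(1325, (1, 2, 3), (3, 1, 3)))  # about (94.6, 189.3, 283.9)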
We now have a set that holds the respective estimated times of the requirements. We next calculate the feedback factor.
If we are in the first increment we take the feedback factor to be 1. The value 1 signifies that the feedback is either perfect or not yet available. Otherwise, the feedback factor is calculated with the technique described in the next section.
The values of all those requirements that are being re-implemented (including the requirements generated as a consequence of previous requirement failures, e.g. major bugs) are multiplied by the inverse of the feedback factor, i.e. 1/FF.
We now introduce the feedback factor, which is used to modify the EVOLVE [1] inputs.
1.5 FEEDBACK Factor
The reasons for not using the Software reliability growth models have already been explained.
Instead a new approach is proposed to calculate the performance of our model and use this
feedback as a mechanism to improve the future predictions and estimations of the model.
Let us define that (for the immediately previous release):
• dT is a measure of the difference between the estimated and actual time.
• FR represents the number of selected requirements which failed in some manner, i.e. were not properly implemented, exceeded their time by a huge amount, were rejected by end users, faced a high count of bugs, etc., and need to be re-implemented.
• User Perception (UP) is the rating of the overall release by the end user or customer.
The method assumes that the variance or low feedback occurred because of one or more of the following reasons:
• Incorrect selection of requirements
• Incorrect priority or value Estimation by stakeholders
• Incorrect UCP time estimation
Hence we will now try to calculate a feedback factor (FF) which can be multiplied with the estimation values of the requirements of the previous release that are being re-implemented in the current release. (It also applies to the newly generated requirements that result from problems with the previous release.)
A function Evl, which calculates the feedback factor, is defined such that FF = Evl(dT, FR, UP). Evl is a linear function that sums up all the positive and negative feedbacks and gives a normalized output on a 0-1 scale, 0 declaring a complete project failure and 1 declaring complete success.
It takes three inputs:
1. dT = 0, if the actual time does not exceed the predicted time;
dT = (T(actual) - T(estimated)) / T(estimated), if the actual time exceeds the predicted time by a factor of two or less;
dT = 1, otherwise.
2. FR = (total number of failed requirements) / (total requirements implemented). It ranges from 0 to 1, 0 meaning no failed requirements and 1 meaning that all implemented requirements failed.
3. UP is a customer rating on a 0-1 scale, 0 being the minimum and 1 the maximum.
Now we define Evl(dT, FR, UP) as a linear combination of these three inputs that gives 50% weight to user perception and 50% weight to model accuracy (time and requirements), yielding a normalized feedback factor. The importance percentages of user perception and model accuracy can be adjusted according to the nature of the project and the business environment.
The FF (feedback factor) thus calculated will be used in the proposed solution to alter the UCP and EVOLVE inputs.
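The exact linear form of Evl was lost in this transcription, so the Python sketch below is only one reading that is consistent with the description above: half the weight on user perception and the other half split evenly between the time deviation dT and the failed-requirement ratio FR. The weighting itself is therefore an assumption.

def time_deviation(t_actual, t_estimated):
    # dT as defined above: 0 if on time, the relative overrun if the estimate
    # is exceeded by a factor of two or less, and 1 otherwise.
    if t_actual <= t_estimated:
        return 0.0
    overrun = (t_actual - t_estimated) / t_estimated
    return overrun if overrun <= 1.0 else 1.0

def evl(dT, FR, UP):
    # One possible linear Evl: 50% weight to user perception UP and 50% to
    # model accuracy, penalized equally by dT and FR. The result lies on a
    # 0-1 scale, 1 meaning complete success.
    return 0.5 * UP + 0.5 * (1.0 - (dT + FR) / 2.0)

print(evl(time_deviation(450, 400), FR=0.1, UP=0.8))  # example values only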
2. PROBLEM STATEMENT
Project X has just been started and is on the verge of the planning phase. The project has been declared feasible and all requirements are well defined and negotiated. The Project Manager has decided to deliver the requirements in an incremental fashion and needs to estimate the length of each release cycle. He asks all the stakeholders separately to prioritize and assign a particular value to each requirement. Since all the stakeholders are not of the same importance and caliber, he himself assigns a relative importance to each one, including himself. As the planning phase starts he now has the requirements mapped to discrete use cases. He now needs to estimate the project release cycles using the limited available knowledge. This calls for a decision support system to assist in the required predictions.
3. PROPOSED SOLUTION
The solution is based on two assumptions: first, that choosing the correct requirements helps in estimating the cycle time; and second, that choosing the correct requirements is directly influenced by the performance of the model in the previous increment, the ratio of failed requirements to the total implemented, and the user perception of each requirement.
The project manager now has a deadline to meet for the current release, and he decides on a release cycle length. He needs a model to evaluate this decision as well as to predict the best-suited cycle time. A set of requirements is first determined. Weighted Use Case Points analysis is then performed to assign an estimated time to each well-defined requirement. He now needs to decide which requirements to choose for the current release cycle, and he uses the altered EVOLVE model to achieve this.
The Project Manager has the following inputs in hand:
• Feedback factor from the previous release (if any)
• Stakeholder priorities matrix (Prio) for all requirements
• Stakeholder business value matrix (Value) for all requirements
• Relative stakeholder priorities
• Use-case estimated time of each requirement
• Precedence dependencies between requirements
• Coupling dependencies between requirements
• A maximum deadline time (enforced by the customer or higher management)
He can now proceed with the Altered EVOLVE method.
A random set of chromosomes is generated from the candidate requirements using the subset-generation algorithm. Each chromosome generated is a subset of the power-set of the candidate requirement set (excluding the null set); hence for n requirements, the number of solutions generated is 2^n - 1. This is a very large possible-solution set and contains many invalid solutions. We apply three constraints to filter out the invalid solutions:
• Time constraint
• Precedence constraint
• Coupling constraint
Now, with only the valid solutions in the possible-solution set, the fitness function is calculated using the linear sum of benefit and penalty (6). Crossover and mutation are performed at a rate of 0.5 each, as suggested by Ruhe [1]. After a sufficient number of GA iterations, a set of close solutions is obtained and a particular solution is manually chosen.
The release cycle time is then calculated as the sum of the estimated times of the chosen requirements. The project then moves on to the next release cycle.
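A compact Python sketch of the selection step just described, under stated assumptions: the subset generation, constraint filtering, and crossover/mutation at rate 0.5 follow the text, while the fitness function shown is only a simplified stand-in for objective function (6), and every identifier is illustrative rather than taken from the paper.

import itertools
import random

def valid(subset, times, deadline, precedence, coupling):
    # Keep only chromosomes that respect the time, precedence and coupling
    # constraints described above.
    chosen = set(subset)
    if sum(times[r] for r in chosen) > deadline:
        return False
    if any(b in chosen and a not in chosen for a, b in precedence):
        return False                      # predecessor a missing for b
    if any((a in chosen) != (b in chosen) for a, b in coupling):
        return False                      # coupled requirements split up
    return True

def select_release(requirements, times, benefit, deadline,
                   precedence, coupling, generations=50, rate=0.5):
    # Subset generation: all 2^n - 1 non-empty subsets of the candidates,
    # filtered down to the valid chromosomes.
    population = [s for n in range(1, len(requirements) + 1)
                  for s in itertools.combinations(requirements, n)
                  if valid(s, times, deadline, precedence, coupling)]

    def fitness(s):
        return sum(benefit[r] for r in s)  # simplified stand-in for (6)

    for _ in range(generations):
        if len(population) < 2:
            break
        population.sort(key=fitness, reverse=True)
        better_half = population[:max(2, len(population) // 2)]
        p1, p2 = random.sample(better_half, 2)
        child = tuple(sorted(set(p1) | set(p2)))          # crossover
        if random.random() < rate and len(child) > 1:     # mutation
            child = tuple(r for r in child if r != random.choice(child))
        if valid(child, times, deadline, precedence, coupling):
            population.append(child)
    return max(population, key=fitness)

After enough generations, one of the surviving high-fitness subsets is picked manually and the release cycle time is taken as the sum of the estimated times of its requirements, exactly as described above.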
The Algorithmic steps of the proposed solution are briefly described as follows:
1. Determine a set of requirements. A requirement can be a new feature, a bug fix, or a requirement not selected in previous releases. Each requirement must be mappable to a unique use case.
2. Calculate the Estimated time for each requirement using the Extended UCP method as
explained
3. Calculate the feedback factor and multiply it with the selected requirements times.
4. Assign a time limit that must not be exceeded.
5. Input the stakeholders' data and their relative importance values. Use this matrix to calculate the eigenvalues.
6. Assign the stakeholder priorities and stakeholder values to each requirement for the
current iteration.
7. Multiply the feedback factor to selected (repeated) stakeholder priorities and importance
values as explained.
8. Determine the Coupling and Precedence constraints
9. Generate all possible Requirement sets using subset-generation algorithm.
10. Filter out the invalid chromosomes based on coupling, precedence and time constraints.
11. Assign a fitness value to each chromosome using objective function (6). Our aim is to
maximize this function.
12. Randomly select 2 chromosomes from the better half (having high fitness values) and perform crossover to generate new offspring.
13. Randomly select 2 chromosomes from the better half (having high fitness values) and perform mutation to generate new offspring.
14. Add Offspring to population.
15. Go back to step 10, if more iterations needed (Population not yet converged)
16. Choose a best solution from new high fitness population.
17. Calculate the release cycle time.
18. If more iterations, determine failed requirements and resulting bugs. Go to Step-1.
19. Exit
The following flow diagram sums up the steps described in the preceding algorithm. The flow diagram represents the iterative nature of the project as well as of the proposed solution. A stopping condition has not been specified, so as to represent an ideal incremental situation in which the project goes on; in practice, the model stops when all the requirements are consumed and no major bugs are detected. The detailed implementation algorithm is discussed in the Appendix.
Figure 1 Proposed Solution - Flow of Steps
4. MODEL ANALYSIS CASE STUDIES
Description of Sample Project 1
An online file storage service is to be implemented incrementally. The 7 Core Requirements to be
coded are as follows:
1. Login Management
2. Session Management
3. Upload Module
4. Download Module
5. File Search Module
6. Sharing Management
7. Account Renewal
All these requirements pertain to the major use cases of the problem and hence Use Case Points
analysis is applied.
All the values (factors) used below were carefully chosen on the basis of our own experience to
suit the sample project as well as the college working environment.
Technical Complexity Factor (TCF) is estimated as follows:
Table 1 TCF Estimation
Factor | Description | Weight | Perceived Complexity | Calculated Factor
T1 | Distributed System | 2 | 1 | 2
T2 | Performance | 1 | 2 | 2
T3 | End User Efficiency | 1 | 3 | 3
T4 | Complex Internal Processing | 1 | 2 | 2
T5 | Reusability | 1 | 2 | 2
T6 | Easy to Install | 0.5 | 2 | 1
T7 | Easy to Use | 0.5 | 3 | 1.5
T8 | Portable | 2 | 1 | 2
T9 | Easy to Change | 1 | 3 | 3
T10 | Concurrent | 1 | 3 | 3
T11 | Special security features | 1 | 4 | 4
T12 | Provides direct access for third parties | 1 | 3 | 3
T13 | Special user training facilities are required | 1 | 1 | 1
Total Factor = 29.5
TCF = 0.6 + (0.01 * Total Factor) = 0.895
Environmental Complexity Factor (ECF) is estimated as follows:
Table 2 ECF Estimation
Environmental Factor | Description | Weight | Perceived Impact | Calculated Factor
E1 | Familiarity with UML | 1.5 | 1 | 1.5
E2 | Application Experience | 0.5 | 1 | 0.5
E3 | Object Oriented Experience | 1 | 1 | 1
E4 | Lead analyst capability | 0.5 | 3 | 1.5
E5 | Motivation | 1 | 3 | 3
E6 | Stable Requirements | 2 | 3 | 6
E7 | Part-time workers | -1 | 0 | 0
E8 | Difficult Programming language | 2 | 1 | 2
Total Factor = 15.5
ECF = 1.4 + (-0.03*Total Factor) = 0.935
Unadjusted Use Case Points (UUCP) is a sum of Unadjusted Use Case Weight (UUCW) and
Unadjusted Actor Weight (UAW). UUCW is estimated as follows:
Table 3 UUCW Estimation
Use Case Type | Weight | Number of Use Cases | Result
Simple | 5 | 3 | 15
Average | 10 | 1 | 10
Complex | 15 | 3 | 45
Total UUCW = 70
UAW is estimated as follows:
Table 4 UAW Estimation
Actor Type | Weight | Number of Actors | Result
Simple | 1 | 2 | 2
Average | 2 | 2 | 4
Complex | 3 | 1 | 3
Total UAW = 9
UUCP = UUCW + UAW = 79
PF (Productivity Factor) = 20 (Industry Average)
UCP = TCF * ECF * UUCP * PF = 1325 hours
Therefore total estimated time T: 1325 hours
Maximum time limit per release (say): 400 hours. This value depends upon the project but is always enforced by an authorizing stakeholder. Since we were supposed to complete the first phase of the project in approximately 20 days with 5 stakeholders working around 4 hours per day, a value of 20 * 4 * 5 = 400 hours is taken as the limit time.
The next step involves estimating the approximate time for each requirement. Assuming 3 clusters of requirements with weights 1, 2, 3, the estimation is calculated as follows:
Table 5 Estimating Requirement Time
Cluster Type | Requirements | Number of Requirements | Weight | Time per Requirement (as predicted by Altered UCP)
Simple | 1, 6, 7 | 3 | 1 | 95
Moderate | 2 | 1 | 2 | 189
Complex | 3, 4, 5 | 3 | 3 | 283
Feedback Factor = 1. Since it is the first increment, no errors occurred in a previous increment (as it did not exist), so the feedback factor becomes 1.
Sample Stakeholder Assigned Values (based on each stakeholder's view of the importance of each requirement on a 0-5 scale)
Table 6 Stakeholder Assigned Values
R1 R2 R3 R4 R5 R6 R7
S1 4 4 5 5 5 1 2
S2 5 5 5 5 5 5 5
S3 2 2 5 5 2 3 1
S4 1 1 1 5 5 4 4
S5 2 1 3 5 4 1 3
Sample Stakeholder Assigned Priorities (based on each stakeholder's view of the priority of each requirement on a 1-7 scale)
Table 7 Stakeholder Assigned Priorities
R1 R2 R3 R4 R5 R6 R7
S1 1 2 3 4 5 6 7
S2 1 3 2 5 4 6 7
S3 1 3 4 5 6 2 7
S4 1 4 5 6 2 3 7
S5 1 4 5 6 2 3 7
Pairwise comparison of stakeholders by the Project Manager (in this case, the Team Leader)
Table 8 Stakeholder Comparison
S1 S2 S3 S4 S5
S1 1 2 3 4 1
S2 0.5 1 3 2 1
S3 0.33 0.33 1 2 4
S4 0.25 0.5 0.5 1 1
S5 1 1 0.25 1 1
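Step 5 of the algorithm above turns this pairwise comparison matrix into relative stakeholder weights via its eigenvalues; interpreting that as the usual principal-eigenvector (AHP-style) weighting is an assumption, and the short power-iteration sketch below, fed with the Table 8 values, is only illustrative.

def stakeholder_weights(pairwise, iterations=100):
    # Approximates the principal eigenvector of the pairwise comparison matrix
    # by power iteration; the normalized eigenvector gives the relative
    # stakeholder importance used to weight the Prio and Value matrices.
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

table8 = [
    [1,    2,    3,    4, 1],
    [0.5,  1,    3,    2, 1],
    [0.33, 0.33, 1,    2, 4],
    [0.25, 0.5,  0.5,  1, 1],
    [1,    1,    0.25, 1, 1],
]
print(stakeholder_weights(table8))  # relative importance of S1..S5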
Requirement Precedence Dependency: {(R1,R2), (R1,R3),(R1,R6),(R1,R7)}
Requirement Coupling Dependency: {(3, 4)}
Results
The implementation software uses a genetic algorithm approach to pin down dominating solution sets. In most of the runs the population converged to three highly fit solutions:
<R1>
<R1,R5>
<R1,R6>
We can now use our knowledge and judgment to handpick one of them. We chose the <R1,R6> solution and calculated the time by adding the individual estimated times of its requirements.
Estimated release time for the current release: 378 hours. This estimate agreed well with our previous estimate as well as with our actual project experience.
Description of Sample Project 2 (Industry Project)
Sahara Bank, Libya (BNP Paribas Group) [11] needed to replace their legacy banking software in a quick, incremental way. The project was outsourced to TCS (a software consultancy organization) [12] and the following modules were requested by the customer:
1. Login Management
2. Scope Management
3. Admin Part
3.1 Account Management
3.2 Customer Management
3.3 Employee Management
3.4 ATM Management
3.5 Branch or Bank Management
3.6 Region Management
4. Customer Part
4.1 ATM Banking
4.2 NET Banking
4.3 Core Banking
4.4 Phone Banking
The stipulated time for the project was three weeks and a team of 27 members worked on it. The project was delivered successfully in three quick increments within the stipulated time. The first increment was released for beta testing on the 9th day of the project, the second on the 15th, and the final increment on the 20th day. The feedback was highly positive for all three incremental releases.
As per the data provided by the Tech Lead of the project, the following major use cases were determined and later implemented.
1. Login Management
2. Scope Management
3. Customer and employee interface interaction for ATM banking, core banking, net banking, and phone banking
4. Create, view, view all, update, delete, deactivate and activate region, branch, ATM, customer, employee, account, etc.
5. Fund transfer from region to branch, and from branch to sub-branches and ATMs, in the morning and evening, accounting into the threshold balance
6. Interest calculation
7. Cheque-book request & processing
8. Fund transfer from one account to another, bill payment
9. Foreign currency exchange
10. Account, balance and transaction limits
11. Validations - both back end and front end
All the values (factors) given below reflect the nature of the requirements of Sahara Bank and were assigned by the Tech Lead on the basis of his perception of the project. (Note: no UCP analysis was carried out during the project; the following perceived complexity factors have been determined by the project team to facilitate the analysis of our research work.)
Technical Complexity Factor (TCF) is estimated as follows:
Table 9 TCF Estimation
Factor | Description | Weight | Perceived Complexity | Calculated Factor
T1 | Distributed System | 2 | 0 | 0
T2 | Performance | 1 | 4 | 4
T3 | End User Efficiency | 1 | 4 | 4
T4 | Complex Internal Processing | 1 | 1 | 2
T5 | Reusability | 1 | 2 | 2
T6 | Easy to Install | 0.5 | 1 | 0.5
T7 | Easy to Use | 0.5 | 4 | 2
T8 | Portable | 2 | 1 | 2
T9 | Easy to Change | 1 | 3 | 3
T10 | Concurrent | 1 | 3 | 3
T11 | Special security features | 1 | 5 | 5
T12 | Provides direct access for third parties | 1 | 5 | 5
T13 | Special user training facilities are required | 1 | 0 | 0
Total Factor = 32.5
TCF = 0.6 + (0.01 * Total Factor) = 0.925
Environmental Complexity Factor (ECF) is estimated as follows:
Table 10 ECF Estimation
Environmental Factor | Description | Weight | Perceived Impact | Calculated Factor
E1 | Familiarity with UML | 1.5 | 2 | 3
E2 | Application Experience | 0.5 | 2 | 1
E3 | Object Oriented Experience | 1 | 4 | 4
E4 | Lead analyst capability | 0.5 | 4 | 2
E5 | Motivation | 1 | 4 | 4
E6 | Stable Requirements | 2 | 4 | 8
E7 | Part-time workers | -1 | 0 | 0
E8 | Difficult Programming language | 2 | 0 | 0
Total Factor = 22
ECF = 1.4 + (-0.03*Total Factor) = 0.740
UUCW is estimated as follows:
Table 11 UUCW Estimation
Use Case Type | Weight | Number of Use Cases | Result
Simple | 5 | 4 | 20
Average | 10 | 4 | 40
Complex | 15 | 3 | 45
Total UUCW = 105
UAW is estimated as follows:
Table 12 UAW Estimation
Actor Type | Weight | Number of Actors | Result
Simple | 1 | 3 | 3
Average | 2 | 4 | 8
Complex | 3 | 1 | 3
Total UAW = 14
UUCP = UUCW + UAW = 119
PF (Productivity Factor) = 24 (Estimated TCS Average)
UCP = TCF * ECF * UUCP * PF = 1955 hours
Therefore total estimated time T: 1955 hours
Maximum time limit per release: 1300 hours. This value is representative of the time constraints enforced by Sahara Bank on the TCS team for the first review of the project.
The next step involves estimating the approximate time for each requirement. Assuming 3 clusters of requirements with weights 1, 2, 3, the estimation is calculated as follows:
Table 13 Estimating Requirement wise Time
Cluster Type | Requirements | Number of Requirements | Weight | Time per Requirement (as predicted by Altered UCP)
Simple | 1, 7, 9, 10 | 4 | 1 | 70
Moderate | 4, 5, 6, 8 | 4 | 2 | 140
Complex | 2, 3, 11 | 3 | 3 | 211
Feedback Factor = 1. Since it is the first increment, no errors occurred in a previous increment (as it did not exist), so the feedback factor becomes 1.
Eight stakeholders, including the domain expert from the customer side, were chosen such that they represent the entire project team of 27 members.
Sample Stakeholder Assigned Values (based on each stakeholder's view of the importance of each requirement on a 0-5 scale, representative of the various streams of thought among the stakeholders)
Table 14 Stakeholder Assigned Values
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
S1 4 3 5 4 4 2 2 3 2 2 4
S2 3 3 4 4 4 3 3 3 2 2 3
S3 3 2 5 5 2 3 3 3 3 3 2
S4 3 3 3 5 5 3 3 4 3 2 2
S5 3 2 5 5 2 3 3 3 3 3 2
S6 4 3 5 4 4 2 2 3 2 2 4
S7 3 3 4 4 4 3 3 3 2 2 3
S8 4 3 5 4 4 2 2 3 2 2 4
Sample Stakeholder Assigned Priorities (based on each stakeholder's view of the priority of each requirement on a 1-11 scale, representative of the various streams of thought among the stakeholders)
Table 15 Stakeholder Assigned Priorities
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
S1 1 11 2 4 9 8 7 6 10 5 3
S2 1 11 2 5 4 6 7 9 10 8 3
S3 1 11 2 3 9 8 7 6 10 5 4
S4 1 11 2 4 9 8 7 6 10 5 3
S5 1 11 2 5 4 6 7 9 10 8 3
S6 1 11 2 4 9 8 7 6 10 5 3
S7 1 11 2 5 4 6 7 9 10 8 3
S8 1 11 2 4 9 8 7 6 10 5 3
Pairwise comparison of stakeholders by the Project Manager (in this case, the Team Leader determined the values)
Table 16 Stakeholder Comparison
S1 S2 S3 S4 S5 S6 S7 S8
S1 1 1 3 3 4 4 3 3
S2 1 1 3 2 3 3 2 2
S3 0.33 0.33 1 2 2 2 2 2
S4 0.33 0.5 0.5 1 1 1 1 1
S5 0.25 0.33 0.5 1 1 1 2 2
S6 0.25 0.33 0.5 1 1 1 1 1
S7 0.33 0.5 0.5 1 0.5 1 1 1
S8 0.33 0.5 0.5 1 0.5 1 1 1
Requirement Precedence Dependency:
{(1, 3), (1, 11)}
Requirement Coupling Dependency:
{(3, 11)}
Results
We ran the project several times through our implementation and found that all requirements are consumed in 3 to 4 iterations, depending upon the feedback factor and the requirements chosen. From the second iteration onwards we considered a feedback of 0.8 to 0.9, which was representative of the highly positive feedback from the Sahara team in each review.
The following solutions converged in the first iteration:
<R1,R3,R4,R11>
<R1, R11,R3>
<R1,R3,R11>
Choosing one of these solutions determined the number of further iterations. The results were in accordance with the original TCS scenario, where 3 iterations were carried out and the following requirements were implemented:
Iteration-1: R1, R3, R11
Iteration-2: R4, R10, R8, R6, R7
Iteration-3: R2, R5, R9
We also found that positive feedback tends to reduce the number of iterations; a detailed analysis of this result is given in the next section.
The results we obtained in various runs were highly coherent with the actual TCS project experience. Fig. 2 shows a comparison of various runs of the proposed solution with the actual results. The runs assumed different values of the feedback factor ranging from 0.75 to 0.9 (depicting a highly positive feedback from the client), and slight variations were deliberately made in choosing the solution set to check robustness.
Figure 2 Comparison of Results
In the above comparison, the first bar of each iteration depicts the actual TCS results, followed by our results in various runs. It was interesting to see that no solution suggested a fifth iteration. Result-1 assumed a feedback factor of 0.9 and was the most coherent with the actual results. Result-2 and Result-3 were determined at a lower feedback and hence led to more iterations. Such coherence with the TCS project confirms the accuracy and robustness of the proposed solution.
5. MODEL COMPUTATIONAL ANALYSIS
The implementation was tested on a 2nd-generation Core i3 machine, and it was found that the proposed solution becomes more and more memory-hungry as the number of requirements increases beyond a saturation limit. Hence a parallel and distributed implementation of the solution is advised. Fig. 3 shows the trade-off between the number of requirements and time complexity. Fig. 3 was extrapolated and interpolated to cover a complete requirement range; we observed an exponential growth.
Figure 3 Requirements vs. Time
Coming to the feedback factor, very positive results were observed. As the number of iterations (increments) increases, the feedback factor decreases to a certain limit. This suggests that the project might be going in the right direction; however, as the number of increments increases beyond a certain limit (which signifies that more and more bugs and failures are being encountered), the feedback keeps decreasing towards zero, indicating a failed project. Fig. 4 was extrapolated and interpolated to cover a complete requirement range based on 12 observations on the sample projects.
Figure 4 Feedback Factor vs. Number of Increments
6. CONCLUSIONS
In both the projects discussed above, the estimated time closely matches the time taken to implement the real project. Hence the model seems to suit both small (college) projects as well as large industry projects. However, the proposed idea has considerable scope for improvement: the factors considered in Use Case Points can be studied further and necessary alterations can be made to suit particular project types. The solution currently uses a subset-generation algorithm, which demands very high computation as the number of project requirements and bugs increases. From the analysis we can see that it might be difficult to handle projects that need a complete re-engineering. We must consider the need for, and a solution to, implementing the approach on a parallel (or distributed) system to account for the computational problems.
APPENDIX
Algorithm
BEGIN
  FF := getFEEDBACK();                 // calculates the feedback factor
  UCP := CalcUCP();                    // calculates UCP for the whole project
  Assign();                            // extends UCP to per-requirement times
  Get-Matrix(Stakes);                  // gets relative stakeholder importance
  Get-Eigen(Stakes);                   // calculates the eigenvalues (stakeholder weights)
  Get-Matrix(Prio);                    // gets stakeholder priorities
  Get-Matrix(Value);                   // gets stakeholder values
  Feed-Matrix(Prio);                   // multiplies the feedback factor into Prio
  Feed-Matrix(Value);                  // multiplies the feedback factor into Value
  Get-Precedence(Prec);                // gets requirement precedence dependencies
  Get-Coupling(Coup);                  // gets requirement coupling dependencies
  SolutionList[][] := Get-Subsets();   // generates candidate subsets (chromosomes)
  Loop(n)
    For Each Element in SolutionList[][]
      If Check_Prec(Element, Prec) = FALSE OR Check_Coup(Element, Coup) = FALSE Then
        Delete Element;
      End If
    Next Element
    For Each Element in SolutionList[][]
      Calc_Fitness(Element)
    Next Element
    Sort_List(SolutionList[][])
    For Each E1, E2 in SolutionList[][]
      E1 := Select_Element(SolutionList);
      E2 := Select_Element(SolutionList);
      New_Solution_String := Crossover(E1, E2)
      New_Solution_String := Mutation(New_Solution_String)
      Add New_Solution_String to SolutionList
    Next E1, E2
    If (Converged Solutions < X) Then EXIT
  End Loop
  Choose Element;
  Determine_Time(Element)
END
REFERENCES
[1] Greer D., Ruhe G., "Software release planning: an evolutionary and iterative approach", Information and Software Technology, Volume 46, Issue 1, pp. 243-253, Elsevier, 2004.
[2] Khomh F., Dhaliwal T., Zou Y., Adams B., "Do Faster Releases Improve Software Quality? An Empirical Case Study of Mozilla Firefox", 9th IEEE Working Conference on Mining Software Repositories (MSR), Zurich, pp. 179-188, 2012.
[3] Karner G., "Use Case Points - Resource Estimation for Objectory Projects", Objective Systems SF AB (copyright owned by Rational Software), 1993.
[4] Brettschneider R., "Is Your Software Ready for Release?", IEEE Software, Volume 6, Issue 4, 1989.
[5] Jelinski Z., Moranda P.B., the Jelinski-Moranda model, in Statistical Computer Performance Evaluation, Academic Press, New York, pp. 465-484, 1972.
[6] Okumoto K., Goel A.L., "Optimum release time for software systems based on reliability and cost criteria", Journal of Systems and Software, Elsevier, Volume 1, pp. 315-318, 1984.
[7] Yun W.Y., Bai D.S., "Optimum Software Release Policy with Random Life Cycle", IEEE, June 1990.
[8] Hoek A., Hall R.S., Heimbigner D., Wolf A.L., "Software Release Management", ACM SIGSOFT Software Engineering Notes, Volume 22, Issue 6, pp. 159-175, 1997.
[9] Ruhe G., Saliu M.O., "The Art and Science of Software Release Planning", IEEE Software, Volume 22, Issue 6, pp. 47-53, 2005.
[10] Wang Q., Lai X., "Requirements Management for the Incremental Development Model", Proceedings of the Second Asia-Pacific Conference on Quality Software, Hong Kong, pp. 295-301, 2001.
[11] Sahara Bank, Libya (BNP Paribas Group), www.saharabank.com.ly, www.bnpparibas.com
[12] Tata Consultancy Services, www.tcs.com