The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima, and it is used here to fit the model to the data set. SA is a stochastic algorithm that can outperform the Genetic Algorithm (GA); its performance depends on the specification of the neighbourhood structure of the state space and on the parameter settings of its cooling schedule.
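As an illustration of the approach, here is a minimal sketch (not the paper's implementation) of simulated annealing fitting the Goel-Okumoto parameters a and b by minimizing the squared error between observed cumulative failures and the mean value function m(t) = a(1 - e^(-bt)); the failure data, neighbourhood moves and cooling schedule are illustrative assumptions.

```python
import math
import random

# Goel-Okumoto mean value function: m(t) = a * (1 - exp(-b * t))
def mean_value(t, a, b):
    return a * (1.0 - math.exp(-b * t))

# Sum of squared errors between observed cumulative failures and the model.
def sse(params, data):
    a, b = params
    return sum((m_obs - mean_value(t, a, b)) ** 2 for t, m_obs in data)

def simulated_annealing(data, init=(100.0, 0.1), t0=1.0, cooling=0.95, steps=5000):
    current = best = init
    temp = t0
    for _ in range(steps):
        # Neighbour: small random perturbation of (a, b), kept positive.
        a, b = current
        cand = (max(1e-6, a + random.gauss(0, 1.0)),
                max(1e-6, b + random.gauss(0, 0.01)))
        delta = sse(cand, data) - sse(current, data)
        # Accept better moves always; worse moves with Boltzmann probability,
        # which is what lets SA escape local optima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if sse(current, data) < sse(best, data):
                best = current
        temp *= cooling   # geometric cooling schedule
    return best

# Illustrative (hypothetical) failure data: (test time, cumulative failures).
data = [(1, 5), (2, 9), (3, 12), (4, 14), (5, 15)]
a_hat, b_hat = simulated_annealing(data)
print("estimated a=%.2f, b=%.3f" % (a_hat, b_hat))
```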
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION (IJCSEA Journal)
Test case prioritization techniques basically schedule the execution of test cases in a definite order so as to attain an objective function with greater efficiency. This scheduling of test cases improves the results of regression testing. Test case prioritization techniques order the test cases so that the most important ones are executed first, encountering the faults first and thus making the testing effective. In this paper an approach is presented which calculates the product of statement coverage and function calls. The results illustrate the effectiveness of the formula, evaluated with the help of the APFD metric.
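For reference, the APFD metric mentioned above can be computed with the standard formula APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n); the sketch below uses a made-up fault matrix.

```python
def apfd(order, fault_matrix):
    """order: list of test case ids in execution order.
    fault_matrix: dict mapping test id -> set of faults it detects."""
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    # TF_i: position (1-based) of the first test that exposes fault i.
    first_positions = []
    for fault in faults:
        for pos, tc in enumerate(order, start=1):
            if fault in fault_matrix[tc]:
                first_positions.append(pos)
                break
    return 1.0 - sum(first_positions) / (n * m) + 1.0 / (2 * n)

# Hypothetical example: 4 test cases, 3 faults.
fault_matrix = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1", "f3"}}
print(apfd(["t2", "t4", "t1", "t3"], fault_matrix))  # a prioritized order
print(apfd(["t3", "t1", "t2", "t4"], fault_matrix))  # a worse order
```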
Model based test case prioritization using neural network classification (cseij)
Model-based testing for real-life software systems often requires a large number of tests, all of which cannot
be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases in
accordance with the importance the tester perceives. In this paper, this problem is addressed by improving
our previous study: a classification approach is applied to its results, and a functional relationship between
test case prioritization group membership and two attributes, the importance index and the frequency, is
established for all events belonging to a given group. For classification purposes, a neural network (NN),
one of the most advanced classifiers, is preferred, and a data set obtained from our study for all test cases is
classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test
prioritization application show high classification accuracies of about 96%, and acceptable test
prioritization performance is achieved.
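A minimal sketch, assuming synthetic data and scikit-learn, of the kind of MLP classification described above: test cases characterized by an importance index and a frequency are assigned to priority groups.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the paper's data set:
# each row is (importance_index, frequency); the label is a priority group 0/1/2.
X = [[0.9, 30], [0.8, 25], [0.7, 20], [0.4, 12], [0.5, 10],
     [0.3, 8], [0.2, 5], [0.1, 3], [0.15, 4], [0.6, 15]]
y = [2, 2, 2, 1, 1, 1, 0, 0, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Multilayer perceptron classifier assigning test cases to priority groups.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```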
An Adaptive Hybrid Technique approach of Test Case Prioritization (INFOGAIN PUBLICATION)
Test-case prioritization is the method of scheduling the execution order of tests with the purpose of maximizing some objective, such as revealing faults early. In this paper we propose a hybrid approach to test case prioritization involving a Robust Genetic Algorithm to improve parameters such as APSC and execution time. This technique involves a robust approach, parent generation, crossover and mutation over each test case, and then calculates APSC and execution time.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Reliability is concerned with decreasing faults and their impact. The earlier the faults are detected the better. That's why this presentation talks about automated techniques using machine learning to detect faults as early as possible.
Fault localization is time-consuming and difficult, which makes it the bottleneck of the debugging process. To help facilitate this task, there exist many fault localization techniques that help narrow down the region of the suspicious code in a program. Better accuracy in fault localization comes at a heavy computation cost, and fault localization techniques that can effectively locate faults also manifest a slow response rate. In this paper, we promote the use of pre-computing to distribute the time-intensive computations to the idle periods of the coding phase, in order to speed up such techniques and achieve both low cost and high accuracy. We raise the research problems of finding suitable techniques that can be pre-computed and of adapting them to the pre-computing paradigm in a continuous integration environment. Further, we use an existing fault localization technique to demonstrate our research exploration, and show the visions and challenges of the related methodologies.
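The abstract does not name the pre-computed technique; as one concrete example of a spectrum-based fault localization formula that such a pipeline could cache, the sketch below computes Tarantula-style suspiciousness scores from made-up coverage data.

```python
def tarantula(passed_cov, failed_cov, total_passed, total_failed):
    """passed_cov/failed_cov: dict statement -> number of passing/failing
    tests that execute it. Returns statement -> suspiciousness in [0, 1]."""
    scores = {}
    for stmt in set(passed_cov) | set(failed_cov):
        fail_ratio = failed_cov.get(stmt, 0) / total_failed if total_failed else 0.0
        pass_ratio = passed_cov.get(stmt, 0) / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        scores[stmt] = fail_ratio / denom if denom else 0.0
    return scores

# Hypothetical coverage of 3 statements by 4 passing and 2 failing tests.
passed = {"s1": 4, "s2": 1, "s3": 3}
failed = {"s2": 2, "s3": 1}
print(sorted(tarantula(passed, failed, 4, 2).items(), key=lambda kv: -kv[1]))
```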
Optimal Selection of Software Reliability Growth Model: A Study (IJEEE)
People use software, and sometimes software fails, so they try to quantify software reliability and to understand how and why it fails. For this purpose, many software reliability models have been developed to estimate the defects in the software before delivering it to the customer. Many software reliability models have been developed so far, but the main issue, which remains largely unsolved, is how to calculate software reliability efficiently. We cannot use one model in every circumstance, because no single model can completely represent all features. This paper describes the circumstances and criteria under which a particular model can be selected.
ESTIMATING HANDLING TIME OF SOFTWARE DEFECTS (csandit)
The problem of accurately predicting handling time for software defects is of great practical
importance. However, it is difficult to suggest a practical generic algorithm for such estimates,
due in part to the limited information available when opening a defect and the lack of a uniform
standard for defect structure. We suggest an algorithm to address these challenges that is
implementable over different defect management tools. Our algorithm uses machine learning
regression techniques to predict the handling time of defects based on past behaviour of similar
defects. The algorithm relies only on a minimal set of assumptions about the structure of the
input data. We show how an implementation of this algorithm predicts defect handling time with
promising accuracy results.
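A minimal sketch of the kind of regression described above, using scikit-learn; the defect features and the random forest regressor are illustrative assumptions, not the authors' algorithm.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical defect records: (severity, component id, summary length in words)
# and the observed handling time in hours.
X = [[3, 0, 12], [1, 2, 40], [2, 1, 25], [3, 0, 8], [1, 1, 55],
     [2, 2, 30], [3, 1, 10], [1, 0, 60], [2, 0, 22], [1, 2, 45]]
y = [5, 40, 16, 4, 52, 24, 8, 70, 14, 44]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Regression model predicting handling time from past behaviour of similar defects.
model = RandomForestRegressor(n_estimators=100, random_state=1)
model.fit(X_train, y_train)
print("MAE (hours):", mean_absolute_error(y_test, model.predict(X_test)))
```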
Software testing is an important activity of the software development process and is the most effort-consuming phase of software development. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation contributes to reducing cost and time. Hence test case generation may be treated as an optimization problem. In this paper we use a genetic algorithm to optimize the test cases that are generated by applying conditional coverage on source code. Test case data generated automatically using the genetic algorithm is optimized and outperforms the test cases generated by random testing.
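A toy sketch in the spirit of the abstract: a genetic algorithm evolves small test suites whose fitness is the number of branch combinations covered in a made-up function under test; the encoding and GA parameters are illustrative.

```python
import random

# A small function under test with two conditions (four branch combinations).
def under_test(x, y):
    branch = "x>10" if x > 10 else "x<=10"
    branch += ",even" if y % 2 == 0 else ",odd"
    return branch

ALL_BRANCHES = {"x>10,even", "x>10,odd", "x<=10,even", "x<=10,odd"}

def fitness(suite):
    # Fitness = number of distinct branch combinations the suite covers.
    return len({under_test(x, y) for x, y in suite})

def evolve(pop_size=20, suite_len=4, generations=50):
    pop = [[(random.randint(0, 20), random.randint(0, 20)) for _ in range(suite_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                       # elitism: keep the best suites
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)  # parents from the fitter half
            cut = random.randint(1, suite_len - 1)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < 0.3:            # mutation: replace one test input
                child[random.randrange(suite_len)] = (random.randint(0, 20),
                                                      random.randint(0, 20))
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

suite, covered = evolve()
print("best suite:", suite, "covers", covered, "of", len(ALL_BRANCHES), "branch combos")
```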
Benchmark methods to analyze embedded processors and systems (XMOS)
xCORE multicore microcontrollers are 100x more responsive than traditional micros. The unparalleled responsiveness of the xCORE I/O ports is rooted in some fundamental features:
- Single cycle instruction execution
- No interrupts
- No cache
- Multiple cores allow concurrent independent task execution
- Hardware scheduler performs 'RTOS-like' functions
SOFTWARE TESTING: ISSUES AND CHALLENGES OF ARTIFICIAL INTELLIGENCE & MACHINE ... (ijaia)
The history of Artificial Intelligence and Machine Learning dates back to the 1950s. In recent years, there has
been an increase in popularity for applications that implement AI and ML technology. As with traditional
development, software testing is a critical component of an efficient AI/ML application. However, the
approach to development methodology used in AI/ML varies significantly from traditional development.
Owing to these variations, numerous software testing challenges occur. This paper aims to recognize and
to explain some of the biggest challenges that software testers face in dealing with AI/ML applications. For
future research, this study has key implications. Each of the challenges outlined in this paper is ideal for
further investigation and has great potential to shed light on the way to more productive software testing
strategies and methodologies that can be applied to AI/ML applications.
Design and Implementation of Proportional Integral Observer based Linear Mode... (IDES Editor)
This paper presents an interior-point method (IPM)
based quadratic programming (QP) solver for the solution of
optimal control problem in linear model predictive control
(MPC). LU factorization is used to solve the system of linear
equations efficiently at each iteration of IPM, which renders
faster execution of QP solver. The controller requires internal
states of the system. To address this issue, a Proportional
Integral Observer (PIO) is designed, which estimates the state
vector, as well as the uncertainties in an integrated manner.
MPC uses the states estimated by PIO, and the effect of
uncertainty is compensated by augmenting MPC with PIO-estimated
uncertainties and external disturbances. The
approach is demonstrated practically by applying MPC to a QET
DC servomotor for a position control application. The proposed
method is compared with a classical control strategy, PID
control.
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS (IJCSEA Journal)
Software testing is a process continuously performed by the development team during the life cycle of the software with the motive of detecting faults as early as possible. Regression testing is the most suitable technique for this, in which we run a number of test cases. As the number of test cases can be very large, it is always preferable to prioritize test cases based upon certain criteria. In this paper a prioritization strategy is proposed which prioritizes test cases based on requirements analysis. Under regression testing, if the requirements vary in the future, the software will be modified in such a manner that it does not affect the remaining parts of the software. The proposed system improves the testing process and its efficiency to achieve goals regarding quality, cost, and effort as well as user satisfaction, and the result of the proposed method is evaluated with the help of a performance evaluation metric.
A Defect Prediction Model for Software Product based on ANFIS (IJSRD)
Artificial intelligence techniques are increasingly being used in classification and prediction based processes such as environmental monitoring, stock exchange conditions, biomedical diagnosis, software engineering, etc. However, the challenge of selecting training criteria for the design of artificial intelligence models used for prediction of results is yet to be simplified. This work focuses on developing a defect prediction mechanism using software metric data of KC1. We have taken a subtractive clustering approach for generation of a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the developed rules are further refined by the ANFIS technique to obtain a prediction of the number of defects in a software project using a fuzzy logic system.
Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information, and the courage to commit to quantitative predictions when qualitative information is all that exists. Halstead's Measure, the COCOMO Model, and the COCOMO II Model are estimation techniques used for software development and maintenance.
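A small worked example of the basic COCOMO effort and schedule equations (Effort = a * KLOC^b, Time = c * Effort^d) using the classic organic-mode coefficients; the 32 KLOC project size is hypothetical.

```python
# Basic COCOMO: Effort = a * KLOC^b (person-months), Time = c * Effort^d (months).
# Coefficients below are the classic organic-mode constants.
A, B = 2.4, 1.05
C, D = 2.5, 0.38

kloc = 32.0                      # hypothetical project size in thousands of LOC
effort = A * kloc ** B           # ~2.4 * 32^1.05 ≈ 91 person-months
duration = C * effort ** D       # ~2.5 * 91^0.38 ≈ 14 months
people = effort / duration       # average staffing

print(f"effort = {effort:.1f} PM, duration = {duration:.1f} months, staff = {people:.1f}")
```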
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system mainly depends on how much testing takes place, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by software developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but then the testing cost will also increase. On the contrary, if the testing time is too short, the software cost could be reduced, provided the customers take the risk of buying unreliable software. However, this will increase the cost during the operational phase, since it is more expensive to fix an error during the operational phase than during the testing phase. Therefore it is essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end-user, by developing a software cost model with a risk factor. Based on the proposed method we specifically address the issue of how to decide when to stop testing and release software based on a three-tier client server architecture, which facilitates software developers in ensuring on-time delivery of a software product meeting the criteria of achieving a predefined level of reliability and minimizing the cost. A numerical example has been cited to illustrate the experimental results, showing significant improvements over the conventional statistical models based on NHPP.
The International Journal of Engineering and Science (IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Comparative Analysis of Slicing for Structured Programs (Editor IJCATR)
Program Slicing is a method for automatically decomposing programs by analyzing their data flow and control flow.
Slicing reduces the program to a minimal form, called a "slice", which still produces the specified behavior. A program slice singles out all
statements that may have affected the value of a given variable at a specific program point. Slicing is useful in program
debugging, program maintenance and other applications that involve understanding program behavior. In this paper we
discuss static and dynamic slicing and compare them using a number of examples.
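A tiny illustration of a static backward slice: for the slicing criterion (return value, total), only the statements that can affect total are retained; the program is a made-up example.

```python
# Original program: computes both a sum and a product of 1..n.
def original(n):
    total = 0
    product = 1
    for i in range(1, n + 1):
        total = total + i        # affects total
        product = product * i    # does NOT affect total
    return total, product

# Static backward slice for the criterion <return, total>: only statements
# that may influence the value of total are retained.
def sliced(n):
    total = 0
    for i in range(1, n + 1):
        total = total + i
    return total

assert original(5)[0] == sliced(5) == 15
```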
Software test-case generation is the process of identifying a set of test cases. It is necessary to generate a test sequence that satisfies the testing criteria, and a lot of research work has been done in the past on this difficult problem. The length of the test sequence plays an important role in software testing, since it decides whether sufficient testing has been carried out or not. Many existing test sequence generation techniques use a genetic algorithm for test-case generation in software testing. The Genetic Algorithm (GA) is an optimization heuristic technique that is implemented through evolution and a fitness function; it generates new test cases from the existing test sequence. To further improve the existing techniques, a new technique is proposed in this paper which combines the tabu search algorithm and the genetic algorithm. The hybrid technique combines the strengths of the two meta-heuristic methods and produces an efficient test-case sequence.
How is Mortgage Lending Influencing the Economic Growth in Albania (Editor IJCATR)
Banks in Albania have been playing an important role in providing credit to households, especially to the
housing sector, and thereby contributing to aggregate demand in the sector. Moreover, Albanian banks also extended
various types of loans to individuals and corporates before the financial crisis. After the crisis the banks became
more restricted due to the increase of non-performing loans as well as the macroeconomic volatility, which is higher in
emerging economies like Albania. In my study I focus on the influence of mortgage loans and non-performing loans
on the country's economic growth during the interval 2008-2013, which corresponds with the post-crisis period.
Steganography using Interpolation and LSB with Cryptography on Video Images-A... (Editor IJCATR)
Steganography is a common term in the IT industry which specifically means "covered writing" and is derived
from the Greek language. Steganography is defined as the art and science of invisible communication, i.e. it hides the existence of the
communication between the sender and the receiver. In distinction to cryptography, where the opponent is permitted to detect,
intercept and alter messages without being able to breach the security guarantees of the cryptosystem, the prime
objective of steganography is to conceal messages inside other risk-free messages in a manner that does not allow any adversary to even
sense that a second message is present. Nowadays, it is an emerging area which is used for secure data transmission over any
public medium such as the internet. In this research a novel approach to image steganography based on LSB (Least Significant Bit)
insertion and a cryptography method for lossless JPEG images has been proposed. This paper comprises an application which
ranks images in a user's library on the basis of their appropriateness as cover objects for some data. Here, the data is matched to an
image, so there is less possibility of an invader being able to employ steganalysis to recover the data. Furthermore, the application
first encrypts the data by means of cryptography, and the message bits that are to be hidden are embedded into the image using the Least
Significant Bit insertion technique. Moreover, interpolation is used to increase the density.
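A minimal sketch of the LSB-insertion step described above, hiding message bits in the least significant bit of successive pixel values; a real implementation would operate on image files and add the encryption and interpolation steps, which are omitted here.

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the message")
    stego = list(pixels)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & ~1) | bit   # overwrite only the LSB
    return stego

def extract_lsb(pixels, length):
    """Recover `length` bytes from the LSBs of the pixel values."""
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
    return data.decode()

# Hypothetical grayscale cover "image" of 64 pixel intensities.
cover = list(range(100, 164))
stego = embed_lsb(cover, "hide me")
print(extract_lsb(stego, len("hide me")))   # -> "hide me"
```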
Different Approach of VIDEO Compression Technique: A Study (Editor IJCATR)
The main objective of video compression is to achieve video compression with less possible losses to reduce the
transmission bandwidth and storage memory. This paper discusses different approach of video compression for better transmission of
video frames for multimedia application. Video compression methods such as frame difference approach, PCA based method,
accordion function, fuzzy concept, EZW and FSBM are analyzed in this paper. These methods are compared for performance,
speed and accuracy, and for which method produces better visual quality.
Website security is a critical issue that needs to be considered on the web in order to run your online business healthily and
smoothly. It is a very difficult situation when the security of a website is compromised by a brute force or other kind of attack on
your web creation. It not only consumes all your resources but creates heavy log dumps on the server, which cause your website to stop
working.
Recent studies have suggested backup and recovery modules that should be installed into your website, which can take timely
backups of your website to third-party servers that are not under the scope of the attacker. The studies also suggested different types of
recovery methods such as incremental backups, decremental backups, differential backups and remote backups.
Moreover, these studies also suggested that rsync be used to reduce the transferred data efficiently. The experimental results show
that the remote backup and recovery system can work fast and can meet the requirements of website protection. An automatic backup
and recovery system for a website not only plays an important role in the web defence system but also is the last line of disaster
recovery.
This paper suggests different kinds of approaches that can be incorporated in the WordPress CMS to make it healthy, secure and
prepared for web attacks. The paper suggests various possibilities of attacks that can be made on the CMS and some of the possible
solutions as well as preventive mechanisms.
Some of the proposed security measures –
1. Secret login screen
2. Blocking bad bots
3. Changing db. prefixes
4. Protecting configuration files
5. 2 factor security
6. Flight mode in Web Servers
7. Protecting htaccess file itself
8. Detecting vulnerabilities
9. Unauthorized access made to the system checker
However, this is to be done by balancing the trade-off between website security and the backup and recovery modules of a website, as measures
taken to secure a web page should not affect the user's experience or the recovery modules.
Multimedia video transmission over Wireless Local Area Networks is expected to be an important component of many
emerging multimedia applications. However, wireless networks will always be bandwidth limited compared to fixed networks due to
background noise, limited frequency spectrum, and varying degrees of network coverage and signal strength. One of the critical issues
for multimedia applications is to ensure that the Quality of Service (QoS) requirement is maintained at an acceptable level. Modern
mobile devices are equipped with multiple network interfaces, including 3G/LTE and WiFi. Bandwidth aggregation over LTE and WiFi
links offers an attractive opportunity for supporting bandwidth-intensive services, such as high-quality video streaming, on mobile
devices. Achieving effective bandwidth aggregation in wireless environments raises several challenges related to deployment, link
heterogeneity, network congestion, network fluctuation, and energy consumption. In this work, an overview of schemes for video
transmission over wireless networks is presented in which an acceptable quality of service (QoS) for video applications requiring
real-time video transmission is achieved.
Classification and Searching in Java API Reference Documentation (Editor IJCATR)
An Application Program Interface (API) allows programmers to use predefined functions instead of writing them from scratch.
Descriptions of API elements, that is, methods, classes, constructors, etc., are provided through the API Reference Documentation. Hence
the API Reference Documentation acts as a guide for the user or developer of an API. Different Knowledge Types are generated by
processing this API Reference Documentation, and these Knowledge Types are then used for classification and searching of Java API
Reference Documentation.
A Review on a web based Punjabi to English Machine Transliteration System (Editor IJCATR)
The paper presents the transliteration of noun phrases from Punjabi to English using a statistical machine translation approach. Transliteration maps the letters of a source script to the letters of another language. Forward transliteration converts an original word or phrase in the source language into a word in the target language. Backward transliteration is the reverse process that converts the transliterated word or phrase back into its original word or phrase. Transliteration is an important part of research in NLP. Natural Language Processing (NLP) is the ability of a computer program to understand human speech as it is spoken, and it is an important component of AI. Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a human-like fashion. The transliteration system is to be developed using SMT. Statistical Machine Translation (SMT) is a data-oriented statistical framework for translating text from one natural language to another based on this knowledge.
A Literature Review on Synthesis and Characterization of enamelled copper wi... (Editor IJCATR)
This paper presents a survey of various magazines, conference papers and journals for understanding the
properties of enamelled copper wires mixed with nano fillers, and fundamental methods for the synthesis and characterization of carbon
nanotubes. From all these papers, it was noted that the research work carried out on enamelled copper wires filled with nano fillers
has shown better results. It was also recorded that the research was carried out mostly with single-metal catalysts, and very little
research has been carried out on the synthesis of carbon nanotubes using bimetallic catalysts.
Dielectric and Thermal Characterization of Insulating Organic Varnish Used in... (Editor IJCATR)
In recent years, a lot of attention has been drawn towards polymer nanocomposites for use in electrical applications due
to the encouraging results obtained for their dielectric properties. Polymer nanocomposites are commonly defined as a combination of
a polymer matrix and additives that have at least one dimension in the nanometer scale. Carbon nanotubes are of special
interest as a possible organic component in such a composite coating. The carbon atoms are arranged in a hexagonal network and
then rolled up to form a seamless cylinder which measures several nanometers across but can be thousands of nanometers long. There
are many different types, but the two main categories are single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs),
which are made from multiple layers of graphite. Carbon nanotubes are an example of a nanostructure varying in size from 1-100
nanometers (the scale of atoms and molecules). Nanocomposites are one of the fastest growing fields in nanotechnology. An extensive
literature survey has been done on nanocomposites and the synthesis and preparation of nano fillers. The following objectives were set
based on the literature survey and understanding of the technology:
- Complete study of organic varnish and CNT
- Chemical properties
- Electrical properties
- Thermal properties
- Mechanical properties
- Synthesis and characterization of carbon nanotubes
- Preparation of polymer nanocomposites
- Study of characteristics of the nanocomposite insulation
Dimensioning an insulation system requires exact knowledge of the type, magnitude and duration of the electric stress while
simultaneously considering the ambient conditions. But, on the other hand, properties of the insulating materials in question must also
be known, so that in addition to the proper material, the optimum, e.g. the most economical design of the insulation system must be
chosen.
Spam filtering by using Genetic based Feature Selection (Editor IJCATR)
Spam is defined as redundant and unwanted electronic mail, and nowadays it has created many problems in business life, such as
occupying network bandwidth and the space of users' mailboxes. Due to these problems, much research has been carried out in this
regard using classification techniques. Recent research shows that feature selection can have a positive effect on the efficiency of
machine learning algorithms. Most algorithms try to present a data model depending on the detection of a certain small set of features.
Unrelated features in the process of building the model result in weak estimation and more computation. In this research we try
to evaluate spam detection among legitimate electronic mail, and its effect on several machine learning algorithms, by presenting a
feature selection method based on a genetic algorithm. Bayesian network and KNN classifiers have been taken into account in the
classification phase, and the Spambase dataset is used.
Designing of Semantic Nearest Neighbor Search: Survey (Editor IJCATR)
Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects’
geometric properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial
predicate, and a predicate on their associated texts. For example, instead of considering all the restaurants, a nearest neighbor query
would instead ask for the restaurant that is the closest among those whose menus contain "steak, spaghetti, brandy" all at the same
time. Currently the best solution to such queries is based on the IR2-tree, which, as shown in this paper, has a few deficiencies that
seriously impact its efficiency. Motivated by this, we develop a new access method called the spatial inverted index that extends the
conventional inverted index to cope with multidimensional data, and comes with algorithms that can answer nearest neighbor queries
with keywords in real time. As verified by experiments, the proposed techniques outperform the IR2-tree in query response time
significantly, often by a factor of orders of magnitude.
Design of STT-RAM cell in 45nm hybrid CMOS/MTJ process (Editor IJCATR)
This paper evaluates the performance of Spin-Torque Transfer Random Access Memory (STT-RAM) basic memory cell
configurations in 45nm hybrid CMOS/MTJ process. Switching speed and current drawn by the cells have been calculated and
compared. Cell design has been done using cadence tools. The results obtained show good agreement with theoretical results.
Software Testing and Quality Assurance Assignment 2 (Gurpreet singh)
Short questions :
Q1: What is stress testing?
Q2: What is Cyclomatic complexity?
Q3: Define Object Oriented Testing
Q4: What is regression testing? When is it done?
Q5: How is loop testing different from path testing?
Q6: What is client server environment?
Q7: What is graph based testing?
Q8: How is security testing useful in real applications?
Q9: What are main characteristics of real time system?
Q10: What are the benefits of data flow testing?
Long Questions:
Q1: Design test case for: ERP, Traffic controller and university management system?
Q2: Assuming a real-time system of your choice, discuss its concepts, analysis and design factors; elaborate.
Q3: How is testing in a multiplatform environment performed?
Q4: Explain graph based testing in detail
Q5: Differentiate between Equivalence partitioning and boundary value analysis
Configuration Navigation Analysis Model for Regression Test Case Prioritization (ijsrd.com)
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers with 12 issues per year. It is an online as well as print-version open access journal that provides rapid publication (monthly) of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published through a rapid process within 20 days after acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks (IJECEIAES)
More than 50% of software development effort is spent in the testing phase of a typical software development project. Test case design as well as execution consume a lot of time; hence, automated generation of test cases is highly desirable. Here a novel testing methodology is presented to test object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test "oracle" is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented using a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing the redundancy in the test cases generated using the genetic algorithm. A neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
Real-time implementation of a software system requires it to be more versatile. In the maintenance phase, the modified system under regression testing must be assured to remain defect free. Test case prioritization techniques for regression testing include code based as well as model based methods of prioritizing test cases. System model based test case prioritization can detect severe faults earlier than code based test case prioritization. Model based prioritization techniques based on requirements in a cost-effective manner have not been studied so far. Model based testing is used to test the functionality of a software system based on requirements. An effective model based approach is defined for prioritizing test cases and generating an effective test sequence. The test cases are rescheduled based on requirements analysis and user view analysis. With the use of a weighted approach, the overall cost of testing the functionality of the model elements is estimated. Here, a genetic approach has been applied to generate an efficient test path. The regression cost in terms of effort is reduced under the model based prioritization approach.
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... (Editor IJCATR)
Software reliability is considered a quantifiable metric, defined as the probability of software operating
without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed
to predict the reliability of software. These models help vendors predict the behaviour of the software before shipment. The
reliability is predicted by estimating the parameters of the software reliability growth models. But the model parameters generally
stand in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques like Maximum
Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced which have made the task of
parameter estimation more reliable and computationally easier. Parameter estimation of NHPP based reliability models, using MLE
and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in the paper.
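For reference, the maximum likelihood equations for the Goel-Okumoto model m(t) = a(1 - e^(-bt)), assuming n exact failure times t_1 <= ... <= t_n observed up to T = t_n, show why the parameters sit in a nonlinear relationship that stochastic search methods such as PSO or simulated annealing are used to solve.

```latex
% Log-likelihood of the GO model for failure times t_1,\dots,t_n observed up to T = t_n
\ln L(a,b) = n\ln a + n\ln b - b\sum_{i=1}^{n} t_i - a\left(1 - e^{-bT}\right)

% Setting the partial derivatives to zero gives the MLE equations
\frac{\partial \ln L}{\partial a} = 0 \;\Rightarrow\; \hat{a} = \frac{n}{1 - e^{-\hat{b}T}},
\qquad
\frac{\partial \ln L}{\partial b} = 0 \;\Rightarrow\; \frac{n}{\hat{b}} = \sum_{i=1}^{n} t_i + \hat{a}\,T e^{-\hat{b}T}

% The second equation is nonlinear in b and must be solved numerically.
```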
Implementation of reducing features to improve code change based bug predicti... (eSAT Journals)
Abstract: Today, we are getting plenty of bugs in software because of variations in software and hardware technologies. Bugs are nothing but software faults, and they pose a severe challenge to system reliability and dependability. Bug prediction is a convenient approach to identify bugs in software. To predict the presence of a bug in a source code file, machine learning classifier approaches have recently been developed. Because of the huge number of machine-learned features, current classifier-based bug prediction has two major problems: i) inadequate precision for practical usage, and ii) slow prediction time. In this paper we use two techniques: first, the cos-triage algorithm, which tries to enhance the accuracy and also lower the cost of bug prediction, and second, feature selection methods, which eliminate less significant features. Reducing features improves the quality of the knowledge extracted and also boosts the speed of computation. Keywords: Efficiency, Bug Prediction, Classification, Feature Selection, Accuracy
Abstract: Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. By now, most research work in combinatorial testing aims to propose novel approaches trying to generate test suites with minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since this problem has been demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms. One algorithm is based on the one-test-at-a-time strategy and the other is based on an IPO-like strategy. In these two algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description of how to formulate the search space, define the fitness function and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. The final empirical results show the effectiveness and efficiency of our approach.
Similar to Parameter Estimation of Software Reliability Growth Models Using Simulated Annealing Method (20)
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not perform relevance ranking on the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. The Okapi BM25 model was selected because it is a probability-based relevance ranking algorithm. A case study research was conducted and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval. Relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean model and the vector space model. Therefore, this paper proposes the use of the Okapi BM25 model to reward terms according to their relative frequencies in a document so as to improve the performance of text mining in digital libraries.
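A compact sketch of the Okapi BM25 scoring formula behind the relevance ranking described above, with the usual k1 and b defaults; the toy corpus is illustrative.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of `doc` (a list of terms) for the given query terms."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        # Document frequency and the BM25 idf with +1 smoothing inside the log.
        df = sum(1 for d in corpus if term in d)
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

# Toy "digital library" of tokenized documents.
corpus = [["text", "mining", "digital", "library"],
          ["library", "catalogue", "search"],
          ["image", "processing", "survey"]]
query = ["digital", "library"]
ranked = sorted(corpus, key=lambda d: bm25_score(query, d, corpus), reverse=True)
print(ranked[0])   # the most relevant document for the query
```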
Green Computing, eco trends, climate change, e-waste and eco-friendly (Editor IJCATR)
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life, and technological obsolescence. Individuals are concerned about the hazardous materials present in computers, even if the importance they attach to various attributes differs, and a more environment-friendly attitude can be fostered through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for the country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to gain an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection between moving vehicles, and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links. The cluster arrangement, according to size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns, and because of this mobility the topology changes very frequently. This raises a number of technical challenges, including the stability of the network, so there is a need for cluster configurations that lead to a more stable, realistic network. This paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the most stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be performed to detect DM, and the results of these tests are converted into training data. The training data used in this study were generated from the UCI Pima database with 6 attributes used to classify diabetes as positive or negative. There are various commonly used classification methods, and in this study three of them were compared on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective of this study was to build software to classify DM using the tested methods and to compare the three methods based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy of 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutualistic, symbiotic relationship. In today’s information age, websites serve as a main data source. This research focuses on how to get data from websites and how to slow down the intensity of downloads. One problem is that source websites are autonomous, so the structure of their content is liable to change at any time; another is that the Snort intrusion detection system installed on the server may detect the crawler bot. The researchers therefore propose using the Mining Data Records (MDR) method together with exponential smoothing, so that the scraper is adaptive to changes in content structure and fetches automatically following the pattern of news occurrences. In the tests, with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, the recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimate using α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the fetch down to 3.6 data records from the 21.8 data records obtained with a fixed download/fetch schedule, at the average time of news occurrence.
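As a rough illustration of the smoothing step described above, the sketch below applies simple exponential smoothing with α = 0.5 to a series of per-fetch record counts. The sample counts are invented for the example, and the update rule is the textbook formula rather than the authors' exact implementation.

```python
def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical counts of new records scraped per fetch; the last smoothed value
# would serve as the estimate used to schedule the next download.
counts = [25, 18, 30, 22, 19]
print(exponential_smoothing(counts))  # [25, 21.5, 25.75, 23.875, 21.4375]
```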
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using a single ontology. Ontology-based semantic similarity techniques, including structure-based techniques (the Path Length measure, Wu and Palmer’s measure, and Leacock and Chodorow’s measure), information content-based techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SemDist)), were evaluated against human experts’ ratings and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery. Most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS), and many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 “V1.0” as the primary source, and compare our results to human experts’ scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in Openstack Swift. Because Swift incurs excessive disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account; this would reduce borrowing costs, extend credit, and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five different agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that provides proper information about a vehicle, from the exact date of importation to registration and licence renewal. Vehicle owner information, customs duty information, plate number registration details, etc. can also be efficiently retrieved from the system by any of the agencies without contacting another agency at any point in time. In addition, the number plate will no longer be the only means of vehicle identification, as is presently the case in Nigeria, because the unified system will automatically generate and assign a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of goods and services. It covers both sales and purchases of items. The Supermarket Management System project is developed with the objective of making the system reliable, easier to use, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in a Wireless Sensor Network (WSN) [1]; the system cannot perform its function without adequate power. One of the characteristics of wireless sensor networks is limited energy [2], and much research has been done to develop strategies to overcome this problem. One of them is clustering, and the most popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3]. In LEACH, clustering is used to determine the Cluster Head (CH), which is then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique that uses the social network analysis measure of Betweenness Centrality (BC), implemented in the setup phase; in the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is applied. The experiment deploys 100 static nodes in a 100x100 area with one Base Station at coordinates (50,50), and to test the reliability of the system the experiment runs for 5000 rounds. The performance of the designed routing protocol is evaluated on network lifetime, throughput, and residual energy. The results show that BC-MBDA* performs better than LEACH. This is influenced by the way LEACH determines the CH dynamically, changing it in every data transmission round, which consumes energy because the CH computation is repeated for every transmission. In contrast, in BC-MBDA* the CH is determined statically, which decreases energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data, and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of its benefits over traditional network architectures, its security concerns and how they can be addressed in future research, and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling in the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on the system administrator, who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]; when the data flow is huge and grows rapidly, at least three days are needed to prepare a confirmation and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of an incoming query to archived documents. The authors employ a class-based indexing term weighting scheme and cosine similarity to analyse document similarity. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values are used as features for a K-Nearest Neighbour (K-NN) classifier. The best result is obtained with a 75% training and 25% test data split and the CoSimTFIDF feature, delivering a high accuracy of 84%; with k = 5, the accuracy reaches 84.12%.
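A minimal sketch of the cosine-similarity step on TF-IDF vectors is shown below. It does not reproduce the class-based weighting variants (CoSimTFICF, CoSimTFIDFICF) from the study; the example complaints and the use of scikit-learn are assumptions made for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

incoming = ["jalan rusak di depan kantor kelurahan"]       # hypothetical incoming complaint
archive = ["laporan jalan rusak dekat kelurahan",          # hypothetical archived complaints
           "permintaan perbaikan lampu jalan"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(archive + incoming)      # shared vocabulary for both sets
doc_vecs = matrix[:len(archive)]
query_vec = matrix[len(archive):]

# Cosine similarity between the incoming complaint and each archived document;
# scores like these would then be fed to a k-NN classifier as features.
print(cosine_similarity(query_vec, doc_vecs))
```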
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul images is more difficult than that of Latin script because of the structural arrangement: Hangul characters are arranged in two dimensions, while Latin characters run only from left to right. This research builds a system to convert Hangul images into Latin text for use as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through a connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second is extracting features from every single image through a chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. The data consist of 34 letters, each with 15 different patterns; the 510 patterns in total are divided into 3 data scenarios. The highest result achieved is 94.7% using SVM with polynomial and radial basis function kernels; the recognition result is influenced by the amount of training data. The Hangul word recognition applies to type 2 Hangul words with 6 different patterns, which differ by font type. The fonts chosen for training are Batang, Dotum, Gaeul, Gulim, and Malgun Gothic, and Arial Unicode MS is used for testing. The lowest accuracy, 69%, is obtained with the SVM radial basis function kernel, while the linear and polynomial kernels both give 72%.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In the underwater environment, a routing mechanism is used to retrieve information. The routing mechanism uses three to four types of nodes: sink nodes, which are deployed on the water surface and collect the information; courier/super/AUV (or dolphin) powerful nodes, which are deployed in the middle of the water column to forward packets; ordinary nodes, which are also forwarders and can be deployed from the bottom to the surface of the water; and source nodes, which are deployed on the seabed and extract valuable information from the bottom of the sea. In the underwater environment the battery power of the nodes is limited, and it can be conserved through better selection of the routing algorithm. This paper surveys energy-efficient routing algorithms and their routing mechanisms for prolonging the battery power of the nodes. It also analyses the performance of these energy-efficient algorithms to determine which route selection mechanism best prolongs node battery power.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The design of routing algorithms faces many challenges in the underwater environment, such as propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, 3D localization and deployment, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have used alternate-path selection to remove voids, but the research still needs improvement. This paper also examines the architectures and operation of existing algorithms through their merits and demerits, and further presents an analytical performance comparison of existing algorithms to identify the better approach for the removal of voids.
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in $\mathbb{R}^n$, which is of the regularity-loss type. Using spectral resolution, we study the pointwise estimates in the spectral space of the fundamental solution to the corresponding linear problem. Appealing to these pointwise estimates, we obtain the global existence and decay estimates of solutions to the semilinear problem by employing the fixed point theorem.
Parameter Estimation of Software Reliability Growth Models Using Simulated Annealing Method
International Journal of Computer Applications Technology and Research, Volume 3, Issue 6, 377-380, 2014 (www.ijcat.com)
Chander Diwaker, UIET, Kurukshetra, Haryana, India
Sumit Goyat, UIET, Kurukshetra, Haryana, India
Abstract: The parameters of the Goel-Okumoto model are estimated using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimisation technique that provides a way to escape local optima. The data set is optimized using the simulated annealing technique. SA is a stochastic algorithm with better performance than the genetic algorithm (GA); its behaviour depends on the specification of the neighbourhood structure of a state space and the parameter settings of its cooling schedule.
Keywords: SA, NHPP, SRGM, MVF, FIF.
1. INTRODUCTION
Due to increasing dependence on and demand for software systems, it is necessary to develop and maintain their reliability. The system-reliability problem is the series-parallel redundancy allocation problem, in which either system reliability is maximized or total system testing cost/effort is minimized. Users' interest in software reliability models has grown since the software component became an important issue in many engineering projects [1]. Reliability is measured over execution time so that it more accurately reflects system usage. Reliability is not time dependent: failures occur when a logic path that contains an error is executed, and reliability growth is observed as errors are detected and corrected. In this paper we present an approach to estimate the parameters of the Goel-Okumoto model using simulated annealing. The proposed approach gives results similar to the traditional estimation approach based on maximum likelihood without using information from previous projects, except that the proposed approach is much easier to use and no numerical technique is required.
2. SOFTWARE RELIABILITY GROWTH MODELS
A software reliability growth model (SRGM) is a fundamental technique that quantitatively assesses the reliability of a software system. An SRGM requires good predictive performance. Any software system required to operate with good reliability must undergo intensive testing and debugging. These processes can be costly and time consuming, and managers still need accurate information about how software reliability grows. SRGMs can estimate the number of initial faults, the software reliability, the failure intensity, the mean time between failures, and so on. These models help to measure and track the growth of software reliability as the software is improved [7]. In the SRGM literature, only stochastic processes such as NHPP, S-shaped, exponential, etc. are considered, which model the overall behaviour of the number of failures as a function of time.
3. GOEL-OKUMOTO MODEL
This model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. The Goel-Okumoto (G-O) model curve is concave. The model provides an analytical framework for describing software failure behaviour during testing.
• It is based on the following observations:
- The number of failures per unit testing time decreases exponentially.
- The cumulative number of failures versus test time follows an exponentially growing curve.
• In this model it is assumed that a software package is subject to failures at random times, caused by faults present in the system:
$$P\{N(t) = y\} = \frac{(m(t))^{y}\, e^{-m(t)}}{y!}, \qquad y = 0, 1, 2, \ldots$$

where

$$m(t) = a\,(1 - e^{-bt}), \qquad \lambda(t) = m'(t) = a\,b\,e^{-bt}$$

Here m(t) is the expected number of failures observed by time t and λ(t) is the failure rate.
Data requirements:
- the failure count in each testing interval i, i.e. f_i
- the completion time of each period during which the software is under observation, i.e. t_i
The mean value function (MVF) is

$$\mu(t) = a\,(1 - e^{-bt})$$

and the failure intensity function (FIF) is

$$\lambda(t) = a\,b\,e^{-bt}$$

where a is the expected total number of faults in the software before testing and b is the failure detection rate.
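To make the model concrete, the sketch below evaluates the mean value function, the failure intensity, and the Poisson probability of the G-O model. The parameter values a = 100 and b = 0.05 and the evaluation times are illustrative assumptions, not estimates taken from the paper.

```python
import math

def mvf(t, a, b):
    """Mean value function m(t) = a(1 - e^(-bt)): expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Failure intensity lambda(t) = a*b*e^(-bt)."""
    return a * b * math.exp(-b * t)

def prob_n_failures(y, t, a, b):
    """P{N(t) = y}: Poisson probability of observing exactly y failures by time t."""
    m = mvf(t, a, b)
    return m ** y * math.exp(-m) / math.factorial(y)

# Illustrative parameters: a = 100 expected total faults, detection rate b = 0.05 per unit time.
a, b = 100.0, 0.05
for t in (10, 20, 40):
    print(t, round(mvf(t, a, b), 2), round(failure_intensity(t, a, b), 3))
print(prob_n_failures(40, 10, a, b))  # probability of exactly 40 failures by t = 10
```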
4. SIMULATED ANNEALING
Simulated annealing (SA) is a heuristic optimization method that has been applied to many difficult problems in various fields such as scheduling, facility layout, and graph colouring/partitioning. The SA algorithm is inspired by the process of annealing in metal work. SA takes its name from the use of temperature as a quantity that is varied according to a cooling schedule, which acts as a tunable algorithm parameter. Annealing involves heating and cooling a material to change its physical properties by changing its internal structure. The travelling salesman problem is the classic example for this algorithm. SA is a stochastic algorithm with better performance than the genetic algorithm (GA); its behaviour depends on the specification of the neighbourhood structure of a state space and the parameter settings of its cooling schedule. The key algorithmic feature of simulated annealing is that it provides a way to escape local optima by permitting hill-climbing moves, i.e. moves that worsen the objective function value. The cost function returns the output f associated with a set of variables. If the output decreases, the new variable set replaces the previous variable set. If the output increases, the new variable set is still accepted when
$$R \le e^{[f(p_{\mathrm{old}}) - f(p_{\mathrm{new}})]/T}$$

or, writing the candidate in terms of the previous variable set,

$$R \le e^{[f(p_{\mathrm{old}}) - f(d\,p_{\mathrm{old}})]/T}$$
where R is a uniform random number and T is a variable analogous to temperature; otherwise the new variable set is rejected. The new variable set is found by taking a random step from the previous variable set, $p_{\mathrm{new}} = d\,p_{\mathrm{old}}$. The variable d is either uniformly or normally distributed about $p_{\mathrm{old}}$. This control variable sets the step size so that, at the beginning of the process, the algorithm is forced to make large changes in the variable values. At each stage the values of T and d decrease by a certain percentage and the algorithm repeats; it stops once the temperature has decreased to its ending value. The decrease in T is known as the cooling schedule. If the initial temperature is $T_0$ and the ending temperature is $T_N$, then the temperature at step n is given by $T_n = f(T_0, T_N, N, n)$, where f decreases with time.
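The following sketch shows how the acceptance rule and cooling schedule above could be used to fit the G-O parameters a and b by minimizing the squared error between observed cumulative failure counts and m(t). The failure data, step sizes, initial solution, and geometric cooling factor are assumptions made for illustration and do not reproduce the paper's settings.

```python
import math
import random

# Hypothetical cumulative failure counts observed at times t = 1..10.
times = list(range(1, 11))
observed = [8, 15, 21, 26, 30, 33, 36, 38, 40, 41]

def cost(params):
    """Sum of squared errors between observed counts and the G-O mean value function."""
    a, b = params
    return sum((y - a * (1.0 - math.exp(-b * t))) ** 2 for t, y in zip(times, observed))

def anneal(t0=100.0, t_end=1e-3, cooling=0.95, steps_per_temp=50):
    current = [50.0, 0.1]                       # initial solution (a, b)
    best, best_cost = list(current), cost(current)
    T = t0
    while T > t_end:
        for _ in range(steps_per_temp):
            # Random step from the current solution; step size shrinks with temperature.
            cand = [current[0] + random.uniform(-5, 5) * T / t0,
                    max(1e-4, current[1] + random.uniform(-0.05, 0.05) * T / t0)]
            delta = cost(cand) - cost(current)
            if delta < 0 or random.random() <= math.exp(-delta / T):
                current = cand                  # accept improving or hill-climbing move
                if cost(current) < best_cost:
                    best, best_cost = list(current), cost(current)
        T *= cooling                            # geometric cooling schedule
    return best, best_cost

print(anneal())  # best (a, b) found and its squared error
```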
5. LITERATURE SURVEY
Karambir et al. [1] noted that it is harder to measure and improve the reliability of web applications because such large systems have a highly distributed nature. Hardware faults can be predicted more easily than software faults. In their paper the authors used the Goel-Okumoto SRGM to observe the number of faults over a fixed time and estimate the reliability of web applications. The rate of change was calculated by executing the test cases against actual defects per day. The Goel-Okumoto model used an exponential distribution to predict the number of faults in web applications. The work does not predict the reliability of web applications in general, which is a limitation of that proposal. Praveen Ranjan Srivastava et al. [2] argued that software testing is a crucial part of the software development life cycle. The number of faults found and fixed during the testing phase can improve the quality of a software product by increasing the likelihood of its success in the market. Deciding how much time to allocate to the testing phase is an important quality-assurance activity. Extending or reducing the testing time depends on the errors uncovered in the software components, which deeply affects overall project success. Since testing incurs significant project cost, over-testing results in higher expenditure, whereas inadequate testing leaves major bugs undiscovered and risks the project quality. Prioritizing the components to be tested is therefore essential to achieve optimal testing performance within the assigned test time. Their paper presented a Test Point Analysis based module-priority approach to determine the optimal time to stop testing and release the software. Razeef Mohd et al. [3] surveyed the many analytical models proposed during the past three decades for assessing software reliability. In their paper the authors summarize some existing SRGMs, give a critical assessment of the underlying assumptions, and assess the applicability of these models throughout the software development cycle using an example. Latha Shanmugam et al. [4] observed that many software reliability growth models have been suggested for estimating the reliability of software. The suggested functions are non-linear in nature, so it is difficult to estimate the correct parameters. Their paper discusses an estimation technique based on the ant colony algorithm in which the parameters are estimated. With existing methods, solutions cannot be obtained for some data sets, whereas the proposed technique always yields at least one solution. The accuracy of the results using the proposed technique is at least ten times better than that of the PSO algorithm for the majority of the models. The work does not support methods for dividing the solution space and setting the initial parameter values. D. Haritha et al. [5] suggested that software reliability growth models play a very important role in monitoring progress and accurately predicting the number of faults in the software during both development and testing. The high complexity of software is the major contributory factor of the software reliability problem. The number of faults is observed in a real software development environment. In their paper the authors explore the use of the particle swarm optimization algorithm to estimate the parameters of software reliability growth models. The work does not develop a polynomial structure model to provide a proper result for a better model in the software reliability prediction process.
Pradeep Kumar et al. [6] proposed an NHPP-based software reliability growth model for three-tier client-server systems. The presented model is composed of the three layers of client-server architecture: presentation logic, business logic, and data stored at the back end. The presentation layer contains forms or server pages that present the user interface for the application, display the data, collect user inputs, and send requests to the next layer. The business layer provides the support services to receive requests for data from the user tier, evaluates them against business rules, passes them to the data tier, and incorporates the business rules for the application. The data layer includes data access logic, drivers, and query engines used to communicate directly with the data store of a database. The limitation of this work is that the algorithm has no mechanism for deciding when to stop the testing process and release the product to the end user with higher quality, within budget and without delay. Shih-Wei Lin et al. [7] noted that the support vector machine (SVM) is a pattern classification technique that is valuable in many applications. Kernel parameter setting in the SVM training process, together with feature selection, significantly affects classification accuracy. The objective of their study was to obtain better parameter values while also finding a subset of features that does not degrade the SVM classification accuracy. The study developed a simulated annealing (SA) approach for parameter determination and feature selection in the SVM, termed SA-SVM. To evaluate the proposed SA-SVM approach, several datasets from the UCI machine learning repository were adopted to calculate the classification accuracy rate. The proposed approach was compared with grid search, a conventional method of parameter setting, and various other methods. The drawback of this work is that it has not been applied to real-world problems. Chin-Yu Huang et al. [8] noted that software reliability growth models (SRGM) have been developed to greatly facilitate engineers and managers in tracking and measuring the growth of reliability as software is being improved. However, some research indicates that the delayed S-shaped model may not fit the software failure data well when the testing effort spent on fault detection is not constant. The authors therefore first review the logistic testing-effort function that can be used to describe the amount of testing effort spent on software testing, and describe how to incorporate it into both exponential-type and S-shaped software reliability models. Results from applying the proposed models to two real data sets are discussed and compared with other traditional SRGMs, showing that the proposed models give better predictions and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type and S-shaped software reliability models. Gaurav Aggarwal et al. [9] categorized software reliability models into two types: static models and dynamic models. Dynamic models observe the temporal behaviour of the debugging process during the testing phase, while in static models the modelling and analysis of program logic are performed on the same code. The paper reviews various existing software reliability models together with their failure intensity functions and mean value functions, and on the basis of this review a model is proposed for software reliability with a different mean value function and failure intensity function. Lilly Florence et al. [10] described how the ability to predict the number of faults during the development phase, together with a proper testing process, helps in specifying the timely release of software and the efficient management of project resources. The study enhanced and compared ant colony optimization methods for software reliability models and calculated the estimation accuracy. The enhanced method showed significant advantages in finding the goodness of fit for software reliability models such as finite and infinite failure Poisson models and binomial models.
6. PROPOSED WORK
In earlier research, the PSO (particle swarm optimization) and ACO (ant colony optimization) algorithms have been explored to estimate SRGM parameters. These algorithms were used to handle the modelling issues for the power model, the delayed S-shaped model, and the exponential model. The proposed model estimates the parameters using the simulated annealing (SA) algorithm. Simulated annealing is a common local-search meta-heuristic used to address discrete and, to a lesser extent, continuous optimization problems. The simulated annealing algorithm is employed to handle these models and shows potential benefits in solving the problem. An initial solution ω is assumed. The temperature counter is updated, and t_k denotes the temperature cooling schedule. Assume an initial temperature T; a number of iterations are performed at each t_k. Choose a repetition schedule M_k that represents the number of repetitions at each temperature level. This simulated annealing formulation results in M_0 + M_1 + ... + M_K total iterations being executed, where K corresponds to the step at which the stopping criterion is met. Additionally, if M_k = 1 for all k, then the temperature changes at every iteration.
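As a sketch of the iteration structure just described (initial solution ω, cooling schedule t_k, repetition schedule M_k), the skeleton below runs M_k inner repetitions at each temperature level and counts the total M_0 + M_1 + ... + M_K iterations. The linear cooling function and the particular M_k values are illustrative assumptions, not the paper's settings.

```python
def sa_schedule(t0=100.0, t_end=1.0, levels=10):
    """Skeleton of the formulation: at temperature t_k run M_k repetitions,
    so the total work is M_0 + M_1 + ... + M_K candidate evaluations."""
    total = 0
    for k in range(levels + 1):
        t_k = t0 - k * (t0 - t_end) / levels   # illustrative linear cooling schedule
        m_k = 10 + 5 * k                       # illustrative repetition schedule M_k
        for _ in range(m_k):
            total += 1                         # one candidate move would be generated and tested here
        # the stopping criterion would be checked here once t_k reaches t_end
    return total

print(sa_schedule())  # with M_k = 1 for all k, the temperature would change every iteration
```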
7. CONCLUSION AND FUTURE SCOPE
The optimized results of the proposed model using simulated annealing are far better than those obtained by optimizing the software reliability models with other techniques such as PSO, ACO, or neural networks. Using PSO it is hard to balance development time and budget against software reliability; PSO produces good results, but the number of iterations increases, which indirectly decreases efficiency. SA will increase the efficiency and reliability of the software, and the failure rate will be reduced. In future work this technique could be applied to various real-life domains such as biometrics, VLSI design, and data mining.
8. REFERENCES
1. Karambir, Jyoti Tamak, "Use of Software Reliability Growth Model to Estimate the Reliability of Web Applications", International Journal of Advanced Research in Computer Science and Software Engineering, ISSN: 2277-128X, Vol. 3, Issue 6, pp. 53-59, June 2013.
2. Praveen Ranjan Srivastava, Subrahmanyan Sankaran, Pushkar Pandey, "Optimal Software Release Policy Approach Using Test Point Analysis and Module Prioritization", MIS Review, Vol. 18, No. 2, pp. 19-50, March 2013.
3. Razeef Mohd, Mohsin Nazir, "Software Reliability Growth Models: Overview and Applications", Journal of Emerging Trends in Computing and Information Sciences, Vol. 3, No. 9, pp. 1309-1320, September 2012.
4. Latha Shanmugam, Lilly Florence, "A Comparison of Parameter Best Estimation Method for Software Reliability Models", International Journal of Software Engineering & Applications (IJSEA), Vol. 3, No. 5, pp. 91-102, September 2012.
5. D. Haritha, T. Vinisha, K. Sachin, "Detection of Reliable Software Using Particle Swarm Optimization", GJCAT, Vol. 1(4), pp. 489-494, 2011.
6. Pradeep Kumar, Yogesh Singh, "A Software Reliability Growth Model for Three-Tier Client Server System", International Journal of Computer Applications, pp. 9-16, 2010.
7. Shih-Wei Lin, Zne-Jung Lee, Shih-Chieh Chen, Tsung-Yuan Tseng, "Parameter Determination of Support Vector Machine and Feature Selection Using Simulated Annealing Approach", Applied Soft Computing 8, pp. 1505-1512, 2008.
8. Chin-Yu Huang, Sy-Yen Kuo, "An Assessment of Testing-Effort Dependent Software Reliability Growth Models", IEEE Transactions on Reliability, ISSN: 0018-9529, Vol. 56, No. 2, pp. 198-211, June 2007.
9. Gaurav Aggarwal, V. K. Gupta, "Software Reliability Growth Model", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 4, Issue 1, pp. 475-479, January 2014.
10. Lilly Florence, Latha Shanmugam, "Enhancement and Comparison of Ant Colony Optimization for Software Reliability Models", Journal of Computer Science 9(9), ISSN: 1549-3636, pp. 1232-1240, 2013.