Software effort estimation is an important process in the system development life cycle, as inaccurate estimates can affect the success of software projects. Over the past few decades, various effort prediction models have been proposed by academics and practitioners. Traditional estimation techniques include Lines of Code (LOC), Function Point Analysis (FPA), and Mark II Function Points (Mark II FP), which have proven unsatisfactory for predicting the effort of all types of software. In this study, the author proposes a regression model to predict the effort required to design small and medium scale application software. To develop the model, the author used 60 completed software projects developed by a software company in Macau, extracted factors from these projects, and applied them to a regression model. The resulting software effort prediction model achieves an accuracy of MMRE = 8%.
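The accuracy figure above is stated as MMRE, the Mean Magnitude of Relative Error, i.e. the average of |actual − estimated| / actual over all projects. As a point of reference, the sketch below is a minimal illustration of that computation; the sample values are hypothetical and it is not the author's code.

```python
# Illustrative MMRE computation (hypothetical values, not the study's data).
def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: mean of |actual - estimated| / actual."""
    assert len(actual) == len(estimated) and len(actual) > 0
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

if __name__ == "__main__":
    actual_effort = [120.0, 80.0, 200.0]      # person-hours, hypothetical
    estimated_effort = [110.0, 85.0, 190.0]   # model outputs, hypothetical
    print(f"MMRE = {mmre(actual_effort, estimated_effort):.2%}")
```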
Abstract: The management of software cost, development effort and project planning are key aspects of software development. Throughout the sixty-odd years of software development, the industry has gone through at least four generations of programming languages and three major development paradigms, yet the ability to move consistently from idea to product has still not been achieved. In fact, recent studies document that the failure rate for software development has risen to almost 50 percent. There is no magic in managing software development successfully, but a number of issues make it unique. The basic problem is that software development is risky. Examples of risk include estimation errors, schedule slips, projects cancelled after numerous slips, high defect rates, systems that go sour, business misunderstandings, false feature richness, and staff turnover. XSoft Estimation addresses these risks through accurate measurement. This paper presents a new estimation methodology based on the COSMIC Full Function Point method, named EXtreme Software Estimation (XSoft Estimation). Based on experience gained on the original XSoft project development, the paper describes what makes XSoft Estimation work, from sizing to estimation. Keywords: COSMIC function size unit, XSoft Estimation, XSoft Measurement, Cost Estimation.
SOFTWARE REQUIREMENT CHANGE EFFORT ESTIMATION MODEL PROTOTYPE TOOL FOR SOFTWA... (ijseajournal)
In the software development phase, software artifacts are not in consistent states: some class artifacts are fully developed, some are half developed, some are major developed, some are minor developed, and some are not developed yet. At this stage, allowing too many software requirement changes may delay project delivery and increase the development budget, while rejecting too many changes may increase customer dissatisfaction. Software change effort estimation is one of the most challenging and important activities that help software project managers accept or reject changes during the software development phase. This paper extends our previous work on developing a software requirement change effort estimation model prototype tool for the software development phase. The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows improvement in estimation accuracy over current change effort estimation models.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort required for software that is yet to be developed. A relationship between source code and the difficulty of developing it is also examined in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic, integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about the intricacies of software development. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, provide early warning, and compare well with other measures proposed in the past.
A NEW HYBRID FOR SOFTWARE COST ESTIMATION USING PARTICLE SWARM OPTIMIZATION A... (ieijjournal)
Software Cost Estimation (SCE) is considered one of the most important areas of software engineering, with a well-deserved influence on the management of cost and effort. These two factors, cost and effort, determine the success or failure of software projects: a project completed within its planned time and manpower is a successful one and yields good profit for project managers. Most SCE techniques use algorithmic models such as COCOMO. The COCOMO model cannot produce close approximations to the actual cost because it is linear in form, so models are needed that can estimate effort factors fairly and accurately alongside the number of Lines of Code (LOC). Metaheuristic algorithms are a good fit for SCE because of their local and global search abilities. In this paper, we use a hybrid of Particle Swarm Optimization (PSO) and Differential Evolution (DE) for SCE. Test results on the NASA60 software dataset show that the Mean Magnitude of Relative Error (MMRE) of the hybrid model is reduced by about 9.55% compared with the COCOMO model.
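For context, the basic COCOMO relation that such metaheuristics typically recalibrate is effort = a · KLOC^b (in person-months). The sketch below shows a naive grid recalibration of a and b against observed projects; it is only a simplified stand-in for the PSO/DE hybrid described above, and the project data points are hypothetical.

```python
# Naive recalibration of the basic COCOMO coefficients a, b in effort = a * KLOC**b.
# A simplified stand-in for a metaheuristic search (e.g. PSO/DE); data is hypothetical.
projects = [(10.0, 28.0), (25.0, 80.0), (50.0, 180.0)]  # (KLOC, actual person-months)

def mmre(a, b):
    return sum(abs(act - a * kloc ** b) / act for kloc, act in projects) / len(projects)

best = min(
    ((a / 10, b / 100) for a in range(10, 60) for b in range(90, 130)),
    key=lambda p: mmre(*p),
)
print(f"best a={best[0]:.2f}, b={best[1]:.2f}, MMRE={mmre(*best):.2%}")
```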
A Novel Effort Estimation Model For Software Requirement Changes During Softw... (ijseajournal)
Software requirement changes are a typical phenomenon in any software development project. Restricting incoming changes might cause user dissatisfaction, while allowing too many changes might delay project delivery. Moreover, accepting or rejecting change requests becomes challenging for software project managers when these changes occur during the software development phase, where software artifacts are not in a consistent state: some class artifacts are fully developed, some are half developed, some are major developed, some are minor developed, and some are not developed yet. Software effort estimation and change impact analysis are the two most common techniques that might help software project managers accept or reject change requests during this phase. The aim of this research is to develop a new software change effort estimation model that helps software project managers estimate the effort of software requirement changes during the software development phase. This research therefore analyzed existing effort estimation models and change impact analysis techniques for the software development phase from the literature and proposed a new software change effort estimation model by combining a change impact analysis technique with an effort estimation model. The proposed model was then evaluated using an experimental approach with four small software projects as case selections. The experimental results show that the overall Mean Magnitude of Relative Error produced by the new model is under 25%; hence the model is applicable for estimating the effort of requirement changes during the software development phase.
A new model for software cost estimation using Harmony Search (ijfcstjournal)
Accurate and realistic estimation has always been a great challenge in the software industry. Software Cost Estimation (SCE) is a standard activity used to manage software projects, and the estimate produced in the initial stages of a project drives the planning of its other activities. In practice, estimation is confronted with a number of uncertainties and barriers, and assessing previous projects is essential to address this problem. Several models have been developed for the analysis of software projects; the classical reference method is the COCOMO model, but other methods such as Function Points (FP) and Lines of Code (LOC) are also applied, and expert opinion also matters in this regard. In recent years, the growth and combination of highly accurate meta-heuristic algorithms have brought great achievements in software engineering. Meta-heuristic algorithms, which can analyze data from multiple dimensions and identify the optimal solution among them, are analytical tools for data analysis. In this paper, we use the Harmony Search (HS) algorithm for SCE. The proposed model is assessed on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the HS algorithm is a good way to determine the weights of the similarity measure factors of software effort and to reduce the MRE error.
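Harmony Search keeps a memory of candidate solutions and improvises new ones by mixing memory values with random pitch adjustments. The sketch below is a generic, minimal HS loop tuning two COCOMO-style coefficients against a toy objective; it is an illustration of the algorithm family under assumed parameters and hypothetical data, not the paper's implementation.

```python
# Minimal Harmony Search loop (generic illustration; parameters and data are hypothetical).
import random

projects = [(10.0, 28.0), (25.0, 80.0), (50.0, 180.0)]  # (KLOC, actual person-months)

def mmre(sol):
    a, b = sol
    return sum(abs(act - a * k ** b) / act for k, act in projects) / len(projects)

HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.05, 2000
bounds = [(0.5, 6.0), (0.8, 1.3)]                      # search ranges for a and b
memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(HMS)]

for _ in range(ITERS):
    new = []
    for d, (lo, hi) in enumerate(bounds):
        if random.random() < HMCR:                     # take a value from harmony memory
            x = random.choice(memory)[d]
            if random.random() < PAR:                  # pitch adjustment
                x += random.uniform(-BW, BW)
        else:                                          # random improvisation
            x = random.uniform(lo, hi)
        new.append(min(hi, max(lo, x)))
    worst = max(range(HMS), key=lambda i: mmre(memory[i]))
    if mmre(new) < mmre(memory[worst]):                # replace the worst harmony if improved
        memory[worst] = new

best = min(memory, key=mmre)
print(f"a={best[0]:.2f}, b={best[1]:.2f}, MMRE={mmre(best):.2%}")
```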
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes quickly, with new servers, services, connections, and ports added often, at times daily, and with an uncontrolled inflow of laptops, storage media and wireless networks. With the increasing number of vulnerabilities and exploits, coupled with the constant evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. In this paper a new approach, the Unified Process for Network Vulnerability Assessment (hereafter called unified NVA), is proposed; it is derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Integrating profiling into MDE compilers (ijseajournal)
Scientific computation demands ever more performance from its algorithms. New massively parallel architectures suit these algorithms well and are known for offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for newcomers. This work aims at improving performance by feeding back into the high-level models specific execution data from a profiling tool, enhanced by advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. Hence, this work keeps model and code coherent while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, responsiveness and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, together with the challenges faced during the project life cycle and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were made through statistical analysis. The paper concludes with an analysis of the results.
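As an illustration of the kind of statistical comparison described, the sketch below compares response-time samples from two implementations using summary statistics and Welch's t-test (via SciPy); the figures are hypothetical and the sketch is not tied to the PEPAS study itself.

```python
# Hypothetical response-time comparison between two implementations (milliseconds).
import statistics
from scipy import stats  # Welch's t-test

impl_a = [210, 225, 198, 240, 215, 230, 205, 220]
impl_b = [180, 195, 175, 205, 185, 190, 178, 200]

for name, sample in (("A", impl_a), ("B", impl_b)):
    print(f"impl {name}: mean={statistics.mean(sample):.1f} ms, "
          f"stdev={statistics.stdev(sample):.1f} ms")

t, p = stats.ttest_ind(impl_a, impl_b, equal_var=False)  # unequal-variance t-test
print(f"Welch's t-test: t={t:.2f}, p={p:.4f}")
```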
Measure, Metrics, Indicators, Metrics of Process Improvement, Statistical Software Process Improvement, Metrics of Project Management, Metrics of the Software Product, 12 Steps to Useful Software Metrics
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
Validation of software systems is very useful in the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for evaluating non-functional requirements such as performance. Since non-functional requirements have a significant effect on the success of software systems, suitable means for their evaluation are needed. Moreover, if software performance is specified using performance models, it can be evaluated in the early stages of the development cycle. Therefore, modelling and evaluating non-functional requirements at the software architecture level, which is designed in the early stages of development and prior to implementation, is very effective. We propose an approach for evaluating the performance of software systems based on the blackboard technique at the software architecture level. In this approach, the software architecture using the blackboard technique is first described by UML use case, activity and component diagrams; the UML model is then transformed into an executable model based on timed coloured Petri nets (TCPN). Consequently, by executing this model and analysing its results, non-functional requirements including performance (such as response time) can be evaluated at the software architecture level.
Software Metrics for Identifying Software Size in Software Development Projects (Vishvi Vidanapathirana)
This paper identifies the software metrics best suited to determining software size in the current information technology (IT) industry.
Feature Model Configuration Based on Two-Layer Modelling in Software Product ... (IJECEIAES)
The aim of the Software Product Line (SPL) approach is to improve the software development process by producing software products that match the stakeholders' requirements. One of the important topics in SPLs is the feature model (FM) configuration process, where the purpose of configuration is to select and remove specific features from the FM in order to produce the required software product. At the same time, detecting differences between an application's requirements and the available capabilities of the implementation platform is a major concern of application requirements engineering: implementing the selected FM features may need software and hardware infrastructure, such as a database, operating system and hardware, that the stakeholders cannot provide. We address the FM configuration problem by proposing a method that employs a two-layer FM comprising application and infrastructure layers. We also illustrate this method with a case study in the SPL of a sample E-Shop website. The results demonstrate that the method can support both functional and non-functional requirements and can solve the problems arising from a lack of attention to implementation requirements in the SPL FM selection phase.
SOFTWARE DESIGN ANALYSIS WITH DYNAMIC SYSTEM RUN-TIME ARCHITECTURE DECOMPOSITION (ijseajournal)
Software re-engineering involves studying the target system's design and architecture. However, enterprise legacy software systems tend to be large and complex, making the analysis of system design architecture a difficult task. To solve this problem, the study proposes an approach that dynamically decomposes software architecture using run-time system information to reduce the complexity associated with analyzing large-scale architecture artifacts. The study demonstrates that dynamic architecture decomposition is an efficient way to limit the complexity and risk associated with re-engineering activities of a large legacy system. This new approach divides the system into a collection of meaningful modular parts with low coupling, high cohesion, and a minimal interface, a division that facilitates design analysis and the incremental software re-engineering process. The paper details two major techniques for decomposing legacy system architecture. The approach is also supported by automated reverse engineering tools developed during the course of the study. The preliminary results indicate that this novel approach is very promising.
A METRICS-BASED MODEL FOR ESTIMATING THE MAINTENANCE EFFORT OF PYTHON SOFTWARE (ijseajournal)
Estimating software maintenance effort is a substantial area of software project management, and better maintenance effort estimation improves the overall performance and efficiency of software. The Constructive Cost Model (COCOMO) and other effort estimation models described in the literature are inappropriate for the Python programming language. This research aimed to modify COCOMO II by considering a range of factors influencing Python maintenance effort and incorporating size and complexity metrics to estimate maintenance effort. A within-subjects experimental design was adopted and an experiment questionnaire was administered to forty subjects, who rated the maintainability of twenty Python programs. Data collected from the questionnaire were analyzed using descriptive statistics, and metric values were collected using a metric tool developed for this purpose. The subject ratings of software maintainability were correlated with the developed model's maintenance effort; a strong correlation of 0.610 was reported, indicating that the model is valid.
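The validation criterion quoted is a correlation of 0.610 between subject ratings and the model's estimates. The snippet below shows how such a Pearson correlation can be computed; the rating and estimate values are hypothetical, not the study's data.

```python
# Pearson correlation between maintainability ratings and model-estimated effort
# (values are hypothetical, for illustration only).
import numpy as np

subject_ratings = np.array([3.2, 4.1, 2.5, 3.8, 4.5, 2.9])        # averaged questionnaire ratings
model_effort = np.array([42.0, 55.0, 30.0, 50.0, 61.0, 35.0])     # estimated person-hours

r = np.corrcoef(subject_ratings, model_effort)[0, 1]
print(f"Pearson r = {r:.3f}")
```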
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funding during software development. Software development life cycles involve many activities and skills, and to avoid risks the best software estimation technique should be employed. Therefore, this research conducted a comparative study that considers the accuracy, usage, and suitability of existing methods; it is intended to be useful for project managers and project consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques, including linear regression, are examined: model-based, composite and regression techniques cover COCOMO, COCOMO II, SLIM and multiple linear regression respectively, while expertise-based and rule-based approaches are considered among the non-algorithmic methods. However, these techniques need some advancement to reduce the errors experienced during software development. This paper therefore proposes a model for software estimation that can be helpful to information technology firms, researchers and other organizations that use information technology in processes such as budgeting and decision-making.
Analysis of software cost estimation using fuzzy logic (ijfcstjournal)
The growing use of software and the resource constraints in software project development call for more accurate estimates of cost and effort, because of their importance in programme planning, coordinated scheduling and resource management, including staffing levels and software design using modern tools and modelling methods. Effective control of investment in software development is achieved through accurate cost estimation. Accurate Software Cost Estimation (SCE) is very difficult in the early stages of software development, because many of the input parameters that affect software effort are vague and uncertain at that point. SCE, as the basis of software project development planning, needs to be highly accurate: if the estimate is below the actual value, confidence is reduced and the project risks failure; conversely, if the project is estimated above the actual value, the result is unhelpful investment and a waste of resources. Deterministic methods are commonly used in the evaluation of software projects, but the software world is far from linear, and nowadays nonlinear and non-probabilistic methods should be used for performance evaluation and estimation. In this paper, we study SCE using Fuzzy Logic (FL) and compare it with the COCOMO model. The results of our investigations show that FL performs well for SCE.
Efficient Indicators to Evaluate the Status of Software Development Effort Es... (IJMIT JOURNAL)
Development effort estimation is an undeniable part of project management that considerably influences the success of a project, and inaccurate, unreliable effort estimates can easily lead to project failure. Because of its specific characteristics, accurate effort estimation in software projects is a vital management activity that must be carried out carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of the estimates is not satisfying, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and few works investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate an organization's situation with respect to effort estimation. The proposed framework includes various indicators that cover the critical issues in the field of software development effort estimation. Since the capabilities and shortcomings of organizations differ, the proposed indicators support a systematic approach in which the strengths and weaknesses of organizations in effort estimation are discovered.
How Should We Estimate Agile Software Development Projects and What Data Do W... (Glen Alleman)
Estimating techniques for an acquisition program progress from analogies to the actual-cost method as the program matures and more information becomes known. The analogy method is most appropriate early in the program life cycle, when the system is not yet fully defined.
Optimizing Time and Effort Parameters of COCOMO II Using Fuzzy Multi-objectiv... (TELKOMNIKA JOURNAL)
Estimating the effort, cost, and schedule of software projects is a frequent challenge in software development, and a bad estimate results in bad management of a project. Various estimation models have been defined for this purpose; the Constructive Cost Model II (COCOMO II) is one of the most famous models for estimating effort, cost, and schedule. To estimate the effort, cost, and schedule of a software project, COCOMO II takes as inputs the Effort Multipliers (EM), Scale Factors (SF), and Source Lines of Code (SLOC). Evidently, this model still lacks accuracy in both the estimated effort and the development time. In this paper, we introduce the use of the Gaussian Membership Function (GMF) of fuzzy logic and the Multi-Objective Particle Swarm Optimization (MOPSO) method to calibrate and optimize the parameters of COCOMO II, in order to achieve a better level of accuracy. The Nasa93 dataset is used to implement the proposed method. The experimental results show that the proposed method reduces the error down to 11.89% and 8.08% compared with the original COCOMO II, achieving better results than previous studies.
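To make the GMF idea concrete, the sketch below fuzzifies a COCOMO II cost-driver rating with Gaussian membership functions; the set centres and spreads are invented for illustration and do not come from the paper.

```python
# Gaussian membership functions over a hypothetical COCOMO II cost-driver scale.
import math

def gaussian_mf(x, centre, sigma):
    """Degree of membership of x in a fuzzy set with a Gaussian shape."""
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

# Hypothetical fuzzy sets for a cost driver rated on a 1..5 scale.
sets = {"low": (1.5, 0.6), "nominal": (3.0, 0.6), "high": (4.5, 0.6)}

rating = 3.6
memberships = {name: gaussian_mf(rating, c, s) for name, (c, s) in sets.items()}
print(memberships)  # e.g. partial membership in both "nominal" and "high"
```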
Enhancing software development cost control by forecasting the cost of rework... (nooriasukmaningtyas)
Industrial reports show massive cost overruns associated with software
projects. The cost of software reworks constitutes a large portion of the
overall cost, reflecting a substantial challenge in cost control. Earned value
management (EVM) is the most recognized model for project cost control.
However, it shows many limitations in forecasting the software project cost,
leading to a considerable challenge in cost control. Nevertheless, the major
EVM limitation found is its inability to forecast the cost of software rework.
This research investigates the factors behind this limitation and suggests an
enhanced EVM model. The significant contribution of this research is its
incorporation of software-related factors into the EVM model. We
introduced the software rework index (SRI), which is incorporated into the
traditional EVM model to enhance its predictability of the software project
cost at completion, including the rework cost. We defined the SRI in terms of
two factors: product functional complexity and the team competency.
Finally, we evaluated the proposed model using a dataset drawn from five
actual projects. The results showed a significant enhancement in forecasting
accuracy.
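Standard EVM forecasts cost at completion as EAC = AC + (BAC − EV) / CPI, with CPI = EV / AC. The sketch below shows that textbook formula plus a purely illustrative adjustment by a rework index; the SRI weighting used here is an assumption for demonstration, not the formula proposed in the paper.

```python
# Textbook EVM forecast plus an illustrative rework-index adjustment.
# The SRI weighting below is a hypothetical example, not the paper's model.

def eac(bac, ev, ac, sri=0.0):
    """Estimate at completion; sri inflates the remaining work by an assumed rework share."""
    cpi = ev / ac                      # cost performance index
    remaining = (bac - ev) / cpi       # standard EVM forecast of remaining cost
    return ac + remaining * (1.0 + sri)

bac, ev, ac = 100_000.0, 40_000.0, 50_000.0   # budget, earned value, actual cost (hypothetical)
print(f"EAC without rework: {eac(bac, ev, ac):,.0f}")
print(f"EAC with SRI=0.15 : {eac(bac, ev, ac, sri=0.15):,.0f}")
```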
A DECISION SUPPORT SYSTEM FOR ESTIMATING COST OF SOFTWARE PROJECTS USING A HY... (ijfcstjournal)
One of the major challenges in software today is software cost estimation, which refers to estimating the cost of all activities including software development, design, supervision, maintenance and so on. Accurate cost estimation of software projects allows internal and external processes, staff work, effort and overheads to be coordinated with one another. In managing software projects, estimation must be taken into account in order to reduce costs, schedule problems and possible risks and to avoid project failure. In this paper, a decision support system combining a multi-layer artificial neural network and a decision tree is proposed to estimate the cost of software projects. In the model included in the proposed system, the normalization of factors, which is vital in evaluating effort and cost estimates, is carried out using a C4.5 decision tree, while training and testing of the factors are performed by the multi-layer artificial neural network and the most optimal values are assigned to them. Experimental results and evaluations on the NASA60 dataset show that the proposed system has a lower total average relative error than the COCOMO model.
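A minimal sketch of this kind of hybrid is shown below, using scikit-learn's decision tree and multi-layer perceptron regressors on synthetic cost-driver data; this is only an assumed pipeline for illustration, not the authors' system (which uses C4.5 specifically).

```python
# Minimal hybrid sketch: a decision tree derives a normalizing feature,
# then a multi-layer neural network produces the effort estimate.
# Synthetic data and pipeline are assumptions for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 100, size=(60, 3))                              # hypothetical cost drivers
y = 100 + 2.9 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 10, 60)      # synthetic effort values

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
X_aug = np.column_stack([X, tree.predict(X)])                      # tree output as extra feature

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0).fit(X_aug, y)
pred = net.predict(X_aug)
mmre = np.mean(np.abs(y - pred) / np.abs(y))
print(f"in-sample MMRE = {mmre:.2%}")
```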
Effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Effort estimation is essential for many people and different departments in an organization.
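As a concrete illustration of turning size into an effort prediction, the snippet below applies the classic basic-COCOMO equations (effort = a · KLOC^b person-months, duration = c · effort^d months) with the published organic-mode coefficients; it is a generic textbook example, not tied to any specific model discussed above, and the 32 KLOC project size is hypothetical.

```python
# Basic COCOMO, organic mode (textbook coefficients a=2.4, b=1.05, c=2.5, d=0.38).
def basic_cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05        # person-months
    duration = 2.5 * effort ** 0.38    # months
    staff = effort / duration          # average headcount
    return effort, duration, staff

effort, duration, staff = basic_cocomo_organic(32.0)  # hypothetical 32 KLOC project
print(f"effort={effort:.1f} PM, duration={duration:.1f} months, staff={staff:.1f}")
```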
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Search and Society: Reimagining Information Access for Radical Futures
International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.1, January 2017
DOI: 10.5121/ijsea.2017.8103
FACTORS ON SOFTWARE EFFORT ESTIMATION
Simon WU Iok Kuan
Faculty of Business Administration, University of Macao, Macau, China
ABSTRACT
Software effort estimation is an important process in the system development life cycle, as inaccurate estimates by project designers may affect the success of software projects. In the past few decades, various effort prediction models have been proposed by academics and practitioners. Traditional estimation techniques include Lines of Code (LOC), the Function Point Analysis (FPA) method and Mark II Function Points (Mark II FP), which have proven unsatisfactory for predicting the effort of all types of software. In this study, the author proposes a regression model to predict the effort required to design small and medium scale application software. To develop the model, the author used 60 completed software projects developed by a software company in Macau. From these projects, the author extracted factors and applied them to a regression model. A software effort prediction model with an accuracy of MMRE = 8% was constructed.
KEYWORDS
Effort Estimation, Software Projects, Software Applications, System Development Life Cycle
1. INTRODUCTION
A central problem faced by project designers in controlling and managing software projects is overrunning the effort estimate. Inaccurate effort estimates prevent project designers from making correct decisions and can lead to the failure of the entire software project development [1]. Over the past couple of decades, there has been an increasingly strong demand for software development teams to build quality application software for competitive global markets [2]. The total investment in application software development and maintenance has been overrun repeatedly over the past ten years. As analyzed by [3] [4], effort and schedule overruns varied substantially from 41% to 258%, and total investment over-prediction ranged from 97% to 151%. In addition, research conducted by US government agencies revealed that 60% of software development projects overran their original scheduled completion time, 50% of these projects overran their estimated costs, and 46% of the completed projects were not usable at all [5]. These results indicate not only the importance of planning and controlling software projects, but also the importance of early effort estimation in software project development.
Over the years, there have been many discussions of effort estimation for software project development [6] [7] [8] [9] [10] [11] [12]. However, some of these well-developed models cannot be used to make early estimates. Other models require project developers to spend more time understanding both user requirements and program specifications, or to wait until the programs are completely designed. For example, COCOMO measures program size as total LOC for development effort estimation. Unfortunately, LOC is unknown until the entire application software is developed, which makes the effort estimate unstable. Similarly, the FPA method predicts program size from inputs, master files, logical files, interfaces and outputs. Although this method has been more widely used by organizations, particularly in Europe, than LOC, the size is still not known until the design phase is complete [13]. Because of the difficulty of calibrating early effort estimates using either LOC or FPA, it is therefore necessary to propose a general model for making effort estimates for small and medium application software.
2. LITERATURE REVIEW
Software effort estimation is one of the most important activities in the software development life cycle [10], and many estimation models have been proposed previously. These models are classified into non-algorithmic models and algorithmic models [11]. Non-algorithmic models are mainly based on comparative methodologies such as Expert Judgment and Machine Learning techniques [14] [15], while algorithmic models are constructed from historical numerical data [16]; SLIM is one example [17] [18]. COCOMO is a well-known estimation model which is procedurally complete and thoroughly documented for effort estimation [19]. Using the COCOMO method, the effort estimate proceeds through two steps. First, at the basic level of COCOMO, a nominal effort estimate is calculated from a formula in which size measured in LOC is the only independent variable. Second, the nominal effort estimate is multiplied by a composite multiplier function, m(X), where X represents the values of the 15 independent variables, referred to as cost drivers, in the intermediate level of COCOMO. The functional form of the model is shown below:
E = a_i S^(b_i) m(X)
where S is the total program size measured in LOC and m(X) is the multiplier function.
Each cost driver is rated along a gradation from very low to very high, and each rating determines the corresponding multiplier. The coefficients a_i and b_i are determined by the development mode and the model level. Boehm [20] categorized software projects into three modes, namely organic, semidetached and embedded. Projects of the organic type are relatively small in terms of program size, require little innovation, and are designed by the companies themselves. Projects classified as embedded are large in terms of functions, must operate within tight constraints, and require high innovation. Projects of the semidetached class fall somewhere between organic and embedded [19]. These distinctions make it difficult for project designers to estimate accurately the effort required for application software [21].
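To make the two-step calculation above concrete, the following is a minimal sketch in Python. The organic-mode coefficients (a = 3.2, b = 1.05) are the commonly published Intermediate COCOMO values rather than figures taken from this paper, size is assumed to be expressed in KLOC, and the two cost-driver multipliers in the example are placeholders chosen purely for illustration.

def cocomo_effort(kloc, a=3.2, b=1.05, multipliers=None):
    # Nominal effort a * S^b (person-months), scaled by the composite
    # multiplier m(X), i.e. the product of the selected cost-driver values.
    nominal = a * (kloc ** b)
    m_x = 1.0
    for value in (multipliers or []):
        m_x *= value
    return nominal * m_x

# Example: a 40 KLOC organic-mode project with two non-nominal cost drivers.
print(round(cocomo_effort(40, multipliers=[1.15, 0.88]), 1))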
In addition to COCOMO, Albrecht [22] [23] proposed another method, named FPA, for predicting application software effort from program requirements. This method measures the functional requirements of a system according to their complexity. The function types of a system, such as external inputs, outputs, inquiries, external interfaces to other systems, and logical internal files, are derived by classifying the system components perceived by the counters; they are then classified further and added together. According to the number of data elements of each component, a component is further categorized as simple, average or complex, and is then assigned a number of points according to its type and complexity. The total count for a system is calculated by multiplying the sum of function points for all user function types by the technical complexity factor (TCF). The TCF is obtained by rating the degree of influence (DI) of the 14 components [7].
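The arithmetic described above can be sketched as follows. The complexity weights are the commonly published Albrecht/IFPUG values, TCF = 0.65 + 0.01 × ΣDI is the standard adjustment formula, and the counts and DI ratings in the example are invented purely for illustration; none of these numbers come from this paper.

# Complexity weights per function type: (simple, average, complex).
WEIGHTS = {
    "external_input":     (3, 4, 6),
    "external_output":    (4, 5, 7),
    "external_inquiry":   (3, 4, 6),
    "internal_file":      (7, 10, 15),
    "external_interface": (5, 7, 10),
}

def unadjusted_fp(counts):
    # counts maps each function type to (n_simple, n_average, n_complex).
    return sum(n * w
               for ftype, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[ftype]))

def adjusted_fp(ufp, degrees_of_influence):
    # TCF = 0.65 + 0.01 * sum(DI), each DI rated 0..5 for the 14 factors.
    tcf = 0.65 + 0.01 * sum(degrees_of_influence)
    return ufp * tcf

counts = {"external_input": (5, 3, 1), "external_output": (4, 2, 0),
          "external_inquiry": (2, 1, 0), "internal_file": (1, 2, 0),
          "external_interface": (0, 1, 0)}
print(round(adjusted_fp(unadjusted_fp(counts), [3] * 14), 1))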
Despite these research efforts, the proposed models have proven unsatisfactory for predicting the effort of different types of software applications [9]. The COCOMO model consists of three sub-models, namely Basic, Intermediate, and Detailed COCOMO. According to the evaluation by Boehm [20], the Intermediate model gives better effort predictions than the Basic model, while the Detailed model is only slightly better than the Intermediate one [18] [24]. However, the 15 cost drivers of the Intermediate model are scalars ranging from 0.7 to 1.66 with a nominal value of one; these values represent the characteristics and types of system projects as well as the environment in which the projects were constructed. The results of the Intermediate model are therefore significantly influenced by the values of those cost drivers, which in turn requires detailed information about the software projects and the development environment to be identified and gathered. Since the values of the cost drivers were originally derived from software projects constructed during the 1960s and 1970s, the accuracy, reliability and validity of the model remain highly uncertain in today's complex software environment [24]. Consequently, there is a high possibility of inaccurate effort estimates in such an environment.
For the FPA method, several studies reported that project designers faced similar difficulties during system construction. For example, the counting experience of the project counters varies from person to person, so the cost of reaching the final function point count may differ [25] [26] [27]. In addition, when the function point data were counted by system analysts at a software house, the time spent calculating system functions was typically 40 hours for a medium-size system (800-2,400 FPs) and even more for large (>2,400 FPs) computer application systems. Furthermore, a study reported by [26] found that, owing to the variability in function point counting, the function point measures of the same system obtained by different counters differed by an average of 12.2%. Jeffery [28] reported an even worse result, with 30% variance within the same company and more than 30% among different companies. Because of the subjectivity with which function types are categorized as simple, average or complex, the potential for error variance exists.
3. OBJECTIVE OF THE STUDY
The aim of this research is to find out how design factors, such as Development kit, Designer experience, Number of programmers, Complexity, and Education level, affect the effort required for system development. These factors were chosen because they do not depend on counting total lines of code or function points. Unlike LOC and FPA, whose effort estimates rely on information that becomes available only after preliminary and detailed design, our model makes the effort estimate earlier, because these factors can be determined before the application software is fully developed.
4. PROJECT DATA
The author interviewed a local software company from which the data were collected. The company designs small and medium systems for local customers and clients. The characteristics of the system projects are as follows: they are small to medium in size, with LOC ranging from 5,875 to 80,918; they were all developed using a fourth generation programming language; they relate to shipment, import and export, inventory control and report generation activities; and they were developed by a small team of two to five programmers and a system analyst. The software company has a well-developed way of keeping track of detailed information about the developed programs, and all collected software projects were developed following a standard system development life cycle. To confirm the reliability of both the dependent and independent variables, all software projects were verified by a programmer and double checked by a system analyst; minor mistakes were reported and corrected. The sample consists of 60 completed software projects, of which 40 were drawn as the main sample and 20 as the holdout sample. The main sample was used to develop the prediction model, and the holdout sample was used to measure the validity of the proposed model.
5. RESEARCH MODEL AND ITS VARIABLES
Upon data collection, the following variables were proposed; their definitions are explained below. Effort is the dependent variable, referring to the total man-hour effort required to build a software project. The independent variables are Development_kit (Dev_kit), Designer_experience (Designer_exp), No_of_programmers (No_prog), Complexity (Comp) and Education_level (Edu_level).
5.1 Effort
This variable captures the effort (man-hours) spent by project developers to design the application software. Effort is measured in either man-hours or man-months depending on the size of the software projects [23] [27]. In this study we use man-hours because the software projects are small to medium and some did not last several months. For the software projects studied, only the time spent by project designers on analysis and design is counted, while the time spent in discussions with clients and end users is excluded. The measurement used is the total number of man-hours for a single software project. The software company has a very good practice of recording detailed information for each developed project, such as the time spent, the number of project designers assigned and the development tool used, so the data collection process was easy and straightforward.
5.2 Dev_kit
This variable measures the complexity of the system development kit used by the project designers. Usually, the complexity of a development kit correlates with the time required to develop software projects, as a good development kit makes programmers more productive during system development. A suitable development kit supports the construction process by automating tasks at every stage of the system development life cycle. It facilitates interaction among project designers by diagramming a dynamic, iterative process, rather than one in which changes are cumbersome [29]. It is also a useful tool for enabling project designers to clarify end users' requirements at the very early stage of the system development life cycle [30]. A CASE tool is the development kit commonly used to support the development process in many companies. This factor is measured with a five-point Likert-like scale ranging from (1) very low productivity to (5) very high productivity.
5.3 Designer_exp
This variable measures the actual working experience of project designers in designing application software in the computer industry. The actual experience of project designers in developing software projects and their experience with a specific programming language are key determinants. Intuitively, an experienced project designer who has mastered the programming language in use and has spent a number of years developing software projects will introduce fewer errors into the program code, which leads to less time spent developing and maintaining programs in the future. Thus, the more years a designer has served in the industry, the higher the level of working experience the designer has gained. When more than one designer participates in a project, we take the average years of experience across the team members.
5.4 No_prog
This variable counts the number of project designers working collaboratively as a team. To ensure that a late project can still be completed on time, project designers often add extra programmers [31]. Sometimes this arrangement does not work well, especially when there is a lack of proper communication among project designers and no training is offered before development, which can slow down the development process and lead to many problems. However, this situation did not arise in our study, because the software projects developed by the teams are small to medium in terms of LOC, and it is relatively easy for a project designer to make an accurate estimate before a software project starts; therefore, no additional members were brought into late projects. For this variable, the detailed records of the developed projects made it straightforward to collect the number of project developers responsible for each project.
5.5 Comp
This variable refers to the degree of complexity of the designed program. A thorough understanding of the software development process clarifies the relationship between program complexity and maintenance effort: high complexity makes it more difficult for project designers to quickly and accurately understand programs before they are developed or repaired [32]. The higher the level of complexity of a program, the greater the effort required by the project designer [33], especially when a program has highly interactive modules that communicate not only within the program itself but also with modules in other programs; this increases the time project designers need to design the software. In this study, the variable is assessed by examining the system specifications and design specifications prepared by the company during the analysis and design phases. Given the characteristics of the collected software projects, they are all business-oriented programs, and the determination of program complexity is under the control of the project designers. The data for this variable are collected using a five-point Likert-like scale ranging from (1) very low complexity to (5) very high complexity.
5.6 Edu_level
This variable measures the level of education that a project designer has acquired in a related field. Many companies prefer to recruit programmers who not only have extensive industry experience but are also well trained, with at least a bachelor degree or higher in a related field. Project designers with a higher level of education can usually solve programming problems more easily than those without. To measure this factor, we use a five-point Likert-like scale ranging from (1) very low level of education to (5) very high level of education.
Following the discussion of the variables, a linear regression model is hypothesized, as shown in the following equation.
Effort = α + β1·Dev_kit + β2·Designer_exp + β3·No_prog + β4·Comp + β5·Edu_level
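As an illustration only, the hypothesized model could be fitted by ordinary least squares as in the following sketch. The author used the 40-project main sample (and SPSS) for this; since those data are not published, the predictor values and effort figures below are invented, and only the structure of the calculation is meant to match the equation above.

import numpy as np

# Columns: Dev_kit, Designer_exp, No_prog, Comp, Edu_level (hypothetical data).
X = np.array([[4, 6.0, 3, 4, 4],
              [3, 5.5, 2, 3, 5],
              [5, 7.0, 4, 4, 4],
              [2, 3.0, 2, 2, 3],
              [4, 6.5, 5, 5, 4],
              [3, 4.0, 3, 3, 5]], dtype=float)
y = np.array([700, 520, 880, 350, 950, 610], dtype=float)  # effort in man-hours

X_design = np.column_stack([np.ones(len(X)), X])      # prepend intercept column (alpha)
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)  # [alpha, beta1..beta5]
print(dict(zip(["alpha", "b_dev_kit", "b_designer_exp",
                "b_no_prog", "b_comp", "b_edu_level"], coefs.round(2))))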
6. DATA ANALYSIS
The general descriptive statistics (mean and standard deviation) of the tested variables are presented in Table 1. To identify the design factors influencing the effort estimate, we used SPSS to run a correlation test looking for potentially useful relationships between the dependent variable and the independent variables; the resulting Pearson correlation coefficients are shown in Table 2. The results reveal strong relationships between the independent variables and the dependent variable. In particular, strong correlations are reported between Dev_kit, No_prog and Comp and the dependent variable Effort, which implies that the selected independent variables have potential predictive capability for the dependent variable.
Table 1 Summary statistics
Variable Mean Std. Deviation
Effort 662.19 252.21
Dev_kit 3.88 1.07
Designer_exp 5.69 2.11
No_prog 3.08 1.16
Comp 3.50 1.14
Edu_level 4.08 1.35
Table 2 Pearson correlation coefficients
Variable Effort Dev_kit Designer_exp No_prog Comp
Dev_kit .720 - - - -
Designer_exp -.251* .115* - - -
No_prog .931 .550 -.061* - -
Comp .645 .539 .037* .556 -
Edu_level .243* .217* -.223* .305 -.064*
All coefficients are significant at the 0.01 level, except those marked *, which are significant at the 0.05 level
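For readers who want to reproduce this kind of analysis outside SPSS, the sketch below computes the same kind of pairwise Pearson coefficients as Table 2. The values in the data frame are hypothetical stand-ins, since the company's project data are not publicly available.

import pandas as pd

df = pd.DataFrame({
    "Effort":       [700, 520, 880, 350, 950, 610],
    "Dev_kit":      [4, 3, 5, 2, 4, 3],
    "Designer_exp": [6.0, 5.5, 7.0, 3.0, 6.5, 4.0],
    "No_prog":      [3, 2, 4, 2, 5, 3],
    "Comp":         [4, 3, 4, 2, 5, 3],
    "Edu_level":    [4, 5, 4, 3, 4, 5],
})
print(df.corr(method="pearson").round(3))  # pairwise Pearson correlation matrix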
The results show that the design factors have a strong influence on the software effort estimate. One of the major factors is No_prog, as evidenced by its correlation with Effort of .93, which corresponds to roughly 86% of the variance in effort (r² ≈ .87). This emphasizes that the number of designers involved in a software project has a strong bearing on the effort it requires. Similarly, the correlation of Dev_kit (.72) shows that the development kit has a significant impact on the productivity of project designers. This is due to the fact that a good development kit maintains a complete set of data, such as input and output formats, system data and database structures, in a data repository, so that designers can select data items from the repository instead of designing them from the very beginning. As a result, the time spent by project designers in defining data items is reduced significantly. There is also a close relationship between Effort and Comp (.64): more development time is required when the internal complexity of a program is high. Thus, the effort estimate is expected to be affected by program complexity, especially when programs are highly complex.
In contrast, Edu_level (.243) has only a small effect on the effort estimate. A possible explanation is that nowadays project designers can acquire extensive programming knowledge by themselves without taking a degree course; some designers with higher qualifications can solve complex problems more easily, but this is not always the case. Similarly, Designer_exp shows only a weak negative correlation (-.251) and no practically significant influence on the effort estimate. This is because all of the system projects were developed using a single programming language: once a project designer is familiar with that language, he is likely to remain productive across the remaining programs. Moreover, all programs developed by the company are small to medium, so extensive programming experience is less important. This is in line with previous studies [34].
7. DISCUSSIONS
The model proposed in this study may not be an accurate predictor of future effort, since it is strictly suitable only for the data from which it was constructed. To assess the accuracy and reliability of the proposed model for future effort estimation, we used an accuracy criterion for evaluation purposes. The accuracy and reliability of a model are highly important, as they directly affect the success or failure of system effort prediction; project scheduling, controlling, coordination and staffing decisions always depend on accurate estimates [13]. Sometimes an inaccurate model can still be consistent if it uniformly misestimates effort for a set of software project data, but if a model is not consistent, management may not rely much on its estimates, and an inconsistent model is therefore of little use in practice.
The accuracy of a given software project estimate is measured by the magnitude of relative error (MRE), calculated for a given project by the following formula:
MRE = 100 × | (Actual Effort − Estimated Effort) / Actual Effort |
Thus, the accuracy of prediction for a software project is inversely proportional to its MRE value. The mean MRE (MMRE) is the mean of this indicator over all observed data in the sample; a lower MMRE value usually indicates a more accurate model [35]. Prediction at a given level, Pred(p), gives an indication of overall fit for a set of software projects, based on the MRE values of each pair of data:
Pred(p) = i/n
where p is the selected threshold value for MRE, i is the number of software projects whose MRE is smaller than or equal to p, and n is the total number of software projects.
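The following is a small sketch of these three indicators with invented actual/estimated effort pairs; the threshold p is passed as a percentage, so pred(pairs, 20) corresponds to the PRED(0.2) column reported later in Table 3.

def mre(actual, estimated):
    # Magnitude of relative error for one project, as a percentage.
    return 100 * abs(actual - estimated) / actual

def mmre(pairs):
    # Mean MRE over all (actual, estimated) pairs.
    return sum(mre(a, e) for a, e in pairs) / len(pairs)

def pred(pairs, p):
    # Share of projects whose MRE is at most p (p given as a percentage).
    return sum(1 for a, e in pairs if mre(a, e) <= p) / len(pairs)

projects = [(700, 680), (520, 560), (880, 860), (350, 420), (950, 930)]
print(round(mmre(projects), 1), round(pred(projects, 20), 2))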
To measure the accuracy and reliability of the proposed model, the holdout sample of 20 software projects was used for validation. The accuracy of the predictions of the COCOMO model and the proposed model is shown in Table 3. Comparing the MMRE values of the two models, the proposed model has higher predictive capability than the COCOMO model. In addition, the results indicate that a simple model can generate less error than a complex one, because the design factors were extracted directly from the early stage of the system analysis phase. Likewise, when comparing the PRED values of the two models, our model has higher predicted values than COCOMO. In general, this is in line with our expectations. We therefore conclude that the proposed model is more accurate and reliable than the COCOMO model.
Table 3 Indicators of model accuracy
Model MMRE PRED(0.3) PRED(0.2) PRED(0.1)
COCOMO .13 .76 .61 .22
New Model .08 .82 .65 .31
8. CONCLUSION AND FURTHER RESEARCH WORK
In conclusion, there are several points to note. The first is that the results of the proposed model are promising, as they demonstrate the possibility of developing a prediction model for application software built with a fourth generation programming language. However, although the results are pleasing, it must be stressed that we are not proposing a general model for all types of application software. The software projects used as the basis for prediction come mainly from one small company, so the results may not generalize beyond that environment.
The second point is that we avoided using the COCOMO model and the FPA method for two reasons. First, the total LOC of a program is not known until it is fully developed. Second, the LOC counted differs from program to program and also varies from one programming language to another.
The third point is that it is possible to use design factors to estimate the total effort required for software project development. Such design factors are easily collected at an early stage of the system development life cycle when the company has good practices for managing system development activities. Using a simple regression model, a software effort prediction with an accuracy of MMRE = 8% was obtained, and this level of accuracy was achieved relatively easily without using a complex model such as COCOMO. We believe that our results may generalize to other software projects with similar characteristics.
9. LIMITATIONS
There are a couple of limitations: (1) the predicted results come mainly from software projects constructed by a single company, so the model's applicability may be restricted to projects designed by that company and to projects with similar characteristics; (2) the sample size is only 60 software projects, drawn mainly from a single company; to be more general, the sample should be larger and should contain projects from more than one company, so that the new model can be applied to other areas and its validity confirmed; (3) further research is also needed to compare the model against the FP method and other methods.
REFERENCES
[1] Fu, Ya-fang, Liu, Xiao-dong, Yang, Ren-nong, Du, Yi-lin and Li Yan-jie (2010), “A Software Size
Estimation Method Based on Improved FPA”, Second World Congress on Software Engineering,
Vol. 2, pp228-233.
[2] Hastings, T. E. & Sajeev, A. S. M. (2001), “A Vector-Based Approach to Software Size Measurement
and Effort Estimation”, IEEE Transactions on Software Engineering, Vol. 27, No. 4, pp.337-350.
[3] Norris, K. P. (1971), “The Accuracy of Project Cost and Duration Estimates in Industrial R&D”,
R&D Management, Vol. 2, No. 1, pp.25-36.
[4] Murmann, Philipp A. (1994), “Expected Development Time Reductions in the German Mechanical
Engineering Industry”, Journal of Product innovation Management, Vol. 11, pp.236-252.
[5] David Consulting Group (2012), “Project Estimating”, DCG Corporate Office, Paoli, 2007:
http://davidconsultinggroup.com/training/estimation.aspx (January, 2017)
[6] Boehm, Barry (1976), “Software Engineering”, IEEE Transactions on Computers, Vol. C-25, Issue 12,
pp1226-1241.
[7] Dreger, J. B. (1989), “Function Point Analysis”, Englewood Cliffs, NJ:Prentice-Hall.
[8] Smith, Randy K., Hale, Joanne E. & Parrish, Allen S. (2001), “An Empirical Study Using Task
Assignment Patterns to Improve the Accuracy of Software Effort Estimation”, IEEE Transactions on
Software Engineering, Vol. 27, No. 3, pp.264-271.
[9] Satapathy, Shashank Mouli, Kumar, Mukesh & Rath, Santanu Kumar (2013), “Class Point Approach
for Software Effort Estimation Using Soft Computing Techniques”, International Conference on
Advances in Computing, Communications and Informatics (ICACCI), pp178-183.
[10] Tariq, Sidra, Usman, Muhammad, Wong, Raymond, Zhuang, Yan & Fong, Simon (2015), “On
Learning Software Effort Estimation”, 3rd International Symposium and Business Intelligence, P79-
84.
[11] Bhandari, Sangeeta (2016), “FCM Based Conceptual Framework for Software Effort Estimation”,
International Conference on Computing for Sustainable Global Development, pp2585-2588.
[12] Moharreri, Kayhan, Sapre, Alhad Vinayak, Ramanathan, Jayashree & Ramnath, Rajiv (2016), “Cost-
Effective Supervised Learning Models for Software Effort Estimation in Agile Environments”, IEEE
40th Annual Computer Software and Applications Conference, p135-140.
[13] Mukhopadhyay, Tridas & Kekre, Sunder. (1992), “Software Effort Models for Early Estimation of
Process Control Applications”, IEEE Transactions on Software Engineering, Vol. 18, No. 10, pp.915-
924.
[14] Boehm, Barry W. (1995), “Cost Models for Future Software Life Cycle Processes: COCOMO 2.0”,
Annals of Software Engineering Special Volume on Software Process and Product Measurement,
Science Publisher, Amsterdam, Netherlands, 1(3), pp45-60.
[15] Srinivasan, Krishnamoorthy & Fisher, Douglas (1995), “Machine Learning Approaches to Estimating
Software Development Effort”, IEEE Transactions on Software Engineering, Vol. 21, No. 2, pp126-
137.
[16] Strike, Kevin, El Emam, Khaled & Madhavji, Nazim (2001), “Software Cost Estimation with
Incomplete Data”, IEEE Transactions on Software Engineering, Vol. 27, No. 10, pp215-223.
[17] Putnam, Lawrence H. (1978), “A General Empirical Solution to the Macro Software Sizing and
Estimating Problem”, IEEE Transactions on Software Engineering, Vol. SE-4, No. 4, pp345-361.
[18] Boehm, Barry W. (1981), “Software Engineering Economics”, Englewood Cliffs, NJ:Prentice-Hall.
[19] Subramanian, Girish H. & Breslawski, Steven (1995), “An Empirical Analysis of Software Effort
Estimate Alternations”, Journal of Systems Software, Vol. 31, pp135-141.
[20] Boehm, Barry W. (1984), “Software Engineering Economics”, IEEE Transactions on Software
Engineering, Vol. 10, pp4-21.
[21] Agrawal, Priya & Kumar, Shraddha (2016), “Early Phase Software Effort Estimation Model”,
Symposium on Colossal Data Analysis and Networking, pp1-8.
[22] Albrecht, Allen. J. (1979), “Measuring Application Development Productivity”, Proceedings of the
IBM Applications Development Symposium, pp83-92.
[23] Albrecht, Allen J. & Gaffney, John E. (1983), “Software Function, Source Lines of Code, and
Development Effort Prediction: A Software Science Validation”, IEEE Transactions on Software
Engineering, Vol. 9, No. 6, pp639-648.
[24] Hu, Qing, Plant, Robert & Hertz, David (1998), “Software Cost Estimation Using Economic
Production Models”, Journal of Management Information Systems, Vol. 15, No. 1, pp143-163.
[25] Bock, D. B. & Klepper, R. (1992), “FP-S: A Simplified Function Point Counting Method”, The
Journal of Systems Software, Vol. 18, pp245-254.
[26] Kemerer, Chris F. (1993), “Reliability of Function Points Measurement: A Field Experiment”,
Communications of the ACM, Vol. 36, No. 2, pp85-97.
[27] Lokan, Chris J. (2000). “An Empirical Analysis of Function Point Adjustment Factors”, Journal of
Information and Software Technology, vol. 42, pp649-660.
[28] Jeffery, J., Low, G. & Barnes, C. (1993), “Comparison of Function Point Counting Techniques”,
IEEE Transactions on Software Engineering, Vol. 19, No. 5, pp529-532.
[29] Misra, A. K. & Chaudhary, B. D. (1991), “An Interactive Structured Program Development Tool”,
IEEE Region 10 International Conference on EC3-Energy, Computer, Communication and Control
Systems, 3, 1-5.
[30] Kendall, K. E. & Kendall, J. E. (2005), “System Analysis and Design”, 6/e, Prentice-Hall.
[31] Brooks, F. (1975), “The Mythical Man-Month.” Addison-Wesley.
[32] Zhang, Xiaoni & Windsor, John (2003). “An Empirical Analysis of Software Volatility and Related
Factors”, Industrial Management & Data Systems, Vol. 103, No. 4, pp275-281.
[33] Kemerer, C. F. & Slaughter, S. (1997). “Determinants of Software Maintenance Profiles: An
Empirical Investigation”, Journal of Software Maintenance, Vol. 9, pp235-251.
[34] Krishnan, Mayuram S. (1998). “The Role of Team Factors in Software Cost and Quality”,
Information Technology & People, Vol. 11(1), pp20-35.
[35] MacDonell, S. G., Shepperd, M. J. & Sallis, P. (1997), “Metrics for Database Systems: An Empirical
Study”, Proceedings of the 4th International Software Metrics Symposium (Metrics 1997).
[36] MacDonell, S. G. (1994). “Comparative Review of Functional Complexity Assessment Methods for
Effort Estimation”, Software Engineering Journal, pp107-116.