The rise of mobile technologies such as smartphones and tablets connected to mobile networks is changing old habits and creating new ways for society to access information and interact with computer systems. Traditional information systems are therefore undergoing a process of adaptation to this new computing context. However, the characteristics of this new context are different: there are new features, and therefore new possibilities, as well as restrictions that did not exist before. As a result, systems developed for this environment have different requirements and characteristics from traditional information systems, which calls for a reassessment of current knowledge about planning and building systems in this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes are generally based on characteristics of the systems, attempting to quantify the complexity of implementing them. Hence, the main objective of this paper is to present an effort estimation model for mobile applications, and to discuss the applicability of traditional estimation models to systems developed in the context of mobile computing.
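As a minimal sketch of the characteristic-based estimation idea described above, the following example sums weighted counts of application features. The factor names and weights are hypothetical illustrations, not the model proposed in the paper.

```python
# Each factor maps to an assumed effort weight in person-hours; both the
# factor names and the weights are hypothetical, not the paper's model.
FACTORS = {
    "screens": 6.0,            # UI screens to build
    "network_endpoints": 4.0,  # remote APIs consumed
    "device_features": 8.0,    # sensors, camera, GPS, etc.
    "offline_sync": 12.0,      # local storage with synchronization
}

def estimate_effort(counts):
    """Weighted sum of counted application characteristics, in person-hours."""
    return sum(FACTORS[name] * n for name, n in counts.items())

app = {"screens": 10, "network_endpoints": 5, "device_features": 2, "offline_sync": 1}
print(estimate_effort(app))  # 10*6 + 5*4 + 2*8 + 1*12 = 108.0
```

A real model would calibrate such weights against historical project data rather than fixing them by hand.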
BUILDING INFORMATICS: REVIEW OF SELECTED INFORMATICS PLATFORM AND VALIDATING ... (IAEME Publication)
Automation has introduced a new dimension to project and construction execution in the construction field, and virtually every aspect of construction is being innovated with cutting-edge technology. In this study, cutting-edge technologies and their various validation platforms were evaluated. The following objectives were set and achieved: establishing the different tests that can be carried out to ascertain the functionality of an informatics platform, reviewing the features present in available informatics platforms, an exploratory study of platform validity through functionality tests, and developing a semantic icon functionality test. Ten (10) informatics platforms were selected for the case study, and 40 structured questionnaires were used to collect respondent data on the critical factors that influence the effective use of system usability tests on ICT informatics platforms and on the parameters for a newly generated Icon functionality Rating Scale (IRS). A new test protocol, tagged "IRS", was designed for carrying out icon functionality rating evaluation.
Genetic fuzzy process metric measurement system for an operating system (ijcseit)
The operating system (OS) is the most essential software of a computer system; deprived of it, the computer system is totally useless. It is the front end for accessing relevant computer resources, and its performance greatly affects the user's overall experience of the system. Related literature has tried different methods and techniques to measure the process metric performance of the operating system, but none has incorporated genetic algorithms and fuzzy logic, which is indeed a novel approach. Extending the work of Michalis, this research focuses on measuring the process metric performance of an operating system using a set of operating system criteria, fusing fuzzy logic to handle imprecision and a genetic algorithm for process optimization.
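The combination of fuzzy logic (for imprecise metric readings) and a genetic algorithm (for optimization) can be sketched as follows. The triangular membership function, the metrics, and the fitness target are illustrative assumptions, not the paper's actual design.

```python
import random

def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(reading):
    """Degree to which an imprecise load reading (0-100) counts as 'moderate'."""
    return tri(reading, 20.0, 50.0, 80.0)

def fitness(weights, samples):
    """Negative squared error of weighted fuzzy scores against target ratings."""
    err = 0.0
    for metrics, target in samples:
        score = sum(w * fuzzy_score(m) for w, m in zip(weights, metrics))
        err += (score - target) ** 2
    return -err

def evolve(samples, pop_size=20, gens=50, seed=1):
    """Tiny genetic algorithm: selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, samples), reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(2)              # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(2)                # point mutation, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.1, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, samples))

# Two (metrics, target-rating) samples: the weights should evolve so that
# the weighted, summed fuzzy scores approach the targets.
samples = [([50.0, 50.0], 1.0), ([10.0, 90.0], 0.0)]
best = evolve(samples)
print(round(sum(best), 2))  # close to 1.0, since fuzzy_score(50) is 1
```

The fuzzy layer absorbs measurement imprecision; the genetic layer searches the weight space without needing gradients.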
This document discusses directions for future research on information technology and educational management. It proposes researching: 1) different strategies for designing information systems and their impacts, 2) characteristics of effective systems and why users value them, and 3) factors influencing successful implementation like the organizational context. Future research should use longitudinal, empirical methods to study implementation processes, system usage levels, and impacts on effectiveness compared to pre-implementation baselines. The goal is to better understand relationships between design, use, and effects to overcome problems.
This document discusses diagnostics and fault detection methods for smart buildings. It begins by explaining the importance of diagnostics for quickly detecting and identifying faults in engineering systems. It then describes three main classes of fault detection methods for buildings: model-driven methods based on physical models, data-driven methods using machine learning on historical data, and hybrid methods. Several examples of automated fault detection methods are provided, such as for lighting systems and air conditioners. Current issues and the state of diagnostics in buildings are also summarized. In particular, it notes that while commercial products exist, widespread adoption has been limited and improvements are still needed in areas like sensor technology and handling multiple simultaneous faults.
Comparative analysis of augmented datasets performances of age invariant face... (journalBEEI)
The popularity of face recognition systems has increased due to their non-invasive method of image acquisition, which has enabled widespread applications. Face ageing is one major factor that influences the performance of face recognition algorithms. In this study, the authors present a comparative study of the two most accepted and experimented-with face ageing datasets (FG-Net and Morph II). These datasets were used to simulate age invariant face recognition (AIFR) models. Four types of noise were added to the two face ageing datasets at the preprocessing stage. The addition of noise at the preprocessing stage served as a data augmentation technique that increased the number of sample images available for deep convolutional neural network (DCNN) experimentation and improved the proposed AIFR model and the trait ageing feature extraction process. The proposed AIFR models are developed with the pre-trained Inception-ResNet-v2 deep convolutional neural network architecture. On testing and comparing the models, the results revealed that FG-Net is more efficient than Morph II, with an accuracy of 0.15%, loss function of 71%, mean square error (MSE) of 39% and mean absolute error (MAE) of -0.63%.
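The noise-injection augmentation step can be sketched as below. The four noise types shown (Gaussian, salt, pepper, speckle) are common choices and an assumption here, since the abstract does not name the ones used.

```python
import numpy as np

def augment(img, rng):
    """Return four noisy variants of a grayscale image with values in [0, 1]."""
    gauss = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
    salt = img.copy()
    salt[rng.random(img.shape) < 0.02] = 1.0        # random white pixels
    pepper = img.copy()
    pepper[rng.random(img.shape) < 0.02] = 0.0      # random black pixels
    speckle = np.clip(img * (1.0 + rng.normal(0.0, 0.05, img.shape)), 0.0, 1.0)
    return [gauss, salt, pepper, speckle]

rng = np.random.default_rng(0)
face = rng.random((64, 64))        # stand-in for a normalized face crop
variants = augment(face, rng)
print(len(variants))               # 4: each image yields four extra samples
```

Each source image thus contributes several perturbed copies to the DCNN training set, which is what increases the effective sample count.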
The document describes the methodology used for an IT capstone project, including requirements specification, analysis specification, design, and development and testing. It discusses the system development life cycle (SDLC) model used, including feasibility study, analysis, design, implementation, testing, and maintenance. For development and testing, it specifically discusses using a spiral lifecycle model with iterative prototyping. Each iteration involved gathering user data, planning the next iteration, and evaluating the design based on prior results before coding and testing the prototype.
Common protocol to support disparate communication types within industrial Et... (Maurice Dawson)
Owing to the increasing demand for reliable products built globally, and through the evolution of machine design, the need for an improved and common communications protocol across different geographical regions has intensified. In this paper, the goal is to show that the current protocols used to support disparate communication types in manufacturing have caused configuration complexity and an increase in monetary overhead for industrial system designers and end users. Through simulation of an industrial network, the packet timing and packet loss between peer-to-peer systems using similar protocols will be compared with two dissimilar protocol systems to establish the thesis. The internal validation research method used in this study will reveal the need for an all-inclusive protocol to eliminate the timing and packet loss issues and the systems' configuration complexities, and to reduce the monetary overhead currently associated with machine communications.
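A toy version of such a packet-timing comparison can be simulated as below. The delay and loss parameters for the two hypothetical protocol configurations are illustrative assumptions, not measurements from the paper.

```python
import random
import statistics

def simulate(n, mean_delay_ms, loss_rate, seed):
    """Simulate n packets; return mean delivery delay and observed loss rate."""
    rng = random.Random(seed)
    delays = [rng.expovariate(1.0 / mean_delay_ms)   # exponential delay model
              for _ in range(n) if rng.random() > loss_rate]
    return statistics.mean(delays), 1.0 - len(delays) / n

for name, delay, loss in [("single common protocol", 2.0, 0.001),
                          ("mixed disparate protocols", 5.0, 0.010)]:
    mean_d, observed_loss = simulate(10_000, delay, loss, seed=42)
    print(f"{name}: mean delay {mean_d:.2f} ms, loss {observed_loss:.2%}")
```

Even this crude model shows the shape of the argument: per-hop protocol conversions add delay and loss that a unified protocol would avoid.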
This document reviews and compares eight prominent models of user acceptance of information technology: the theory of reasoned action, technology acceptance model, motivational model, theory of planned behavior, combined TAM and TPB model, model of PC utilization, innovation diffusion theory, and social cognitive theory. It aims to empirically compare the models, formulate a unified model integrating elements of the eight models called UTAUT, and validate UTAUT using multiple data sets. The eight models are described and their constructs defined. Prior empirical comparisons of the models are discussed, noting limitations that the current study aims to address.
The document summarizes Tamara Lopez's PhD research proposal on reasoning about flaws in software design. The research aims to analyze software failures by taking a situational approach between the broad scope of systemic analyses and the narrow focus of means analyses. It will apply qualitative methods to examine how failures manifest and are addressed in software development. The goal is to better understand why some software fails and other software succeeds.
Software plays a critical role in businesses, governments, and societies, and improving the performance and quality of software is an important goal of software engineering. Mining software data has recently emerged as a promising means to meet this goal, due to two main trends: the increasing abundance of such data and its demonstrated helpfulness in solving numerous real-world problems. Poor performance costs the software industry millions annually in the form of lost revenue, hardware costs, damaged customer relations, and decreased productivity. Performance analysis and evaluation through data mining techniques can yield performance improvement suggestions for software developers.
This document summarizes a technical report on an empirical study of design pattern evolution in 39 open source Java projects. The study analyzed a total of 428 software releases from the projects to identify 10 common design patterns. A total of 27,855 instances of the patterns were found. The data collected includes the number of each pattern identified in each project release. This dataset will be further analyzed to understand how design patterns evolve over the lifetime of a software system.
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis... (IJMIT JOURNAL)
This study aims to develop a model of user acceptance for implementing the information security standard ISO 27001 in Turkish public organizations. The results of surveys performed in Turkey reveal that the information security legislation that public organizations have to obey is significantly related to user acceptance during the ISO 27001 implementation process. The fundamental components of our user acceptance model are perceived usefulness, attitude towards use, social norms, and performance expectancy.
The proposed methodology allows a comprehensive assessment of the various types of value generated by a PSI e-infrastructure for each stakeholder group, and also the interconnections among them.
Extract Business Process Performance using Data Mining (IJERA Editor)
This paper aims to analyze the performance of a business process using process mining. Performance is very important, especially in large systems. The process of repairing devices was used as a case study, and the Fuzzy Miner algorithm was used to analyze the performance of the process model.
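As a minimal illustration of this kind of performance analysis (not the Fuzzy Miner algorithm itself), the sketch below computes mean transition times between consecutive activities in a small, made-up repair-process event log.

```python
from collections import defaultdict
from datetime import datetime

log = [  # (case id, activity, timestamp), chronological within each case
    ("c1", "register", "2024-01-01 09:00"),
    ("c1", "repair",   "2024-01-01 10:30"),
    ("c1", "test",     "2024-01-01 11:00"),
    ("c2", "register", "2024-01-02 09:00"),
    ("c2", "repair",   "2024-01-02 09:40"),
]

def mean_transition_minutes(log):
    """Mean minutes spent between consecutive activities, per transition."""
    fmt = "%Y-%m-%d %H:%M"
    by_case = defaultdict(list)
    for case, act, ts in log:
        by_case[case].append((act, datetime.strptime(ts, fmt)))
    durations = defaultdict(list)
    for events in by_case.values():
        for (a1, t1), (a2, t2) in zip(events, events[1:]):
            durations[(a1, a2)].append((t2 - t1).total_seconds() / 60.0)
    return {k: sum(v) / len(v) for k, v in durations.items()}

print(mean_transition_minutes(log))
# {('register', 'repair'): 65.0, ('repair', 'test'): 30.0}
```

Transition times like these are the raw performance figures a process-mining tool annotates onto the discovered process model.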
Ijartes v2-i1-001: Evaluation of Changeability Indicator in Component Based Sof... (IJARTES)
Maintaining a software system is a major cost concern, and the cost of maintaining a software system depends on how changes are made to it. The maintainability of a system depends on the flow of the software, its design pattern, and CBSS. The maintainability phase of a software system has four parts: analyzing, testing, stability, and the changes made to the system. In some application areas these systems have emerged very rapidly. Many companies purchase software instead of developing it; these companies have no interest in testing the system but want smoothness in the flow of the system during changes.
Changeability is one of the characteristics of maintainability. Software changeability is associated with refactoring, which makes code simpler and easier to maintain (enabling all programmers to improve their code). Factors that affect changeability include coupling between modules, lack of code comments, and the naming of functions and variables. Basically, "changeability" is the ability of a product or software to change the structure of the program; it is the rate at which the product allows modification of its components.
In this paper, changeability-based cost estimation is performed. Initially, four components are taken and evaluated based on coupling, cohesion, and interface metrics. Next, some changes are made to the existing components, and the components are evaluated again. On the basis of these two evaluations, a conclusion is drawn about the changeability cost.
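The before/after evaluation described above can be sketched as follows. The scoring formula and the component metric values are illustrative assumptions, not the paper's metrics.

```python
def component_score(coupling, cohesion):
    """Higher score = easier to change (illustrative weighting, not a standard)."""
    return cohesion - 0.5 * coupling

def changeability_cost(before, after):
    """Sum of per-component score drops caused by a change."""
    return sum(max(0.0, component_score(*b) - component_score(*a))
               for b, a in zip(before, after))

# Four components as (coupling, cohesion) pairs, before and after a change.
before = [(2, 0.9), (4, 0.7), (1, 0.8), (3, 0.6)]
after  = [(3, 0.9), (4, 0.6), (1, 0.8), (5, 0.6)]
print(round(changeability_cost(before, after), 2))  # 1.6
```

Evaluating the same components twice and comparing scores is the essence of the two-evaluation procedure the abstract describes.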
Flexibility: a key factor to testability (ijseajournal)
Testability is an important software quality factor that is ineffective if it is not addressed at an early stage of the development life cycle, and it becomes even more essential in the case of object-oriented design. Flexibility is a key factor in testability analysis and measurement for delivering highly testable and maintainable software. It is a criterion of crucial significance to software developers, designers, and quality controllers: it constantly guides and supports them in avoiding wasted resources and enables designers to improve the development process continuously. Flexibility is concerned with building high-quality, reliable software within the constraints of cost and time, and it greatly influences cost, quality, and reliability during the software evolution process. Despite the fact that flexibility is a vital and highly significant aspect of software development processes, it is poorly managed. This paper focuses on the need for and importance of flexibility early in the design phase. A model has been proposed for measuring the flexibility of object-oriented designs by establishing a multiple linear regression, and the proposed model has been validated using an experimental tryout.
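The multiple-linear-regression step can be sketched with ordinary least squares. The metric names and data points below are invented for illustration and may differ from the paper's actual predictors.

```python
import numpy as np

# Design-metric samples: columns are [encapsulation ratio, inheritance depth].
X = np.array([[0.8, 2], [0.6, 4], [0.9, 1], [0.5, 5], [0.7, 2]], dtype=float)
y = np.array([0.80, 0.60, 0.90, 0.50, 0.75])   # flexibility ratings

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit

def predict(enc, depth):
    """Flexibility predicted by the fitted multiple linear regression."""
    return coef[0] + coef[1] * enc + coef[2] * depth

print(round(predict(1.0, 1), 3))  # 0.95 for this (exactly linear) toy data
```

Fitting the model to rated designs and then predicting flexibility for new designs is the pattern such a measurement model follows.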
Secured cloud support for global software (ijseajournal)
This document summarizes a research paper that proposes a methodology called TSPS (Theory/SWEBOK/Project Security) to improve software engineering education. The methodology aims to collaborate between academic and industrial practices. It involves students working on projects with guidance from both mentors and industry practitioners. Data from literature reviews on software security engineering education is analyzed. A cloud-based system is developed to securely store project documents by encrypting and splitting files across multiple cloud nodes. The methodology and secure cloud storage approach are concluded to provide strategies to mitigate risks in software projects and benefit both education and industry.
A model for run time software architecture adaptation (ijseajournal)
Since the global demand for software systems is increasing and environments and systems are constantly changing, the adaptability of software systems is of significant importance. Because the architecture of a software system is a high-level view of the system that makes modifiability possible at an overall level, changing the architecture configuration can be considered an effective approach to adapting software systems. In this study, the architecture configuration is modified through xADL, a highly flexible software architecture description language. Software architecture reconfiguration is done based on the existing rules of a rule-based system, written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video conferencing system, and a students' grading system, and it can be used with all types of architecture, including client-server architecture and service-oriented architecture.
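A minimal rule-based reconfiguration loop in the spirit of the three strategies above might look like this. The concrete conditions and actions are illustrative assumptions, not the paper's xADL rules.

```python
# Each rule: (strategy name, condition on runtime metrics, reconfiguration action).
RULES = [
    ("load_balancing",  lambda m: m["cpu"] > 0.8,            "add server replica"),
    ("fixed_bandwidth", lambda m: m["bandwidth_mbps"] < 10,  "throttle video streams"),
    ("fixed_latency",   lambda m: m["latency_ms"] > 200,     "route via nearer node"),
]

def reconfigure(metrics):
    """Return the architecture changes whose rule conditions fire."""
    return [action for _, cond, action in RULES if cond(metrics)]

print(reconfigure({"cpu": 0.95, "bandwidth_mbps": 50, "latency_ms": 250}))
# ['add server replica', 'route via nearer node']
```

In the paper's setting the actions would be edits to an xADL architecture description rather than strings, but the rule-evaluation shape is the same.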
FAST TRANSIENT RESPONSE LOW DROPOUT VOLTAGE REGULATOR (ijseajournal)
This paper presents the design of a Low Drop-Out (LDO) voltage regulator with fast transient response and low quiescent current in its PMOS-type operational amplifier. A band-gap reference is used to eliminate temperature dependence. The proposed LDO voltage regulator is implemented in 0.18-μm CMOS technology using folded-cascode CMOS amplifiers, which give high performance in terms of stability and provide a fast transient response with fast settling: the LDO provides its output regulated voltage at t equal to 2 ps with a transient voltage variation of less than 170 mV. In terms of DC response, the simulation results show that the accuracy of the output regulated voltage is 1.54 ± 0.009 V, with a power consumption of 1.51 mW.
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALE (ijseajournal)
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time consuming for software engineers despite proposals for bad smell detection and refactoring tools. Large Class is a kind of bad smell caused by large scale, and its detection is hard to achieve automatically. In this paper, a Large Class bad smell detection approach based on a class length distribution model and cohesion metrics is proposed. In programs, the lengths of classes conform to certain distributions; the class length distribution model is generalized to detect programs after grouping. Meanwhile, cohesion metrics are analyzed for bad smell detection. Bad smell detection experiments on open source programs show that Large Class bad smells can be detected effectively and accurately with this approach, and refactoring schemes can be proposed to improve the design quality of programs.
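The detection idea, flagging classes whose length is an outlier for the program's own length distribution and whose cohesion is low, can be sketched as below. The z-score and cohesion cutoffs are illustrative assumptions, not the paper's calibrated model.

```python
import statistics

def large_class_smells(classes, z_cut=1.5, cohesion_cut=0.3):
    """classes: list of (name, line_count, cohesion in [0, 1]).
    Flag classes that are length outliers AND poorly cohesive."""
    lengths = [n for _, n, _ in classes]
    mu, sd = statistics.mean(lengths), statistics.pstdev(lengths)
    return [name for name, n, coh in classes
            if sd > 0 and (n - mu) / sd > z_cut and coh < cohesion_cut]

classes = [
    ("Parser", 120, 0.7), ("Lexer", 90, 0.8), ("Ast", 110, 0.6),
    ("Util", 100, 0.5), ("GodObject", 900, 0.1),
]
print(large_class_smells(classes))  # ['GodObject']
```

Using the program's own length distribution, rather than a fixed line count, is what lets the threshold adapt across projects.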
A maintainability enhancement procedure (ijseajournal)
In the mobile communications age the environment changes rapidly, and requirements change is a challenge every software project must face. If the impact of requirements change can be overcome, software development risk can be effectively decreased. In order to reduce the risk of software requirements change, this paper investigates the major software development models and recommends software development that is adaptable to requirements change. Agile development applies the Iterative and Incremental Development (IID) approach and focuses on workable software and client communication; in software development, it is a very suitable approach to handling requirements change. However, agile development maintenance has many defects, including development document control, user story inspection, and the CM system, and these maintenance defects should be improved. By analysing and collecting the critical quality factors of agile development maintainability, this paper proposes the Agile Development Maintainability Measurement (ADMM) model. Based on the ADMM model, the Agile Development Maintainability Enhancement (ADME) procedure can be defined and deployed to reduce the risk of requirements change.
A novel approach for clone group mapping (ijseajournal)
Clone group mapping has very important significance in the evolution of code clones. Topic modeling techniques are applied to code clones for the first time, and a new clone group mapping method is proposed. The method is effective not only for Type-1 and Type-2 clones but also for Type-3 clones. By making full use of source text and structure information, topic modeling transforms the mapping problem from a high-dimensional code space into a low-dimensional topic space, so the goal of clone group mapping is reached indirectly by mapping clone group topics. Experiments on four open source software systems show that recall and precision are up to 0.99; thus the method can effectively and accurately achieve clone group mapping.
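The mapping idea can be sketched with a crude stand-in for topic modeling: represent each clone group by its token distribution and map groups across versions by highest cosine similarity. A real implementation would use a learned topic model (e.g. LDA) in place of the raw token counts used here.

```python
from collections import Counter
from math import sqrt

def vec(snippets):
    """Token-count vector for a clone group (stand-in for a topic mixture)."""
    return Counter(tok for s in snippets for tok in s.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_groups(old, new):
    """For each old clone group, pick the most similar new group."""
    return {o: max(new, key=lambda n: cosine(vec(old[o]), vec(new[n])))
            for o in old}

old = {"g1": ["int sum = 0 for i in items sum += i"],
       "g2": ["open file read lines close file"]}
new = {"h1": ["open file read all lines then close file"],
       "h2": ["int total = 0 for x in items total += x"]}
print(map_groups(old, new))  # {'g1': 'h2', 'g2': 'h1'}
```

Comparing groups in a low-dimensional representation, rather than token-by-token, is what makes the approach tolerant of the edits that characterize Type-3 clones.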
MULTIVIEW SOA: EXTENDING SOA USING A PRIVATE CLOUD COMPUTING AS SAAS AND DAAS (ijseajournal)
This work covers two major areas: the Multiview Service Oriented Architecture (MV-SOA) and the combination of cloud computing with MV-SOA. It first suggests extending the service-oriented architecture (SOA) into an architecture called MV-SOA by adding two components: the Multiview service generator, whose role is to transform a classic service into a Multiview service, and a database that stores all consumer service information. It then suggests combining cloud computing with MV-SOA; to reach this combination, the MV-SOA architecture was taken and a private cloud in SaaS and DaaS was added on the client side.
Integrating profiling into MDE compilers (ijseajournal)
Scientific computation requires ever more performance from its algorithms, and new massively parallel architectures suit these algorithms well: they are known for offering high performance and power efficiency. Unfortunately, since parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance from the regenerated code. Hence, this work keeps coherence between model and code while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
Requirement analysis method of e commerce websites development for small-medi... (ijseajournal)
Along with the growth of the Internet, the trend shows that e-commerce has been growing significantly in the last several years. This means business opportunities for small-medium enterprises (SMEs), which are recognized as the backbone of the economy. SMEs may develop and run small to medium e-commerce websites as solutions to specific business opportunities; certainly, such websites should be developed in a way that supports business success. In developing the websites, the key elements of the e-commerce business model that are necessary to ensure success should be resolved at the requirement stage of development. In this paper, we propose an enhancement of the requirement analysis methods found in the literature such that it includes activities to resolve these key elements. The method has been applied in three case studies based on Indonesian situations, and we conclude that it is suitable for adoption by SMEs.
ESTIMATING THE EFFORT OF MOBILE APPLICATION DEVELOPMENT (csandit)
The rise of the use of mobile technologies in the world, such as smartphones and tablets,
connected to mobile networks is changing old habits and creating new ways for the society to
access information and interact with computer systems. Thus, traditional information systems
are undergoing a process of adaptation to this new computing context. However, it is important
to note that the characteristics of this new context are different. There are new features and,
thereafter, new possibilities, as well as restrictions that did not exist before. Finally, the systems
developed for this environment have different requirements and characteristics than the
traditional information systems. For this reason, there is the need to reassess the current
knowledge about the processes of planning and building for the development of systems in this
new environment. One area in particular that demands such adaptation is software estimation.
The estimation processes, in general, are based on characteristics of the systems, trying to
quantify the complexity of implementing them. Hence, the main objective of this paper is to
present a proposal for an estimation model for mobile applications, as well as discuss the
applicability of traditional estimation models for the purpose of developing systems in the
context of mobile computing. Hence, the main objective of this paper is to present an effort
estimation model for mobile applications.
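As a purely hypothetical illustration of what a characteristic-based estimate can look like (the feature names, weights, and hours-per-point factor below are invented, not the paper's model), complexity-weighted feature counts can be scaled into person-hours:

```python
# Hypothetical characteristic-based estimate: each counted feature of the
# mobile app gets a complexity weight, and the weighted sum is scaled into
# person-hours. Weights and the 1.2 hours-per-point factor are invented.
WEIGHTS = {"screen": 3, "api_call": 5, "sensor": 8, "offline_sync": 13}

def estimate_hours(counts, hours_per_point=1.2):
    points = sum(WEIGHTS[kind] * n for kind, n in counts.items())
    return points * hours_per_point

# 6 screens, 4 API integrations, 1 sensor feature -> 46 points
effort = estimate_hours({"screen": 6, "api_call": 4, "sensor": 1})
```

Real models calibrate such weights against historical project data rather than fixing them a priori.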
LEAN THINKING IN SOFTWARE ENGINEERING: A SYSTEMATIC REVIEW (ijseajournal)
The field of Software Engineering has undergone considerable transformation in recent decades under the influence of the philosophy of Lean Thinking. The purpose of this systematic review is to identify practices and approaches proposed in the last 5 years by researchers who have worked under the influence of this thinking. The search strategy brought together 549 studies, 80 of which were classified as relevant for synthesis in this review. Seventeen tools of Lean Thinking adapted to Software Engineering were catalogued, as well as 35 practices created for the development of software under the influence of this philosophy. The study provides a roadmap of results with the current state of the art and identifies gaps pointing to opportunities for further research.
BEHAVIOR-BASED SECURITY FOR MOBILE DEVICES USING MACHINE LEARNING TECHNIQUES (ijaia)
The goal of this research project is to design and implement a mobile application and machine learning techniques to solve problems related to the security of mobile devices. We introduce in this paper a behavior-based approach that can be applied in a mobile environment to capture and learn the behavior of
mobile users. The proposed system was tested using Android OS and the initial experimental results show that the proposed technique is promising, and it can be used effectively to solve the problem of anomaly detection in mobile devices.
The document discusses using data mining techniques to analyze crime data and predict crime trends. It describes collecting crime reports from various sources to create a database. Machine learning algorithms would then be applied to the crime data to discover patterns and relationships between different crimes. This analysis could help police identify crime hotspots and determine if a crime was committed in a known location. The proposed system aims to forecast crimes and trends based on past crime data, date and location to help prevent crimes. It discusses implementing the system using Python and testing it with sample input data.
David Vernon software_engineering_notes (mitthudwivedi)
This document provides an overview of the Software Engineering 2 course, including its aims, objectives, course contents, and recommended textbooks. The course aims to provide knowledge of techniques for estimating, designing, building, and ensuring quality in software projects. The objectives cover understanding software metrics, estimating project costs and schedules, quality assurance attributes and standards, and software analysis and design techniques. The course content includes topics like software metrics, estimation models, quality assurance, and object-oriented analysis and design. The document also summarizes several software engineering process models and risk management approaches.
The document summarizes Tamara Lopez's PhD research proposal on reasoning about flaws in software design. The research aims to analyze software failures by taking a situational approach between the broad scope of systemic analyses and narrow focus of means analyses. It will apply qualitative methods to examine how failures manifest and are addressed in software development. The goal is to better understand why some software fails and other succeeds.
Software plays a critical role in businesses, governments, and societies. Improving the performance and quality of software are important goals of software engineering. Mining software data has recently emerged as a promising means to meet these goals, due to two main trends: the increasing abundance of such data and its demonstrated helpfulness in solving numerous real-world problems. Poor performance costs the software industry millions of dollars annually in the form of lost revenue, hardware costs, damaged customer relations, and decreased productivity. Performance analysis and evaluation through data mining techniques will yield performance improvement suggestions for software developers.
This document summarizes a technical report on an empirical study of design pattern evolution in 39 open source Java projects. The study analyzed a total of 428 software releases from the projects to identify 10 common design patterns. A total of 27,855 instances of the patterns were found. The data collected includes the number of each pattern identified in each project release. This dataset will be further analyzed to understand how design patterns evolve over the lifetime of a software system.
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis... (IJMIT JOURNAL)
This study aims to develop a model of user acceptance for implementing the information security standard (i.e., ISO 27001) in Turkish public organizations. The results of the surveys performed in Turkey reveal that the legislation on information security which public organizations have to obey is significantly related to user acceptance during the ISO 27001 implementation process. The fundamental components of our user acceptance model are perceived usefulness, attitude towards use, social norms, and performance expectancy.
The proposed methodology allows a comprehensive assessment of the various types of value generated by a PSI e-infrastructure for each stakeholder group, and also the interconnections among them.
Extract Business Process Performance using Data Mining (IJERA Editor)
This paper aimed to analyze the performance of a business process using process mining. Performance is very important, especially in large systems. The process of repairing devices was used as a case study, and the Fuzzy Miner algorithm was used to analyze the performance of the process model.
Ijartes v2-i1-001 Evaluation of Changeability Indicator in Component Based Sof... (IJARTES)
Maintaining a software system is a major cost concern, and the cost of maintaining a software system depends on how changes are made to it. The maintainability of a system depends on the flow of the software, its design pattern, and CBSS. The maintainability phase of a software system has four parts: analyzing, testing, stability, and the changes made to the system. In some side areas, these systems emerged very rapidly. There are many companies which purchase software instead of developing it. These companies have no interest in testing the system, but they do want smoothness in the flow of the system during changes.
Changeability is one of the characteristics of maintainability. Software changeability is associated with refactoring, which makes code simpler and easier to maintain (enabling all programmers to improve their code). Factors that affect changeability include coupling between modules, lack of code comments, and the naming of functions and variables. Basically, "changeability" is the ability of a product or software to change the structure of the program; it is the rate at which the product allows modification of its components.
In this paper, changeability-based cost estimation is done. Initially we take four components; these components are evaluated based on coupling, cohesion, and interface metrics. Next, some changes are made to the existing components, and then these components are evaluated again. On the basis of these two evaluations, a conclusion is drawn about changeability cost.
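The before/after evaluation described above can be sketched as follows; the metric weights and component values are invented for illustration and are not the paper's formula:

```python
# Invented weighting: lower coupling and higher cohesion/interface quality
# score higher. The delta between pre- and post-change scores serves as a
# simple changeability-cost indicator.
def score(component):
    return (0.4 * (1.0 - component["coupling"])
            + 0.4 * component["cohesion"]
            + 0.2 * component["interface"])

before = {"coupling": 0.6, "cohesion": 0.7, "interface": 0.8}
after = {"coupling": 0.7, "cohesion": 0.6, "interface": 0.8}  # change raised coupling
changeability_cost = score(before) - score(after)
```

A positive delta here means the change degraded the component's evaluated quality, i.e., the modification was costly in changeability terms.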
Flexibility: a key factor to testability (ijseajournal)
Testability is an important software quality factor that is ineffective if it is not addressed at an early stage of the development life-cycle. It becomes even more essential in the case of object-oriented design. Flexibility is a key factor in testability analysis and measurement for delivering highly testable and maintainable software. Flexibility is a criterion of crucial significance to software developers, designers, and quality controllers. It constantly guides and supports them in avoiding wastage of resources, and it enables designers to continuously improve the development process. Flexibility is concerned with building high-quality and reliable software within the constraints of cost and time, and it greatly influences cost, quality, and reliability during the software evolution process. Despite the fact that flexibility is a vital and highly significant aspect of software development processes, it is poorly managed. This paper focuses on the need for and importance of flexibility early in the design phase. A model has been proposed for the flexibility measurement of object-oriented design by establishing a multiple linear regression. Finally, the proposed model has been validated using an experimental tryout.
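As a sketch of the regression step, a multiple linear regression can be fitted via the normal equations; the two design metrics and all sample values below are assumptions for illustration, not the paper's data:

```python
# Hypothetical illustration: fit flexibility = b0 + b1*encapsulation +
# b2*coupling by least squares. Metric names and values are invented.

def fit_linear(X, y):
    """Least-squares fit by solving (X^T X) b = X^T y with Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    n = len(rows[0])
    # Build the normal equations A b = c
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        c[col], c[pivot] = c[pivot], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    b = [0.0] * n
    for i in reversed(range(n)):                 # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

# Toy data: (encapsulation, coupling) -> observed flexibility
X = [(0.8, 0.2), (0.6, 0.4), (0.9, 0.3), (0.5, 0.5), (0.7, 0.1)]
y = [0.76, 0.52, 0.82, 0.40, 0.70]
b0, b1, b2 = fit_linear(X, y)
predicted = b0 + b1 * 0.8 + b2 * 0.2
```

With exactly linear toy data the fit recovers the generating coefficients; real metric data would leave residuals used to validate the model.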
Secured cloud support for global software (ijseajournal)
This document summarizes a research paper that proposes a methodology called TSPS (Theory/SWEBOK/Project Security) to improve software engineering education. The methodology aims to collaborate between academic and industrial practices. It involves students working on projects with guidance from both mentors and industry practitioners. Data from literature reviews on software security engineering education is analyzed. A cloud-based system is developed to securely store project documents by encrypting and splitting files across multiple cloud nodes. The methodology and secure cloud storage approach are concluded to provide strategies to mitigate risks in software projects and benefit both education and industry.
A model for run time software architecture adaptation (ijseajournal)
As the global demand for software systems increases and environments and systems constantly change, the adaptability of software systems is of significant importance. Because the architecture of a software system is a high-level view of the system and makes modifiability possible at an overall level, architectural adaptation can be considered an effective approach to adapting software systems by changing the architecture configuration. In this study, the architecture configuration is modified through the xADL language, a software architecture description language with high flexibility. Software architecture reconfiguration is done based on the existing rules of a rule-based system, which are written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video conferencing system, and a students' grading system. The proposed model can be used with all types of architecture, including Client-Server Architecture, Service-Oriented Architecture, etc.
FAST TRANSIENT RESPONSE LOW DROPOUT VOLTAGE REGULATOR (ijseajournal)
This paper presents the design of a Low Drop-Out (LDO) voltage regulator with fast transient response and low quiescent current in a PMOS-type operational amplifier. We use a band-gap reference to eliminate the temperature dependence. The proposed LDO voltage regulator is implemented in 0.18-μm CMOS technology; we use folded-cascode CMOS amplifiers for high performance in stability and a fast transient response with fast settling: the LDO itself should provide the regulated output voltage at t equal to 2 ps with a transient voltage variation of less than 170 mV. Regarding high accuracy in the DC response, the simulation results show that the accuracy of the regulated output voltage is 1.54±0.009 V, with a power consumption of 1.51 mW.
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALE (ijseajournal)
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time-consuming for software engineers despite proposals on bad smell detection and refactoring tools. Large Class is a kind of bad smell caused by large scale, and its detection is hard to achieve automatically. In this paper, a Large Class bad smell detection approach based on a class length distribution model and cohesion metrics is proposed. In programs, the lengths of classes conform to certain distributions. The class length distribution model is generalized to detect programs after grouping. Meanwhile, cohesion metrics are analyzed for bad smell detection. Bad smell detection experiments on open source programs show that the Large Class bad smell can be detected effectively and accurately with this approach, and a refactoring scheme can be proposed for design quality improvements of programs.
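A minimal stand-in for the detection idea is to flag classes whose length lies far above the fitted length distribution; the threshold rule (mean + 1.5 standard deviations) and the class names below are assumptions, far simpler than the paper's distribution model:

```python
# Simplified Large Class detector: flag classes whose length (LOC) exceeds
# mean + k standard deviations of the observed length distribution.
# The k = 1.5 cutoff and the sample classes are invented for illustration.
from statistics import mean, stdev

def flag_large_classes(class_lengths, k=1.5):
    """Return names of classes whose LOC exceeds mean + k standard deviations."""
    locs = list(class_lengths.values())
    cutoff = mean(locs) + k * stdev(locs)
    return sorted(name for name, loc in class_lengths.items() if loc > cutoff)

lengths = {"Parser": 120, "Lexer": 95, "Config": 80, "GodClass": 900, "Util": 110}
suspects = flag_large_classes(lengths)
```

The paper combines such a length model with cohesion metrics; a pure length cutoff, as here, would miss large-but-cohesive classes.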
A maintainability enhancement procedure (ijseajournal)
In the mobile communications age, the environment changes rapidly, and requirements change is a challenge that software projects must face. By overcoming the impact of requirements change, software development risk can be effectively decreased. In order to reduce the risk of software requirements change, this paper investigates the major software development models and recommends adaptable requirements-change software development. Agile development applies the Iterative and Incremental Development (IID) approach and focuses on workable software and client communication. In software development, agile development is a very suitable approach for handling requirements change. However, agile development maintenance has many defects, including development document control, user story inspection, and the CM system. These maintenance defects of agile development should be improved. Analysing and collecting the critical quality factors of agile development maintainability, this paper proposes the Agile Development Maintainability Measurement (ADMM) model. Based on the ADMM model, the Agile Development Maintainability Enhancement (ADME) procedure can be defined and deployed to reduce the risk of requirements change.
A novel approach for clone group mapping (ijseajournal)
Clone group mapping has great significance in the evolution of code clones. Topic modeling techniques are applied to code clones for the first time, and a new clone group mapping method is proposed. The method is effective not only for Type-1 and Type-2 clones but also for Type-3 clones. By making full use of the source text and structure information, topic modeling techniques transform the mapping problem in a high-dimensional code space into one in a low-dimensional topic space, and the goal of clone group mapping is reached indirectly by mapping clone group topics. Experiments on four open source software systems show that the recall and precision are up to 0.99; thus the method can effectively and accurately achieve clone group mapping.
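The mapping idea can be sketched with plain term-frequency vectors standing in for learned topic vectors: each clone group becomes a vector, and groups are mapped across versions by highest cosine similarity. The clone-group token lists below are invented for the sketch.

```python
# Simplified clone group mapping: represent each group by a term-frequency
# vector (a stand-in for LDA topic distributions) and map each old group to
# the most similar new group by cosine similarity.
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def map_groups(old_groups, new_groups):
    """Map each old clone group to the most similar new group."""
    mapping = {}
    for old_id, old_tokens in old_groups.items():
        ov = Counter(old_tokens)
        best = max(new_groups, key=lambda nid: cosine(ov, Counter(new_groups[nid])))
        mapping[old_id] = best
    return mapping

old = {"g1": ["open", "read", "close", "file"], "g2": ["socket", "send", "recv"]}
new = {"h1": ["socket", "connect", "send", "recv"], "h2": ["open", "read", "file", "buffer"]}
mapping = map_groups(old, new)
```

Learned topic vectors make this robust to Type-3 clones, where token overlap alone is weaker.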
MULTIVIEW SOA: EXTENDING SOA USING A PRIVATE CLOUD COMPUTING AS SAAS AND DAAS (ijseajournal)
This work is based on two major areas: the Multiview Service Oriented Architecture and the combination of cloud computing and MV-SOA. Thus, it is suggested first to extend the service oriented architecture (SOA) into an architecture called MV-SOA by adding two components: the Multiview service generator, whose role is to transform the classic service into a Multiview service, and the database, which seeks to store all consumer service information. It is also suggested to combine cloud computing and the Multiview Service Oriented Architecture (MV-SOA). To reach such a combination, the MV-SOA architecture was taken and a private cloud in SaaS and DaaS was added on the client side.
Integrating profiling into MDE compilers (ijseajournal)
Scientific computation requires more and more performance from its algorithms. New massively parallel architectures suit these algorithms well; they are known for offering high performance and power efficiency. Unfortunately, as parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches intend to provide a low learning curve for parallel programming and take advantage of architecture features to create optimized applications, programming remains difficult for neophytes. This work aims at improving performance by returning to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. In order to keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored, aiming at better performance in the regenerated code. Hence, this work keeps coherence between model and code without forgetting to harness the power of parallel architectures. To illustrate and clarify key points of this approach, we provide an experimental example in the GPU context. The example uses a transformation chain from UML-MARTE models to OpenCL code.
Requirement analysis method of e-commerce websites development for small-medi... (ijseajournal)
Along with the growth of the Internet, the trend shows that e-commerce has been growing significantly over the last several years. This means business opportunities for small and medium enterprises (SMEs), which are recognized as the backbone of the economy. SMEs may develop and run small to medium e-commerce websites as solutions to specific business opportunities. Certainly, the websites should be developed accordingly to support business success. In developing the websites, the key elements of the e-commerce business model that are necessary to ensure success should be resolved at the requirement stage of development. In this paper, we propose an enhancement of the requirement analysis methods found in the literature such that it includes activities to resolve these key elements. The method has been applied in three case studies based on the Indonesian situation, and we conclude that it is suitable for adoption by SMEs.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The International Journal of Computational Engineering Research (IJCER) is an international online journal in English, published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust... (IOSR Journals)
This document describes a study that uses fuzzy clustering and software metrics to predict faults in large industrial software systems. The study uses fuzzy c-means clustering to group software components into faulty and fault-free clusters based on various software metrics. The study applies this method to the open-source JEdit software project, calculating metrics for 274 classes and identifying faults using repository data. The results show 88.49% accuracy in predicting faulty classes, demonstrating that fuzzy clustering can be an effective technique for fault prediction in large software systems.
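A minimal one-dimensional fuzzy c-means (m = 2, two clusters) conveys the clustering step; the metric values below are invented, whereas the study uses many class-level metrics per component:

```python
# Simplified fuzzy c-means on one metric per class. Each point gets a degree
# of membership in each cluster; the high-metric cluster stands in for the
# "faulty" group. Values and the two-cluster setup are invented.

def fuzzy_cmeans(xs, centers, m=2.0, iters=50):
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in xs:
            d = [abs(x - c) for c in centers]
            if min(d) == 0.0:                    # point sits exactly on a center
                U.append([1.0 if di == 0.0 else 0.0 for di in d])
                continue
            U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # Center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[i] ** m * x for u, x in zip(U, xs)) /
                   sum(u[i] ** m for u in U) for i in range(len(centers))]
    return centers, U

metrics = [2.0, 2.5, 3.0, 9.5, 10.0, 11.0]       # e.g., a complexity metric per class
centers, U = fuzzy_cmeans(metrics, centers=[min(metrics), max(metrics)])
faulty = [x for x, u in zip(metrics, U) if u[1] > 0.5]   # high-metric cluster
```

Unlike hard clustering, the membership grades let borderline classes be reviewed rather than forced into one group.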
The document proposes a Requirement Opinions Mining Method (ROM) to mine user requirements from software review data. It first defines requirement opinions, functional requirement opinions, and non-functional requirement opinions. It then uses deep learning models to classify reviews into functional and non-functional categories. Functional reviews are further classified into three categories and sequence labeling is used to identify functional requirements. Non-functional reviews are clustered using K-means clustering with word vectors. Finally, specific requirements are extracted from the clusters using TF-IDF and syntactic analysis to realize requirement opinion mining from software review data. A case study is conducted on reviews from a Chinese mobile application platform.
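The TF-IDF extraction step can be sketched over toy, pre-clustered reviews; the clusters, review texts, and scoring details below are assumptions, and the paper additionally uses deep-learning classifiers and K-means over word vectors upstream:

```python
# Toy extraction step: score the terms of each review cluster with TF-IDF
# (treating each cluster as one document) and surface the top-scoring term
# as the candidate requirement keyword. All review texts are invented.
from collections import Counter
from math import log

clusters = {
    "c1": ["app crashes on login", "crashes after login screen", "it crashes often"],
    "c2": ["dark mode please", "add dark mode", "love the dark theme"],
}

def top_term(cluster_id):
    docs = {cid: [w for text in texts for w in text.split()]
            for cid, texts in clusters.items()}
    tf = Counter(docs[cluster_id])
    n_docs = len(docs)
    def idf(term):
        df = sum(1 for words in docs.values() if term in words)
        return log(n_docs / df) + 1.0            # smoothed idf
    return max(tf, key=lambda t: tf[t] * idf(t))
```

A real pipeline would then attach syntactic analysis to turn keywords like these into full requirement statements.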
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI... (ijaia)
Predicting the time to build software is a very complex task for software engineering managers. There are complex factors that can directly interfere with the productivity of the development team, and factors directly related to the complexity of the system to be developed drastically change the time necessary for the completion of the work at software factories. This work proposes the use of a hybrid system based on artificial neural networks and fuzzy systems to assist in the construction of a rule-based expert system supporting the prediction of the hours destined for software development, according to the complexity of the elements present in the system. The set of fuzzy rules obtained by the system helps the management and control of software development by providing a base of interpretable estimates based on fuzzy rules. The model was tested on a real database, and its results were promising for the construction of an aid mechanism for the predictability of software construction.
GENETIC-FUZZY PROCESS METRIC MEASUREMENT SYSTEM FOR AN OPERATING SYSTEM (ijcseit)
This document presents a genetic-fuzzy system for measuring the performance of an operating system's processes. It develops a model using 7 key operating system process parameters and fuzzy logic to handle imprecision. A genetic algorithm is used to optimize the generated membership functions. Rules are created relating parameter combinations to performance classifications. The system was tested on sample data and the genetic algorithm was able to optimize the membership functions over 4 generations to best classify performance. The system brings an optimal and precise approach to measuring operating system process performance by combining genetic algorithms and fuzzy logic.
GENETIC-FUZZY PROCESS METRIC MEASUREMENT SYSTEM FOR AN OPERATING SYSTEM (ijcseit)
The operating system (OS) is the most essential software of the computer system; deprived of it, the computer system is totally useless. It is the frontier for assessing relevant computer resources, and its performance greatly enhances the user's overall objectives across the system. The related literature has tried different methods and techniques to measure the process metric performance of the operating system, but none has incorporated the use of genetic algorithms and fuzzy logic in their varied techniques, which indeed is a novel approach. Extending the work of Michalis, this research focuses on measuring the process metric performance of an operating system utilizing a set of operating system criteria while fusing fuzzy logic to handle imprecision and genetic algorithms for process optimization.
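The genetic-fuzzy combination can be sketched as a tiny genetic algorithm tuning the center and width of a single Gaussian membership function; the target function, population sizes, and mutation scheme are assumptions, far simpler than the paper's 7-parameter system:

```python
# Toy genetic algorithm evolving (center, width) of one Gaussian membership
# function toward target membership grades. Elitist selection with Gaussian
# mutation; all numbers are invented for the sketch.
import math
import random

random.seed(0)                                   # deterministic run

def mf(x, c, s):
    return math.exp(-((x - c) / s) ** 2)         # Gaussian membership function

# Target grades produced by an "ideal" MF with center 5 and width 2
xs = [i / 2 for i in range(21)]
target = [mf(x, 5.0, 2.0) for x in xs]

def error(ind):
    c, s = ind
    return sum((mf(x, c, s) - t) ** 2 for x, t in zip(xs, target))

pop = [(random.uniform(0, 10), random.uniform(0.5, 4)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=error)
    survivors = pop[:10]                         # elitist selection
    children = [(c + random.gauss(0, 0.3), max(0.1, s + random.gauss(0, 0.3)))
                for c, s in survivors for _ in range(2)]
    pop = survivors + children

best_c, best_s = min(pop, key=error)
```

Elitism guarantees the best fit never degrades between generations, which keeps this tiny search stable.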
An interactive approach to requirements prioritization using quality factors (ijfcstjournal)
As the prevalence of software increases, so do the complexity and the number of requirements associated with the software project. This presents a dilemma for the developers: to clearly identify and prioritize the most important requirements in order to deliver the project in a given amount of resources and time. A number of prioritization methods have been proposed which provide consistent results, but they are very difficult and complex to implement in practical scenarios, and they lack a proper structure to analyze the requirements properly. In this study, the users can provide their requirements in two forms: text-based story form and use case form. Moreover, the existing prioritization techniques have very little or no interaction with the users. So, in this paper an attempt has been made to make the prioritization process user-interactive by adding a second level of prioritization: after the developer has properly analyzed and ranked the requirements on the basis of quality attributes in the first level, the developer takes the opinion of distinct users about the requirements priority sequence. The developer then calculates the disagreement value associated with each user sequence in order to find the final priority sequence.
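One simple way to realize a disagreement value between priority sequences is the Spearman footrule (sum of absolute rank differences); the paper's exact formula is not reproduced here, and the requirement IDs are invented:

```python
# Disagreement between two priority sequences, measured as the Spearman
# footrule: sum over requirements of |rank in A - rank in B|.
# A stand-in for the paper's disagreement value; requirement IDs are invented.

def disagreement(seq_a, seq_b):
    rank_b = {req: i for i, req in enumerate(seq_b)}
    return sum(abs(i - rank_b[req]) for i, req in enumerate(seq_a))

developer = ["R1", "R2", "R3", "R4"]
users = [
    ["R1", "R3", "R2", "R4"],   # minor swap
    ["R4", "R3", "R2", "R1"],   # full reversal
    ["R1", "R2", "R3", "R4"],   # full agreement
]
scores = [disagreement(developer, u) for u in users]
```

A score of 0 means a user fully agrees with the developer's ranking; larger values flag sequences whose owners should be consulted before fixing the final priority order.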
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... (IJCSES Journal)
With the sharp rise in software dependability and failure cost, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software that has no serious faults. In this regard, object-oriented (OO) products, the de facto standard of software development with their unique features, can have faults that are hard to find or whose change impacts are hard to pinpoint. The earlier faults are identified, found, and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is which metrics are related to class FP and what activities are performed. Therefore, this study brings together the state-of-the-art in prediction of FP that utilizes CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles. The results obtained are analysed and presented. They indicate that 29 relevant empirical studies exist and that measures such as complexity, coupling, and size are strongly related to FP.
This document summarizes a research paper on developing a feature-based product recommendation system. It begins by introducing recommender systems and their importance for e-commerce. It then describes how the proposed system takes basic product descriptions as input, recognizes features using association rule mining and k-nearest neighbor algorithms, and outputs recommended additional features to improve the product profile. The paper evaluates the system's performance on recommending antivirus software features.
In the present paper, the applicability and capability of A.I. techniques for effort estimation prediction have been investigated. It is seen that neuro-fuzzy models are very robust, characterized by fast computation, and capable of handling distorted data. Given the presence of non-linearity in the data, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed using the MATLAB simulation environment. The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using the hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed using the OHLANFIS technique performs well compared with the normal ANFIS model.
LINKING SOFTWARE DEVELOPMENT PHASE AND PRODUCT ATTRIBUTES WITH USER EVALUATIO... (csandit)
This paper presents an evaluation methodology to reveal the relationships between the
attributes of software products, practices applied during the development phase and the user
evaluation of the products. For the case study, the games sector has been chosen due to easy
access to the user evaluation of this type of software products. Product attributes and practices
applied during the development phase have been collected from the developers via
questionnaires. User evaluation results were collected from a group of independent evaluators.
Two bipartite networks were created using the gathered data. The first network maps software
products to the practices applied during the development phase and the second network maps
the products to the product attributes. According to the links, similarities were determined and
subgroups of products were obtained according to selected development-phase practices. In this
way, the effect of the development phase on the user evaluation has been investigated.
Human safety in the Middle East is a crucial aspect especially when working on critical mission systems. Any trivial error may result in inevitable dangerous causalities that lead to loss of innocent souls. The main objective of this paper is to introduce a complete study of a system that automates the currently adopted manual process of having dedicated men to control the barriers at the railway crossings when trains pass, the main objective is to reduce the possible human errors resulting from manual control. This study aims to provide a robust solution that adheres to a formal, systematic and new procedure to enhance the overall quality of requirements gathered for critical systems. In addition, it reflects how effective is the usage of goal oriented modelling in requirements elicitation stage for critical systems to define a clear scope and validate requirements against any missing, inconsistent or vague requirements at early stage.
The document describes a new methodology for eliciting software requirements for smart handheld devices. The methodology is focused on users' work processes. It involves observing users' activities, identifying stakeholders, discussing requirements with stakeholders, finding inspiration from other software, and brainstorming user needs and goals. As an example, the methodology is applied to develop an iPad-based software for improving the learning performance of playgroup students.
HW/SW Partitioning Approach on Reconfigurable Multimedia System on ChipCSCJournals
Due to the complexity and the high performance requirement of multimedia applications, the design of embedded systems is the subject of different types of design constraints such as execution time, time to market, energy consumption, etc. Some approaches of joint software/hardware design (Co-design) were proposed in order to help the designer to seek an adequacy between applications and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on reconfigurable multimedia system on chip, based on dynamic and static steps. The first one uses the dynamic profiling and the second one uses the design trotter tools. The validation of our approach is made through 3D image synthesis.
Review on Algorithmic and Non Algorithmic Software Cost Estimation Techniquesijtsrd
Effective software cost estimation is the most challenging and important activities in software development. Developers want a simple and accurate method of efforts estimation. Estimation of the cost before starting of work is a prediction and prediction always not accurate. Software effort estimation is a very critical task in the software engineering and to control quality and efficiency a suitable estimation technique is crucial. This paper gives a review of various available software effort estimation methods, mainly focus on the algorithmic model and non algorithmic model. These existing methods for software cost estimation are illustrated and their aspect will be discussed. No single technique is best for all situations, and thus a careful comparison of the results of several approaches is most likely to produce realistic estimation. This paper provides a detailed overview of existing software cost estimation models and techniques. This paper presents the strength and weakness of various cost estimation methods. This paper focuses on some of the relevant reasons that cause inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26511.pdfPaper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.4, July 2014
DOI: 10.5121/ijsea.2014.5405
MEFFORTMOB: AN EFFORT SIZE MEASUREMENT
FOR MOBILE APPLICATION DEVELOPMENT
Laudson Silva de Souza1 and Gibeon Soares de Aquino Jr.1
1 Department of Informatics and Applied Mathematics,
Federal University of Rio Grande do Norte, Natal, Brazil
ABSTRACT
The rise of mobile technologies such as smartphones and tablets connected to mobile networks is
changing old habits and creating new ways for society to access information and interact with
computer systems. Traditional information systems are therefore undergoing a process of adaptation
to this new computing context. It is important to note, however, that the characteristics of this new
context are different: there are new features and, consequently, new possibilities, as well as
restrictions that did not exist before. As a result, the systems developed for this environment have
different requirements and characteristics than traditional information systems, and the current
knowledge about the processes of planning and building such systems needs to be reassessed. One
area in particular that demands this adaptation is software estimation, since estimation processes are
generally based on characteristics of the systems, attempting to quantify the complexity of
implementing them. Hence, the main objective of this paper is to present an effort estimation model
for mobile applications, as well as to discuss the applicability of traditional estimation models to the
development of systems in the context of mobile computing.
KEYWORDS
Software Engineering, Software Quality, Estimating Software, Systematic Review, Mobile Applications,
Mobile Computing
1. INTRODUCTION
The ITU (International Telecommunication Union) estimates that there are more than 6 (six)
billion mobile subscriptions worldwide. According to Gartner, 1.75 billion people own mobile
phones with advanced capabilities, and further growth in the use of this technology is foreseen for
the upcoming years. There is a global trend towards an increasing number of users connected to the
network via mobile devices, which will consequently create a growing demand for information,
applications and content for such equipment. New ways to use existing information systems are
emerging. In particular, systems that were once accessed via web interfaces on personal computers
physically located in offices, universities or homes are gaining new forms of access from mobile
devices which, in turn, have different requirements and capabilities than personal computers.
Thus, we realize that traditional information systems are undergoing a process of adaptation to
this new computing context. Current developments, including the increased computational power
of these new devices, together with the integration of multiple devices into a single one and the
change in users' behavior, effectively create a new environment for the development of computing
solutions. However, it is important to note that the characteristics of this new context are different.
They present new resources and, consequently, new possibilities, as well as introduce restrictions
that did not exist in conventional systems.
The fact is that this emerging technological scenario, with its new requirements and restrictions,
requires a reevaluation of current knowledge about the processes of planning and building software
systems. These new systems have different characteristics, and one area in particular that demands
such adaptation is software estimation. Estimation processes, in general, are based on
characteristics of the systems, trying to quantify the complexity of implementing them. For this
reason, it is important to analyze the methods currently proposed for software project estimation
and evaluate their applicability to this new context of mobile computing.
Hence, the main objective of this paper is to present a proposal for an estimation model for
mobile applications, as well as to discuss the applicability of traditional information-system
estimation models to the development of systems in the context of mobile computing. In this work,
the main existing estimation methods are analyzed, the specific characteristics of mobile systems
are identified, and an adaptation of an existing estimation method to this context is proposed.
2. ESTIMATION METHODS
In order to identify how the traditional estimation methods could address the characteristics of the
systems, a literature review on the main estimation methods was performed. The methods
identified in the survey can be seen in Table 1.
Table 1. Main Estimation Methods.

Year | Method | Author
1979 | Function Point Analysis (FPA) | Albrecht [11]
1981 | COnstructive COst MOdel (COCOMO) | Barry W. Boehm [12]
1982 | DeMarco's Bang Metrics | Tom DeMarco [13]
1986 | Feature Points | Jones [14]
1988 | Mark II FPA | Charles Symons [14]
1989 | Data Points | Harry Sneed [15]
1990 | Netherlands Software Metrics Users Association (NESMA) FPA | The Netherlands Software Metrics Users Association [16]
1990 | Analytical Software Size Estimation Technique - Real-Time (ASSET-R) | Reifer [17]
1992 | 3-D Function Points | Whitmire [18]
1993 | Use Case Points (UCP) | Gustav Karner [19]
1994 | Object Points | Banker et al. [20]
1994 | Function Points by Matson, Barret and Mellichamp | Matson, Barret and Mellichamp [21]
1997 | Full Function Points (FFP) | University of Quebec in cooperation with the Software Engineering Laboratory in Applied Metrics [18]
1997 | Early FPA (EFPA) | Meli, Conte et al. [22]
1998 | Object Oriented Function Points (OOFPs) | Caldiera et al. [23]
1999 | Predictive Object Points (POPs) | Teologlou [24]
1999 | Common Software Measurement International Consortium (COSMIC) FFP | Common Software Measurement International Consortium (COSMIC) [25]
2000 | Early & Quick COSMIC-Full Function Points (E&Q COSMIC FFP) | Meli et al. [26]
2000 | Kammelar's Component Object Points | Kammelar [27]
2001 | Object Oriented Method Function Points (OOmFP) | Pastor and colleagues [28]
2004 | Finnish Software Metrics Association FSM | The Finnish Software Metrics Association (FiSMA) [29]
Table 1 displays the main estimation methods in chronological order, showing the year of
creation, the name of the method and its author. At first glance, one realizes that the main
existing methods were not designed to consider the requirements of mobile applications. Indeed,
the creation of most of them precedes the emergence of mobile devices as we know them today.
This suggests that using these methods to estimate the effort of developing systems or
applications for mobile devices would likely fail to quantify the complexity of some features
and, therefore, would not produce adequate estimates.
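Most of the methods in Table 1 estimate effort from size alone. As a minimal illustration of this style of model, the sketch below implements the basic COCOMO formula E = a * KLOC^b with the standard coefficients published by Boehm; it is illustrative only and not part of this paper's proposal. Note that none of its inputs capture mobile-specific drivers such as limited energy, small screens or intermittent connectivity.

```python
# Illustrative sketch of basic COCOMO (Boehm, 1981): effort is predicted
# from size (KLOC) alone, using mode-dependent coefficients (a, b).

# Standard basic-COCOMO coefficients per development mode.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Return estimated effort in person-months: E = a * KLOC^b."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

# A 10 KLOC organic-mode project:
print(f"{basic_cocomo_effort(10, 'organic'):.1f} person-months")  # → 26.9 person-months
```

The model's blindness to platform characteristics is exactly the gap the paper argues for: two 10 KLOC applications, one desktop and one mobile, receive identical estimates.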
3. CHARACTERISTICS OF MOBILE APPLICATIONS
In order to identify characteristics that are inherent to mobile systems and applications, a survey
of the characteristics of these types of software was carried out through a systematic review.
Conducting a systematic review is relevant because most research begins with some kind of
literature review, and a systematic review summarizes the existing work fairly and without bias.
The surveys were therefore conducted according to a predefined search strategy, which should
allow the integrity of the research to be evaluated. The planning and execution of the
methodology were guided by Procedures for Performing Systematic Reviews.
3.1. Planning the Systematic Review
Based on the issue addressed by the proposed study, the following research question was
formulated: “What are the characteristics of Mobile Applications?”
Search Strategies - The search strategy was divided into three parts: sources, keywords and search
strings.
Sources - the searches were conducted in the following databases:
• ACM DL Digital Library (http://dl.acm.org/)
• Google Scholar (http://scholar.google.com.br/)
• IEEE Xplore Digital Library (http://ieeexplore.ieee.org/Xplore/).
Keywords - the keywords were defined based on the research question elicited previously and
on their synonyms, as follows:
• Mobile; Applications; Computing; Features; Characteristics; Attribute; Aspect; Property;
Factors; Individuality; Differential; Detail; Software; System;
Search string - based on the keywords defined previously and according to the sources to be used,
the following search string was prepared:
• “((“Mobile Applications”) OR (“Mobile Computing”) OR (“Mobile System”) OR
(“Mobile Software”)) AND (Features OR Characteristics OR Attribute OR Aspect OR
Property OR Factors OR Individuality OR Differential OR Detail)”.
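The boolean structure of this search string can be sketched as a simple predicate over a title or abstract; the function and constant names below are illustrative and not part of the review protocol.

```python
# Sketch of the review's boolean search string as a text filter:
# (any primary phrase) AND (any secondary term). Names are illustrative.

PRIMARY = ["mobile applications", "mobile computing",
           "mobile system", "mobile software"]
SECONDARY = ["features", "characteristics", "attribute", "aspect",
             "property", "factors", "individuality", "differential", "detail"]

def matches_search_string(text: str) -> bool:
    """True if text satisfies the review's boolean search string."""
    t = text.lower()
    return any(p in t for p in PRIMARY) and any(s in t for s in SECONDARY)

print(matches_search_string("Key characteristics of Mobile Applications"))  # True
print(matches_search_string("Desktop software quality"))                    # False
```

In practice each database exposes its own query syntax, which is why (as noted in Section 3.2) the string had to be adapted per source.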
The results obtained by applying the search string defined above to the three databases
mentioned were analyzed according to the following criteria:
• Inclusion Criteria:
o The returned result should be available in English or Portuguese;
o The returned result should be available in PDF or HTML format;
o The returned result should answer the research question directly;
• Exclusion Criteria:
o The returned result has already been found in previous research;
o The returned result has not been published in conferences, books, newspapers or
magazines;
o The returned result has no relation to the research question;
o The access to the result is not available through agreements with CAPES or UFRN;
o The returned result was not published between 2002 and 2013;
Procedures for the Evaluation of the Articles - the articles will be analyzed considering their
relation to the issues addressed in the research question, the inclusion criteria and the exclusion
criteria, and each will be assigned the status “Accepted” or “Rejected”. The evaluation will
follow this procedure: read the title and abstract and, should they be related to the research
question, read the whole article as well.
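The screening step above can be sketched as a small filter over article records; the record fields below are hypothetical, and the function merely mirrors the stated inclusion and exclusion criteria.

```python
# Minimal sketch (hypothetical record fields) of the review's screening:
# an article is "Accepted" only if it meets every inclusion criterion
# and triggers no exclusion criterion.

def screen(article: dict) -> str:
    included = (
        article["language"] in {"English", "Portuguese"}   # inclusion: language
        and article["format"] in {"PDF", "HTML"}           # inclusion: format
        and article["answers_question"]                    # inclusion: relevance
    )
    excluded = (
        article["duplicate"]                               # exclusion: already found
        or not article["published_venue"]                  # exclusion: not published
        or not article["accessible"]                       # exclusion: no access
        or not (2002 <= article["year"] <= 2013)           # exclusion: out of range
    )
    return "Accepted" if included and not excluded else "Rejected"

candidate = {"language": "English", "format": "PDF", "answers_question": True,
             "duplicate": False, "published_venue": True, "accessible": True,
             "year": 2011}
print(screen(candidate))  # Accepted
```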
3.2. Implementation of the Systematic Review
The implementation of the systematic review was performed almost in line with its planning,
except for the need to adjust the syntax of the proposed search string due to the particularities of
the research bases. 234 articles were analyzed, of which 40 were selected and considered
“Accepted” according to the inclusion criteria; 194 were considered “Rejected” according to the
exclusion criteria. The list with all the articles can be accessed at the following address:
http://www.laudson.com/sr-articles.pdf. The 40 articles that were accepted were fully read, thus
performing the data extraction. All the characteristics found during this extraction phase were
described in the following subsection.
3.3. Completion of Systematic Review
Given the results extracted from the systematic review, it was possible to identify 29 kinds of
characteristics across the articles evaluated and accepted in accordance with the inclusion
criteria. Some of these, however, mix characteristics of mobile devices with characteristics of
mobile applications. For example, the characteristic called “Limited Energy” is a property of the
device, not of the application; nevertheless, the articles that mention it emphasize that this
limitation must be taken into account when developing a mobile application, since all mobile
devices are powered by batteries with a limited life that depends entirely on the user's daily
usage, and applications requiring more hardware or software resources will consume more
energy. In Figure 1, the 23 types of characteristics mentioned most often in the selected articles
can be observed.
Figure 1. Characteristics mentioned most often in the accepted papers
The other six types of characteristics identified were mentioned less often: “Complex integration
of tasks in real time” and “Constant interruption of activities” are each mentioned only three
times, while “Functional area”, “Price”, “Target audience” and “Provider type” are mentioned
only once.
A description of each characteristic identified in the review follows:
• Limited energy: every mobile device is powered by battery and, because of this, it has a
certain lifetime period [Error! Reference source not found.].
• Small screen: mobile device screens are pretty small and, because of this, interface design is
limited [Error! Reference source not found.].
• Limited performance: due to its size and technological advancement all mobile devices, even
the most advanced in its class, have limitations of specific resources such as processing
power, memory and connectivity. Because of this, the performance is limited [Error!
Reference source not found.].
• Bandwidth: given an application that requires the maximum, the minimum or a reasonable
bandwidth, one must consider its enormous variation [Error! Reference source not found.].
• Change of context: the change of context occurs in accordance with the environment [Error!
Reference source not found.].
• Reduced memory: due to its size and technological advancement, all mobile devices, even the
most advanced in its class, have limitations of specific resources, including the size of its
memory [Error! Reference source not found.].
• Connectivity: the kind of connectivity that the application will use, such as 3G, bluetooth,
infrared and Wi-Fi [Error! Reference source not found.].
• Interactivity: what will be the type of input that the user will use to interact with the
application [Error! Reference source not found.].
7. International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.4, July 2014
69
• Storage: the applications have to take into consideration how it is going to be done [Error!
Reference source not found.].
• Software portability: the application should be performed on all types of operating systems
[Error! Reference source not found.].
• Hardware portability: the application should be performed on all types of devices [Error!
Reference source not found.].
• Usability: is a set of attributes which affect the effort needed for the use, in which it must be
intuitive and as natural as possible to make or receive a call or text message [Error!
Reference source not found.].
• 24/7 availability: the application must be available to access anywhere, anytime [Error!
Reference source not found.].
• Security: must prevent accidental or deliberate unauthorized access to the applications and
data [Error! Reference source not found.].
• Reliability: is a set of attributes which affect the application's ability to maintain its level of
performance under stated conditions for a stated period of time [Error! Reference source
not found.].
• Efficiency: a set of attributes that relate the application's level of performance to the amount of resources used under stated conditions.
• Native vs. Web Mobile: it must be defined whether the application will be designed to be installed on the device itself, known as a native application, or used on the web.
• Interoperability: the application should be able to interact with other specific systems; in other words, it must be interoperable with other services.
• Response time: the applications must start and terminate immediately.
• Privacy: the application must demonstrate to users how their personal information is being collected, used and shared, and let them exercise choice and control over its use.
• Short term activities: activities in mobile applications tend to have a short duration, ranging from several seconds to several minutes.
• Data integrity: in an accidental shutdown of the application or of the device itself, the application must ensure data integrity.
• Key characteristics: mobile applications tend to be more focused; in other words, they have specific key characteristics rather than offering the commonly used exploratory environment.
• Complex integration of real-time tasks: mobile applications should provide integration between applications from different sources (native or web).
• Constant interruption of activities: when using a mobile application, activities are constantly interrupted; receiving a call, losing connection or running low on battery are examples of such interruptions.
• Functional area: data, collaboration and communication services, information services and productivity services such as business and office applications.
• Price: free, less than five euros, or more than five euros.
• Target audience: applications for the final private consumer or business applications.
International Journal of Software Engineering & Applications (IJSEA), Vol.5, No.4, July 2014
• Provider type: businesses, professionals or other service providers.
After this survey, the list was refined and a consolidated mix of characteristics was elicited in order to define which characteristics would be emphasized. Of the 23 types of characteristics most mentioned in the selected articles, a common denominator of 13 characteristics was reached; some had their names redefined, such as “Interactivity”, which became “Input Interface”.
4. CHARACTERISTICS OF MOBILE APPLICATIONS SURVEY
With the conclusion of the systematic review, a survey was carried out among experts in mobile development with the purpose of confirming the previously identified characteristics and verifying their respective influence on mobile development. The survey was distributed to more than 70 locations, among them universities and businesses, through e-mails, study groups and social groups.
In general, of the 117 responses received through the survey, 100% of the experts confirmed the characteristics; on average, 72% indicated greater effort and complexity associated with the characteristics during development, 12% indicated less effort and complexity, and 16% perceived no difference in mobile development, even though they confirmed the presence of the characteristics.
5. PROBLEM ADDRESSED
As noted in Section II, there is no estimation method designed for mobile application projects. Moreover, some of the characteristics elicited in Section III increase the complexity and, consequently, the effort involved in developing mobile applications.
From the analysis that follows, based on the characteristics of applications for mobile devices elicited in Section III, it is clear that they are different from the characteristics of traditional systems and directly influence development. A clear example, absent from information or desktop systems, is the characteristic that mobile devices have “Limited Energy”. As mobile devices are powered by batteries, which have a limited lifetime, applications must be programmed to require the minimum amount of hardware resources possible, since the more resources are consumed, the more energy is expended. This characteristic makes it necessary for the solution design to address this concern, generating a higher complexity of development and, consequently, a greater effort and cost.
Another specific characteristic of this context is the “Graphical Interface”. Due to the reduced screen size, the interface design is limited; therefore, a greater complexity and, consequently, a larger effort is required in the development of the graphical interface. Another characteristic related to the screen is the “Input Interface”, which defines how the user will interact with the application: via keypad, stylus, touch screen, or voice and image recognition. The latter makes the task of developing applications that offer all these interaction options more complex, thus requiring a greater effort.
Regarding connectivity, the characteristic “Bandwidth” was identified: a mobile application might have the maximum bandwidth at times and the minimum at other moments. Some types of applications need to detect this and act differently in each situation. Another related characteristic is the “Connectivity Type”. Mobile applications can be developed to support different types of connectivity, such as 3G, Bluetooth, infrared, Wi-Fi, Wireless, NFC and others. In addition, a single application can support multiple types of connectivity simultaneously. These behaviors directly affect the complexity of the software and therefore require a larger development effort.
The “Change in Context” is another characteristic inherent to mobile applications, which should take into account not only the data entries explicitly provided by users, but also the implicit entries concerning the physical and computational context of the users and the environments that surround them. In addition, the “Constant Interruption of Activities” is a much more common characteristic in this context, as is the need for some applications to work offline and therefore be able to synchronize. Mobile applications should be prepared for different scenarios because activities are constantly interrupted: receiving a call, losing connection and running low on battery are examples of such interruptions, which make the applications much more complex.
Despite the advances in the computational ability of these devices, their hardware must still be considered limited, especially when compared to desktops and servers. Two characteristics related to this issue are “Limited Performance” and “Reduced Memory”. Besides these, a characteristic inherent to the use of mobile devices is the “Response Time”, which is directly related to the “Processing” power. Mobile applications must start and terminate immediately; in other words, development should focus on the time variable. These characteristics require applications to be developed with resource optimization in mind for better efficiency and response time, requiring more effort.
“Portability” is also a required characteristic of these applications. It can be divided into two characteristics: “Hardware Portability” and “Software Portability”. Regarding the first, nowadays there is a large number of different devices with different capabilities and resources, and a mobile application should be able to run on as many devices as possible. This requires an increased effort in development, as well as a greater effort in testing this kind of portability. Regarding “Software Portability”, it is necessary to develop specific applications for each existing platform if the application is native. As a result, more effort is required for replications of the same software product, including the tests.
Finally, mobile applications can be separated into two types: “Native or Web Mobile”. The first has higher performance and easier access to the hardware, while the second has lower performance, since it runs on the web, but makes portability easier to achieve. In addition, some applications are considered hybrids. Depending on the type of application, the issues that must be considered and the complexity can be different, requiring different development efforts.
From the survey of the most popular estimation methods cited in Section III, it was found that these characteristics are not covered by the current estimation methods for two explicit reasons: first, none of the existing methods was designed to estimate mobile application development projects; and second, all the characteristics discussed in this section are exclusive to mobile applications and directly interfere in their development, thereby generating a greater complexity and, consequently, a greater effort. Therefore, applying any of the existing estimation methods to the development of mobile applications is to assume that this kind of development is no different from a desktop application development project; in other words, an imminent risk is assumed.
6. PROPOSAL: Estimation in Mobile Application Development Project
A solution to this problem would be to create a new estimation method, or to adapt an existing one by adding all of the identified characteristics that directly affect mobile application development projects, taking into account whatever is needed to reach minimum efficiency in the estimates.
The approach proposed here is an adaptation of an existing method, based exclusively on methods recognized as international standards by ISO. Among the most popular estimation methods mentioned in Section III, the method on which the proposal below is based is known as the “Finnish Software Metrics Association (FiSMA)” method. It is one of the five software measurement methods that comply with the ISO/IEC 14143-1 standard and is accepted as an international standard for software measurement; over 750 completed software projects have already been estimated with FiSMA. The difference between this method and the others in accordance with the above standard, namely the Common Software Measurement International Consortium Function Points (COSMIC FP), the International Function Point Users Group (IFPUG) FPA, MarkII FPA and the Netherlands Software Metrics Association (NESMA) WSF, is that the method used is based on functionality but is service-oriented. Its definition also proposes that it can be applied to all types of software, but this claim is somewhat inaccurate, since in its application the method does not take into account the characteristics elicited in Section IV.
The COSMIC FP, the MarkII FPA and the NESMA methods were created based on the FPA; in other words, they assume the counting of Function Points (FP), but considering the implemented functionality from the user's point of view. With this, it is clear that the methods mentioned above do not take into account the characteristics of mobile applications, because these are not noticed by the user. The methods are independent of the programming language or technology used and, unlike FiSMA, they do not claim in their literature that they can be applied to all types of software.
Overall, the FiSMA method proposes that all services provided by the application be identified. It predefines some services, among which stand out the user's interactive navigation, consulting services, user input interactive services, interface services for other applications, data storage services, algorithmic services and handling services. Finally, after identifying all the services, the size of each service is calculated using the same method, and the total functional size of the application is obtained by adding the sizes of all the services found.
6.1. Approaching the Chosen Model
The FiSMA method in its original form proposes a structure of seven classes of the Base Functional Component (BFC) type, which is defined as a basic component of a functional requirement. The seven classes used to account for the services during the application of the method are:
• Interactive navigation of the end user and query services (q):
o specify all parts of the interactive user interface where there is no maintenance of persistent data storage(s) of the system. Maintenance refers to any service where data is changed as a result of the service and includes creating, updating or deleting. The amount of size units of navigation and query functions depends on the number of different data items of the BFC measured and the number of needed reading references to entity types.
• Interactive input services from end users (i):
o specify all parts of the interactive user interface where there is maintenance of persistent data storage(s) of the system. Data storages consist of entities (data records). Maintenance refers to any service where data is changed as a result of the service and includes creating, updating and deleting. From a user’s point of view, interactive end-user services are used to perform those business tasks which change the data contents of the system. From the information system point of view, end users manipulate system data using interactive end-user services. The amount of size units of input functions depends on the number of different data items of the BFC measured, the number of needed reading references and the number of needed writing references to entity types.
• Non-interactive outbound services for the end user (o):
o specify all parts of the user interface which are non-interactive and do not maintain persistent data storage(s) of the system. The amount of size units of output functions depends on the number of different data items of the BFC measured and the number of needed reading references to entity types.
• Interface services to other applications (t):
o specify all automatized data transfer functions moving data groups from the measured piece of software to any other application or any device. The amount of size units of outbound interface functions depends on the number of different data items of the BFC measured.
• Interface services from other applications (f):
o specify all automatized data transfer functions receiving data groups provided and sent by any other application or any device. The amount of size units of inbound interface functions depends on the number of different data items of the BFC measured, the number of reading references and the number of writing references to entities.
• Data storage services (d):
o specify a group or collection of related and self-contained data in the real world about which the user requires the software to provide persistent storage. Data storage services are functional services provided by the piece of software to satisfy these data storage requirements. These “groups or collections of related and self-contained data” are often called entity types, data groups, data classes or objects of interest, depending on the terminology used in the development environment. Data storage services store data persistently and make it available for maintenance, inquiry, or output. Data storage services are typically implemented as tables in relational databases, or as records in data files in general. The amount of size units of data storage services depends on the number of different data items, i.e. the number of attributes related together in the self-contained group or collection.
• Algorithmic manipulation services (a):
o Algorithms are user-defined, independent data manipulation routines. Independence means here that the functionality of the routine is not included in the normal functionality of any function of any other BFC type. Algorithmic manipulation may consist of arithmetic and/or logical operations. The amount of size units of algorithmic and manipulation services depends on the number of different operations performed and the number of different variables needed.
The letter in parentheses after each BFC class name above is used to facilitate the application of the method during the counting process: each of the seven BFC classes is composed of other BFC classes and, at counting time, these “daughter” BFCs are identified by the letter of their “mother” BFC class followed by a numeral, as can be seen in Figure 2.
Figure 2. Types of BFC classes of the base model
The unit of measurement is the function point, with the letter “F” added to its nomenclature to identify FiSMA, resulting in FfP (FiSMA Function Point) or Ffsu (FiSMA functional size unit). The measurement process generally consists of measuring the end-user interface services and the services considered indirect, as can be seen in Figure 3.
Figure 3. Representation of the measurement process of the base model
Figure 3 shows the measurement process of the base model, which defines each step and the sum for each BFC class. Briefly, the counting process should be done as follows. Identify: 1) How many types of BFCs does the software have? 2) Which are they? (identify all of them) 3) What are they? (provide details of each BFC identified)
After doing this, each BFC root is summed using the formulas predefined by the method and their assignments. Finally, the final result is the general sum of all the BFC classes.
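To illustrate the final summation step, the sketch below assumes each identified service has already been sized; the per-class FiSMA size rules (based on data items and read/write references) are not reproduced, and the service names and sizes are hypothetical.

```python
# Illustrative sketch of the FiSMA counting process described above:
# group the sizes of identified services by BFC class letter
# (q, i, o, t, f, d, a) and sum everything for the total functional size.
# The individual service sizes here are assumed inputs, not FiSMA rules.
from collections import defaultdict

# Hypothetical services identified in steps 1-3, with assumed sizes in FfP.
services = [
    ("q", "browse course list", 3.2),
    ("i", "enroll in course", 5.1),
    ("d", "student entity storage", 2.4),
    ("a", "grade average calculation", 1.8),
]

def total_functional_size(services):
    """Sum service sizes per BFC class, then sum the class totals."""
    per_class = defaultdict(float)
    for bfc_class, _name, size in services:
        per_class[bfc_class] += size
    return per_class, sum(per_class.values())

per_class, total = total_functional_size(services)
print(dict(per_class))  # size subtotal per BFC class
print(total)            # total functional size in FfP
```

The grouping step mirrors the “daughter”/“mother” identification: each service is attributed to its root BFC class before the general sum is taken.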
6.2. Applying the Chosen Model
The FiSMA method can be applied manually or with the aid of the Experience Service1 tool, which was the case here; the tool was provided by FiSMA itself through contact made with senior consultant Pekka Forselius and with the chairman of the board, Hannu Lappalainen.
When using the tool, it is still necessary to perform all the steps of the previous subsection to obtain the functional size. Figure 4 shows the final report after applying FiSMA to a real system, the Management of Academic Activities Integrated System (Sigaa) in its mobile version, developed by the Superintendence of Computing (SINFO) of the Federal University of Rio Grande do Norte (UFRN).
Figure 4. Final Report of FiSMA applied to Sigaa Mobile
After the application of FiSMA, the functional size of the software is obtained, from which it is possible to estimate the effort using the formula: Estimated effort (h) = size (fp) x reuse x rate of delivery (h/fp) x project situation; the latter is related to the productivity factors taken into account in the calculation of the effort. Of the factors predefined by FiSMA regarding the product, only 6 (six) are proposed, in which the basic idea of the evaluation is that “the better the circumstances of the project, the more positive the assessment”. The weighting goes from - - to + +, as follows:

1 http://www.experiencesaas.com/
Caption:
• “+ +” = [1.10] Excellent situation, much better circumstances than in the average case
• “+” = [1.05] Good situation, better circumstances than in the average case
• “+ / -” = [1.0] Normal situation
• “-” = [0.95] Bad situation, worse circumstances than in the average case
• “- -” = [0.90] Very bad situation, much worse circumstances than in the average case
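Under the assumption that the factor ratings combine multiplicatively into the “project situation” variable (the exact combination rule is not stated in the text), the effort formula can be sketched as follows; the size, reuse and delivery-rate values are purely illustrative.

```python
# Sketch of the FiSMA effort formula with assumed input values.
# Rating-to-multiplier mapping taken from the caption above.
WEIGHTS = {"++": 1.10, "+": 1.05, "+/-": 1.0, "-": 0.95, "--": 0.90}

def project_situation(ratings):
    """Combine productivity-factor ratings into one multiplier
    (multiplication is an assumption, not a FiSMA-specified rule)."""
    result = 1.0
    for r in ratings:
        result *= WEIGHTS[r]
    return result

# Assumed values for illustration only.
size = 120.0          # functional size in FfP
reuse = 1.0           # no reuse adjustment
delivery_rate = 2.5   # hours per function point

# One rating per product-related productivity factor (six factors).
ratings = ["+", "+/-", "-", "+/-", "+", "+/-"]

effort = size * reuse * delivery_rate * project_situation(ratings)
print(round(effort, 1))  # estimated effort in hours
```

With all six factors rated “+ / -” the multiplier is 1.0 and the effort reduces to size x reuse x delivery rate.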
Productivity factors:
Functionality requirements: compatibility with the needs of the end user, the complexity of the
requirements.
• (- -) Complex and critical application area (thousands of FPs), multiple users and
multicultural system.
• ( - ) Interoperable application area with some complex characteristics, requiring special
understanding from users and developers.
• (+ / -) Partly automated, integrated application area and a medium size application (between
600 and 1000 FPs) with standard security requirements.
• ( + ) Application area mostly automated and application with less than 5 interfaces with other
systems; there are specific security requirements.
• (+ +) Very mature application area, simple and easy, a small stand-alone application (less
than 200 FPs) for a small group of users.
Reliability requirements: maturity, tolerance to faults and recovery for different types of use
cases.
• (- -) Malfunctions may put human lives in danger and cause significant economic or environmental losses.
• ( - ) The software is part of a large real-time system where any failure of operation will cause problems to many other applications.
• (+ / -) Not more than 2 hours of downtime is acceptable, but the system recovery routines are
appropriate.
• ( + ) Need for non-continuous operation, but daily.
• (+ +) Need for periodic operation. Pausing for a few days will not cause any damage to the
organization.
Usability requirements: understandability and easiness to learn the user interface and workflow
logic.
• (- -) A large number of different types of end users around the world.
• ( - ) 2 or 3 different types of users with different skills.
• (+ / -) A large number of end users with equal abilities.
• ( + ) No more than tens or hundreds of homogeneous users in perhaps more than one location.
• (+ +) Only a few users, all located on one site.
Efficiency requirements: effective use of resources and adequate performance in each use case
and under a reasonable workload.
• (- -) Complex database with millions of data records and transactions per day, thousands of
simultaneous end users.
• ( - ) Large database, hundreds of simultaneous end users, critical response most of the time.
• (+ / -) Large database, less than millions of data records and less than hundreds of
simultaneous end users.
• ( + ) Medium database in volume and structure, simple and predictable data requests from
some simultaneous end users.
• (+ +) Simple and small database without simultaneous end users or complex data requests.
Maintainability requirements: lifetime of the application, criticality of fault diagnosis and test
performance.
• (- -) Very large strategic software (over 20 years of lifetime) in a volatile area of business,
with frequent changes in laws, regulations and business rules.
• ( - ) Large software (10-20 years of lifetime), and frequent changes in laws, regulations and
business rules.
• (+ / -) Medium size software (5-10 years of lifetime), monthly changes in laws, regulations
and business rules.
• ( + ) Small software, rarely changes (2 to 5 years of lifetime).
• (+ +) Temporary software (less than 2 years of lifetime), without modifications.
Portability requirements: adaptability and installability in different environments, architectures and structural components.
• (- -) Software users are located in many types of organizations, with various platforms
(hardware, browsers, operating systems, middleware, protocols, etc), various versions and
various update frequencies.
• ( - ) The software must operate on some different platforms (hardware, browsers, operating
systems, middleware, protocols, etc) and in various versions of each of them.
• (+ / -) Each version of the software must run on multiple versions of a given platform
(hardware, browser, operating system, middleware, protocols, etc), and the frequencies of
update of the users are quite predictable.
• ( + ) The software must run on a given platform (hardware, browser, operating system,
middleware, protocols, etc), but the use of system-level services is limited because the
upgrade process is partial.
• (+ +) Software must be run on a particular platform (hardware, browser, operating system,
middleware, protocols, etc), but the upgrade process is completely controllable.
Among the productivity factors mentioned above, only the “Portability Requirements” factor harmonizes with the “Portability” characteristic regarding both hardware and software. None of the other factors addresses the characteristics of mobile applications. In other words, after obtaining the functional size of the software and applying the product-related productivity factors to estimate the effort, the estimate ignores all of the characteristics of mobile applications, treating the estimation of a mobile application as if it were that of a traditional information system. The proposal of new productivity factors, corresponding to the specific characteristics of mobile applications, solves this problem, as presented below.
Performance Factor:
• ( - ) The application should be concerned with the optimization of resources for a better
efficiency and response time.
• (+ / -) Resource optimization for better efficiency and response time may or may not exist.
• ( + ) Resource optimization for better efficiency and response time should not be taken into
consideration.
Power Factor:
• ( - ) The application should be concerned with the optimization of resources for a lower
battery consumption.
• (+ / -) Resource optimization for lower battery consumption may or may not exist.
• ( + ) Resource optimization for a lower battery consumption should not be taken into
consideration.
Band Factor:
• ( - ) The application shall require the maximum bandwidth.
• (+ / -) The application shall require reasonable bandwidth.
• ( + ) The application shall require a minimum bandwidth.
Connectivity Factor:
• ( - ) The application must have the maximum predisposition to use connections such as 3G, Wi-Fi, Wireless, Bluetooth, Infrared and others.
• (+ / -) The application must have a reasonable predisposition to use connections such as 3G, Wi-Fi and Wireless.
• ( + ) The application must have a predisposition to use only one type of connection, which can be 3G, Wi-Fi, Wireless, Bluetooth, Infrared or others.
Context Factor:
• ( - ) The application should work offline and synchronize.
• (+ / -) The application should work offline and it is not necessary to synchronize.
• ( + ) The application should not work offline.
Graphic Interface Factor:
• ( - ) The application has limitations due to the screen size because it will be mainly used by
cell phone users.
• (+ / -) The application has reasonable limitation due to the screen size because it will be used
both by cell phone and tablet users.
• ( + ) The application has little limitation due to the screen size because it will be mainly used
by tablet users.
Input Interface Factor:
• ( - ) The application must have input interfaces for touch screen, voice, video, keyboard and
others.
• (+ / -) The application must have standard input interfaces for keyboard.
• ( + ) The application must have any one of the types of interfaces, such as: touch screen,
voice, video, keyboard or others.
The proposed factors take into account the same weighting proposed by FiSMA, but ranging only from - to +, in other words:
• “+” = [1.05] Good situation, better circumstances than in the average case
• “+ / -” = [1.0] Normal situation
• “-” = [0.95] Bad situation, worse circumstances than in the average case
The functional size remains the same; only the formula used to obtain the effort is affected, as its “project situation” variable will now also consider the new productivity factors specific to mobile applications.
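The adapted formula can be sketched as follows, again under the assumption that factor multipliers combine by multiplication; the input values, factor ratings and example scenario are hypothetical.

```python
# Sketch of the adapted effort formula: the "project situation" multiplier
# now also includes the seven proposed mobile factors (performance, power,
# band, connectivity, context, graphic interface, input interface), each
# rated -, +/- or + per the reduced weighting above. Combining multipliers
# by multiplication is an assumption; the paper does not state the rule.
BASE_WEIGHTS = {"++": 1.10, "+": 1.05, "+/-": 1.0, "-": 0.95, "--": 0.90}
MOBILE_WEIGHTS = {"+": 1.05, "+/-": 1.0, "-": 0.95}

def estimated_effort(size_fp, reuse, delivery_rate, base_ratings, mobile_ratings):
    """Effort (h) = size x reuse x delivery rate x project situation,
    where the situation combines base and mobile productivity factors."""
    situation = 1.0
    for r in base_ratings:
        situation *= BASE_WEIGHTS[r]
    for r in mobile_ratings:
        situation *= MOBILE_WEIGHTS[r]
    return size_fp * reuse * delivery_rate * situation

# Assumed example: a battery- and bandwidth-sensitive, offline-capable app.
effort = estimated_effort(
    size_fp=120.0, reuse=1.0, delivery_rate=2.5,
    base_ratings=["+/-"] * 6,
    mobile_ratings=["-", "-", "-", "+/-", "-", "+/-", "+/-"],
)
print(round(effort, 1))  # estimated effort in hours
```

Note that with all thirteen factors rated “+ / -” the adapted formula coincides with the base FiSMA estimate, so the extension only changes results when a mobile characteristic is actually relevant to the project.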
7. CONCLUSION
Given the results presented, based on the literature review of estimation methods and on the systematic review of the characteristics of mobile applications, it was observed that this sub-area of software engineering still falls short. In short, it is risky to use any existing estimation method in development projects for mobile applications, even though some models are already widespread in industry, such as Function Point Analysis, Mark II and COSMIC-FFP, which are even approved by ISO as international standards. They all fall short by not taking into account the particularities of mobile applications, which makes them partially ineffective in this situation.
With the constant emergence of new kinds of systems, experts always find a barrier when using one of the current methods of software measurement. This barrier can concern the effectiveness of the method, or which type of method should be used, when it comes to software that is considered unconventional and, especially, when it must be applied in completely atypical scenarios. This situation is aggravated further when it comes to mobile applications.
Based on this study, it is concluded that the proposal presented in this work is appropriate and viable, and that it takes into account all the peculiarities of such applications, reinforcing the finding that there are indeed considerable differences in development projects for mobile applications.
[25] Common Software Measurement International Consortium (COSMIC), “The COSMIC functional size
measurement method - version 3.0: Measurement manual (the COSMIC implementation guide for ISO/IEC 19761:2003),” 2007.
[26] R. Meli, A. Abran, V. T. Ho, and S. Oligny, “On the applicability of COSMIC-FFP for measuring
software throughout its life cycle,” in Proceedings of the 11th European Software Control and Metrics
Conference, 2000, pp. 18–20.
[27] J. Kammelar, “A sizing approach for OO-environments,” in Proceedings of the 4th International
ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, 2000.
[28] S. Abrahão, G. Poels, and O. Pastor, “A functional size measurement method for object-oriented
conceptual schemas: design and evaluation issues,” Software & Systems Modeling, vol. 5, no. 1, pp.
48–71, 2006.
[29] P. Forselius, “Finnish Software Measurement Association (FiSMA), FSM Working Group: FiSMA
functional size measurement method v. 1.1,” 2004.
[30] B. Kitchenham, “Procedures for performing systematic reviews,” Keele, UK, Keele University, vol.
33, p. 2004, 2004.
[31] J.-H. Sohn, J.-H. Woo, M.-W. Lee, H.-J. Kim, R. Woo, and H.-J. Yoo, “A 50 mvertices/s graphics
processor with fixed-point programmable vertex shader for mobile applications,” in Solid-State
Circuits Conference, 2005. Digest of Technical Papers. ISSCC. 2005 IEEE International, 2005, pp.
192–592 Vol. 1.
[32] H. Mukhtar, D. Belaïd, and G. Bernard, “A model for resource specification in mobile services,” in
Proceedings of the 3rd international workshop on Services integration in pervasive environments, ser.
SIPE ’08. New York, NY, USA: ACM, 2008, pp. 37–42. [Online]. Available:
http://doi.acm.org/10.1145/1387309.1387318
[33] H. Feng, “A literature analysis on the adoption of mobile commerce,” in Grey Systems and Intelligent
Services, 2009. GSIS 2009. IEEE International Conference on, 2009, pp. 1353–1358.
[34] A. Kumar Maji, K. Hao, S. Sultana, and S. Bagchi, “Characterizing failures in mobile OSes: A case
study with Android and Symbian,” in Software Reliability Engineering (ISSRE), 2010 IEEE 21st
International Symposium on, 2010, pp. 249–258.
[35] J. Al-Jaroodi, A. Al-Dhaheri, F. Al-Abdouli, and N. Mohamed, “A survey of security middleware for
pervasive and ubiquitous systems,” in Network-Based Information Systems, 2009. NBIS ’09.
International Conference on, 2009, pp. 188–193.
[36] K. Hameed et al., “Mobile applications and systems,” 2010.
[37] A. Mandadi, D. Mudegowder et al., “Mobile applications: Characteristics & group project
summary,” Mobile Application Development, 2009.
[38] M. Hayenga, C. Sudanthi, M. Ghosh, P. Ramrakhyani, and N. Paver, “Accurate system-level
performance modeling and workload characterization for mobile internet devices,” in Proceedings of
the 9th workshop on MEmory performance: DEaling with Applications, systems and architecture, ser.
MEDEA ’08. New York, NY, USA: ACM, 2008, pp. 54–60. [Online]. Available:
http://doi.acm.org/10.1145/1509084.1509092
[39] A. Giessmann, K. Stanoevska-Slabeva, and B. de Visser, “Mobile enterprise applications–current
state and future directions,” in System Science (HICSS), 2012 45th Hawaii International Conference
on, 2012, pp. 1363–1372.
[40] C. Gencel, R. Heldal, and K. Lind, “On the conversion between the sizes of software products in the
life cycle.”
[41] Finnish Software Measurement Association (FiSMA). (2004) FiSMA functional size measurement
method version 1.1. [Online]. Available: http://www.fisma.fi/in-english/methods/
Authors
Laudson Silva de Souza is a master's student in the Department of Informatics and Applied
Mathematics, Federal University of Rio Grande do Norte, Brazil.
Gibeon Soares de Aquino Jr. holds a PhD and is a professor in the Department of Informatics and
Applied Mathematics, Federal University of Rio Grande do Norte, Brazil.