This document summarizes a literature review on software architectures that support human-computer interaction (HCI) analysis. It outlines the introduction, research questions, methodology, results, analysis, and conclusions. The research questions examine the current state of software architectures for HCI analysis, trends in supporting this through software engineering, and applications to eLearning. The methodology describes searches of databases to identify relevant papers. Sixteen key papers were found covering a range of HCI contexts and aspects of software design. While some papers addressed communication between architecture components and HCI measurement, standards were often lacking. Limited work was found specifically applying such architectures to eLearning through analyzing student interaction with online content.
Search Interface Feature Evaluation in Biosciences (Zanda Mark)
Read more here: http://pingar.com/
This paper reports findings on desirable interface features for different
search tasks in the biomedical domain. We conducted a user study where
we asked bioscientists to evaluate the usefulness of autocomplete, query
expansions, faceted refinement, related searches and results preview
implementations in new pilot interfaces and publicly available systems
while using baseline and their own queries. Our evaluation reveals that
there is a preference for certain features depending on the search task.
In addition, we touch on the current pain point of faceted search: the
acquisition of faceted subject metadata for unstructured documents.
We found a strong preference for prototypes displaying just a few facets
generated based on either the query or the matching documents.
This document summarizes a research paper on developing a feature-based product recommendation system. It begins by introducing recommender systems and their importance for e-commerce. It then describes how the proposed system takes basic product descriptions as input, recognizes features using association rule mining and k-nearest neighbor algorithms, and outputs recommended additional features to improve the product profile. The paper evaluates the system's performance on recommending antivirus software features.
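The core recipe the summary describes can be pictured in a few lines of code. The sketch below is a guess at the general mechanism, not the paper's implementation: it mines simple co-occurrence (association-rule-style) statistics from hypothetical feature profiles and recommends additional features for a new product.

# Illustrative sketch (not the paper's implementation): recommend extra
# product features from co-occurrence rules mined over known feature sets.
from collections import Counter
from itertools import combinations

catalog = [  # hypothetical feature profiles of existing products
    {"firewall", "realtime-scan", "vpn"},
    {"firewall", "realtime-scan", "parental-control"},
    {"realtime-scan", "vpn"},
]

pair_counts = Counter()
item_counts = Counter()
for features in catalog:
    item_counts.update(features)
    pair_counts.update(combinations(sorted(features), 2))

def recommend(profile, min_confidence=0.5):
    """Suggest features y with confidence(x -> y) above the threshold."""
    scores = {}
    for x in profile:
        for (a, b), n in pair_counts.items():
            y = b if a == x else a if b == x else None
            if y and y not in profile:
                conf = n / item_counts[x]  # confidence of rule x -> y
                scores[y] = max(scores.get(y, 0.0), conf)
    return [f for f, c in sorted(scores.items(), key=lambda kv: -kv[1])
            if c >= min_confidence]

print(recommend({"firewall"}))  # -> ['realtime-scan', 'vpn', 'parental-control']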
An effective search on web log from most popular downloaded content (ijdpsjournal)
A Web page recommender system effectively predicts the best related web page to search. While searching for a word from a search engine, it may display unnecessary links and unrelated data to the user; to avoid this problem, the conceptual prediction model combines both web usage and domain knowledge. The proposed conceptual prediction model automatically generates a semantic network of semantic Web usage knowledge, which is the integration of domain knowledge and web usage information. Web usage mining aims to discover interesting and frequent user access patterns from web browsing data. The discovered knowledge can then be used for many practical web applications such as web recommendations, adaptive web sites, and personalized web search and surfing.
A DATA EXTRACTION ALGORITHM FROM OPEN SOURCE SOFTWARE PROJECT REPOSITORIES FO... (ijseajournal)
This document presents an algorithm to extract data from open source software project repositories on GitHub for building duration estimation models. The algorithm extracts data on contributors, commits, lines of code added and removed, and active days for each contributor within a release period. This data is then used to build linear regression models to estimate project duration based on the number of commits by contributor type (full-time, part-time, occasional). The algorithm is tested on data extracted from 21 releases of the WordPress project hosted on GitHub.
Software project estimation is important for allocating resources and planning a reasonable work
schedule. Estimation models are typically built using data from completed projects. While organizations
have their historical data repositories, it is difficult to obtain their collaboration due to privacy and
competitive concerns. To overcome the issue of public access to private data repositories this study
proposes an algorithm to extract sufficient data from the GitHub repository for building duration
estimation models. More specifically, this study extracts and analyses historical data on WordPress
projects to estimate OSS project duration using commits as an independent variable as well as an
improved classification of contributors based on the number of active days for each contributor within a
release period. The results indicate that duration estimation models using data from OSS repositories perform well and partially solve the problem of lack of data encountered in empirical research in software engineering.
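The modelling step the abstract describes, regressing release duration on commit counts per contributor class, can be sketched in a few lines. The numbers below are invented for illustration; the study's actual releases and variables differ.

# Minimal sketch with made-up data: regress release duration (days) on
# commit counts by contributor type, as the study's models do conceptually.
import numpy as np

# rows: [full_time_commits, part_time_commits, occasional_commits]
X = np.array([[120, 40, 10],
              [200, 55, 25],
              [90, 30, 5],
              [160, 60, 20]], dtype=float)
y = np.array([45, 80, 35, 62], dtype=float)  # observed durations per release

A = np.hstack([X, np.ones((len(X), 1))])      # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares fit

new_release = np.array([150, 50, 15, 1.0])    # feature vector plus intercept
print("estimated duration (days):", new_release @ coef)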
Query Recommendation by using Collaborative Filtering Approach (IRJET Journal)
This document proposes a system called QDMiner to mine query facets from the top search results for a query. It uses collaborative filtering techniques to recommend the top-k results that are most relevant to a user's interests.
QDMiner first retrieves the top search results from a search engine. It then mines frequent lists from the HTML tags and free text within the results to identify query facets. It groups common lists and ranks the facets and items based on their appearances. QDMiner represents the search results in two models: the Unique Website Model and Context Similarity Model, to order the query facets.
To recommend results, QDMiner uses collaborative filtering techniques, including item-based and user-based approaches.
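As a rough illustration of the item-based variant mentioned above, the following sketch computes item-item cosine similarities over a tiny invented rating matrix and predicts a score for an unrated item; QDMiner's actual pipeline is considerably more involved.

# Illustrative item-based collaborative filtering over a toy rating matrix
# (rows = users, columns = items); predicts a user's score for unseen items.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)  # 0 means "not rated"

norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)  # item-item cosine similarity

def predict(user, item):
    rated = R[user] > 0                 # items this user has rated
    weights = sim[item, rated]          # similarity to the target item
    return float(weights @ R[user, rated] / (weights.sum() + 1e-9))

print(predict(user=1, item=2))  # predicted rating for an unrated item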
LEAN THINKING IN SOFTWARE ENGINEERING: A SYSTEMATIC REVIEW (ijseajournal)
The field of Software Engineering has undergone considerable transformation in the last decades due to the influence of the philosophy of Lean Thinking. The purpose of this systematic review is to identify practices and approaches proposed in this area in the last 5 years by researchers who have worked under the influence of this thinking. The search strategy brought together 549 studies, 80 of which were classified as relevant for synthesis in this review. Seventeen tools of Lean Thinking adapted to Software Engineering were catalogued, as well as 35 practices created for the development of software influenced by this philosophy. The study provides a roadmap of results with the current state of the art and the identification of gaps pointing to opportunities for further research.
Management of time uncertainty in agile (ijseajournal)
The rise of mobile technologies such as smartphones and tablets connected to mobile networks is changing old habits and creating new ways for society to access information and interact with computer systems. Thus, traditional information systems are undergoing a process of adaptation to this new computing context. However, it is important to note that the characteristics of this new context are different: there are new features and, consequently, new possibilities, as well as restrictions that did not exist before. Systems developed for this environment therefore have different requirements and characteristics than traditional information systems. For this reason, there is a need to reassess the current knowledge about the processes of planning and building systems in this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes, in general, are based on characteristics of the systems, trying to quantify the complexity of implementing them. Hence, the main objective of this paper is to present an effort estimation model for mobile applications and to discuss the applicability of traditional estimation models in the context of mobile computing.
This document summarizes a technical report on an empirical study of design pattern evolution in 39 open source Java projects. The study analyzed a total of 428 software releases from the projects to identify 10 common design patterns. A total of 27,855 instances of the patterns were found. The data collected includes the number of each pattern identified in each project release. This dataset will be further analyzed to understand how design patterns evolve over the lifetime of a software system.
Vertical intent prediction approach based on Doc2vec and convolutional neural... (IJECEIAES)
Vertical selection is the task of selecting the most relevant verticals for a given query in order to improve the diversity and quality of web search results. This task requires not only predicting relevant verticals but also ensuring that those verticals are the ones the user expects to be relevant for their particular information need. Most existing works focused on using traditional machine learning techniques to combine multiple types of features for selecting several relevant verticals. Although these techniques are very efficient, handling vertical selection with high accuracy is still a challenging research task. In this paper, we propose an approach for improving vertical selection in order to satisfy the user's vertical intent and reduce browsing time and effort. First, it generates query embedding vectors using the doc2vec algorithm, which preserves syntactic and semantic information within each query. Second, each vector is used as input to a convolutional neural network model that enriches the representation of the query with multiple levels of abstraction, including rich semantic information, and creates a global summarization of the query features. We demonstrate the effectiveness of our approach through comprehensive experimentation using various datasets. Our experimental findings show that our system achieves significant accuracy and makes accurate predictions on new, unseen data.
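A minimal sketch of the first stage of this pipeline is shown below, using gensim's Doc2Vec to embed queries; a logistic regression stands in for the paper's CNN classifier, and the queries and vertical labels are invented, so the prediction is illustrative only.

# Sketch of the pipeline's idea: embed queries with gensim's Doc2Vec, then
# classify into verticals (logistic regression replaces the paper's CNN).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

queries = ["cheap flights to paris", "python list comprehension",
           "latest nba scores", "train tickets berlin"]
verticals = ["travel", "code", "sports", "travel"]  # invented labels

corpus = [TaggedDocument(q.split(), [i]) for i, q in enumerate(queries)]
d2v = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)

X = [d2v.infer_vector(q.split()) for q in queries]
clf = LogisticRegression(max_iter=1000).fit(X, verticals)

# Toy corpus, so the output is only illustrative of the interface.
print(clf.predict([d2v.infer_vector("hotel deals in rome".split())]))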
A Survey on Design Pattern Detection Approaches (CSCJournals)
Design patterns play a key role in the software development process. The interest in extracting design pattern instances from object-oriented software has increased tremendously in the last two decades. Design patterns enhance program understanding, help to document systems, and capture design trade-offs.
This paper provides the current state of the art in design patterns detection. The selected approaches cover the whole spectrum of the research in design patterns detection. We noticed diverse accuracy values extracted by different detection approaches. The lessons learned are listed at the end of this paper, which can be used for future research directions and guidelines in the area of design patterns detection.
The document proposes a Requirement Opinions Mining Method (ROM) to mine user requirements from software review data. It first defines requirement opinions, functional requirement opinions, and non-functional requirement opinions. It then uses deep learning models to classify reviews into functional and non-functional categories. Functional reviews are further classified into three categories and sequence labeling is used to identify functional requirements. Non-functional reviews are clustered using K-means clustering with word vectors. Finally, specific requirements are extracted from the clusters using TF-IDF and syntactic analysis to realize requirement opinion mining from software review data. A case study is conducted on reviews from a Chinese mobile application platform.
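The clustering-and-extraction stage for non-functional reviews can be approximated with scikit-learn as below; note this simplification clusters TF-IDF vectors rather than the word vectors the method actually uses, and the reviews are invented.

# Simplified sketch of the clustering/extraction stage: K-means over TF-IDF
# vectors of reviews, then the top-weighted terms per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = ["app crashes on startup every time",
           "login screen is slow to load",
           "would love a dark mode option",
           "dark theme please, my eyes hurt",
           "startup crash after the update"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])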
The document describes an automated process for bug triage that uses text classification and data reduction techniques. It proposes using Naive Bayes classifiers to predict the appropriate developers to assign bugs to by applying stopword removal, stemming, keyword selection, and instance selection on bug reports. This reduces the data size and improves quality. It predicts developers based on their history and profiles while tracking bug status. The goal is to more efficiently handle software bugs compared to traditional manual triage processes.
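The core classification step reduces to a standard text pipeline. The sketch below uses scikit-learn's CountVectorizer and MultinomialNB with invented bug reports and developer labels; the stemming, keyword-selection, and instance-selection steps described above are omitted.

# Hedged sketch of the triage step: a Naive Bayes text classifier mapping
# bug-report text to a likely assignee.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = ["null pointer exception in parser module",
           "UI freezes when resizing the main window",
           "parser fails on unicode input",
           "window layout broken on high-dpi screens"]
assignees = ["alice", "bob", "alice", "bob"]  # invented developer labels

triage = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
triage.fit(reports, assignees)

# 'parser' appears only in alice's reports, so this predicts ['alice'].
print(triage.predict(["crash inside parser on malformed file"]))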
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks (Editor IJCATR)
This document discusses using neural networks for software defect prediction. It examines the effectiveness of using a radial basis function neural network and a probabilistic neural network on prediction accuracy and defect prediction compared to other techniques. The key findings are that neural networks provide an acceptable level of accuracy for defect prediction but perform poorly at actual defect prediction. Probabilistic neural networks performed consistently better than other techniques across different datasets in terms of prediction accuracy and defect prediction ability. The document recommends using an ensemble of different software defect prediction models rather than relying on a single technique.
COST-SENSITIVE TOPICAL DATA ACQUISITION FROM THE WEB (IJDKP)
The cost of acquiring training data instances for induction of data mining models is one of the main concerns in real-world problems. The web is a comprehensive source for many types of data which can be used for data mining tasks, but the distributed and dynamic nature of the web dictates the use of solutions which can handle these characteristics. In this paper, we introduce an automatic method for topical data acquisition from the web. We propose a new type of topical crawler that uses a hybrid link context extraction method for topical crawling to acquire on-topic web pages with minimum bandwidth usage and at the lowest cost. The new link context extraction method, called Block Text Window (BTW), combines a text window method with a block-based method and overcomes the challenges of each of these methods using the advantages of the other. Experimental results show the predominance of BTW in comparison with state-of-the-art automatic topical web data acquisition methods based on standard metrics.
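One ingredient of such a crawler, scoring a link's surrounding text against the topic profile, can be illustrated as follows. This is not BTW itself (which combines a text window with block analysis); it only shows the generic link-context scoring idea on made-up text.

# Toy illustration of link-context scoring: cosine similarity between a
# link's surrounding text window and the topic's keyword profile.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

topic = Counter("machine learning data mining models".split())
window = Counter("tutorial on data mining and learning models".split())
off_topic = Counter("celebrity gossip and movie reviews".split())

# Higher score = crawl that link first.
print(cosine(topic, window), cosine(topic, off_topic))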
Tourism Based Hybrid Recommendation System (IRJET Journal)
This paper proposes a hybrid tourism recommendation system that combines collaborative filtering, content-based filtering, and aspect-based sentiment analysis to improve accuracy and address cold start problems. The system analyzes user ratings and reviews to predict ratings for other tourism packages. It stores ratings, reviews, and sentiment information in a database to enhance recommendations. Results showed the hybrid approach increased efficiency over conventional methods. Future work could include testing on additional datasets and expanding the system.
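A weighted blend of the three signals is one plausible way to realize such a hybrid; the sketch below uses invented weights, scores, and package names purely for illustration.

# Sketch of a weighted hybrid score (all inputs and weights invented): blend
# collaborative, content-based, and review-sentiment signals per package.
def hybrid_score(cf_score, content_score, sentiment, w=(0.5, 0.3, 0.2)):
    """All inputs normalized to [0, 1]; sentiment comes from review analysis."""
    return w[0] * cf_score + w[1] * content_score + w[2] * sentiment

packages = {"beach tour": (0.8, 0.6, 0.9), "city walk": (0.4, 0.9, 0.5)}
ranked = sorted(packages, key=lambda p: hybrid_score(*packages[p]), reverse=True)
print(ranked)  # -> ['beach tour', 'city walk']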
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are seen to be very robust, characterized by fast computation, and capable of handling distorted data; given the non-linearity of the data, they are an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment. The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using the hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with the OHLANFIS technique performs well compared to the normal ANFIS model.
This document compares two solutions for filtering hierarchical data sets: Solution A uses MySQL and Python, while Solution B uses MongoDB and C++. Both solutions were tested on a 2011 MeSH data set using various filtering methods and thresholds. Solution A generally had faster execution times at lower thresholds, while Solution B scaled better to higher thresholds. However, the document concludes that neither solution is clearly superior, and further study is needed to evaluate their performance for real-world human users.
The World Wide Web is a huge repository of information, and the volume of information increases tremendously every day. The number of users is also increasing day by day. A lot of research has taken place to reduce users' browsing time. Web usage mining is a type of web mining in which mining techniques are applied to log data to extract the behaviour of users. Clustering plays an important role in a broad range of applications like web analysis, CRM, marketing, medical diagnostics, computational biology, and many others. Clustering is the grouping of similar instances or objects; the key factor for clustering is some sort of measure that can determine whether two objects are similar or dissimilar. In this paper a novel clustering method to partition user sessions into accurate clusters is discussed. The accuracy and various performance measures of the proposed algorithm show that the proposed method is a better method for web log mining.
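As a toy stand-in for the proposed method, the sketch below groups invented page-visit sessions with a simple Jaccard-similarity threshold; the paper's actual similarity measure and clustering procedure differ.

# Group user sessions whose visited page sets exceed a Jaccard threshold.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

sessions = [{"/home", "/shop", "/cart"},
            {"/home", "/shop", "/checkout"},
            {"/blog", "/about"},
            {"/blog", "/about", "/contact"}]

clusters = []
for s in sessions:
    for c in clusters:
        if jaccard(s, c[0]) >= 0.4:  # compare to the cluster's first session
            c.append(s)
            break
    else:
        clusters.append([s])

print(len(clusters), "clusters")  # -> 2 clusters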
An Extensible Web Mining Framework for Real Knowledge (IJEACS)
With the emergence of Web 2.0 applications that bestow rich user experience and convenience without time and geographical restrictions, web usage logs became a goldmine to researchers across the globe. User behavior analysis in different domains based on web logs has its utility for enterprises to have strategic decision making. Business growth of enterprises depends on customer-centric approaches that need the knowledge of customer behavior to succeed. The rationale behind this is that customers have alternatives and there is intense competition. Therefore the business community needs business intelligence to make expert decisions besides focusing on customer relationship management. Many researchers have contributed towards this end. However, a comprehensive framework that caters to the needs of businesses by ascertaining the real needs of web users is still required. This paper presents a framework named eXtensible Web Usage Mining Framework (XWUMF) for discovering actionable knowledge from web log data. The framework employs a hybrid approach that exploits fuzzy clustering methods and methods for user behavior analysis. Moreover, the framework is extensible as it can accommodate new algorithms for fuzzy clustering and user behavior analysis. We propose an algorithm known as Sequential Web Usage Miner (SWUM) for efficient mining of web usage patterns from different data sets. We built a prototype application to validate our framework. Our empirical results revealed that the framework helps in discovering actionable knowledge.
Automatic detection of online abuse and analysis of problematic users in wiki... (Melissa Moody)
For their 2019 capstone project, DSI Master of Science in Data Science students Charu Rawat, Arnab Sarkar, and Sameer Singh proposed a framework to understand and detect abusive behavior in the English Wikipedia community.
Rawat, Sarkar, and Singh received the award for Best Paper in the Data Science for Society category at the 2019 Systems & Information Design Symposium (SIEDS). In "Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia," the team presented an analysis of user misconduct in Wikipedia and a system for the automated early detection of inappropriate behavior.
IRJET-Computational model for the processing of documents and support to the ... (IRJET Journal)
This document proposes a computational model for processing documents and supporting decision making in information retrieval systems. The model includes five main components: 1) a tracking and indexing component to crawl the web and store document metadata, 2) an information processing component to categorize documents and define user profiles, 3) a decision support component to analyze stored information and generate statistical reports, 4) a display component to provide search interfaces and visualization tools, and 5) specialized roles to administer the system. The goal of the model is to provide a framework for developing large-scale search engines.
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN... (IJCI JOURNAL)
There has always been a struggle for programmers to identify errors while executing a program, be it a syntactical or a logical error. This struggle has led to research in the identification of syntactical and logical errors. This paper surveys research works that can be used to identify errors, and it proposes a new model based on machine learning and data mining which can detect logical and syntactical errors, correcting them or providing suggestions. The proposed work is based on the use of hashtags to identify each correct program uniquely, which in turn can be compared with a logically incorrect program in order to identify errors.
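The hashtag idea can be pictured as fingerprinting a normalized form of each known-correct program and matching submissions against those fingerprints. The sketch below is an assumption about the mechanism, using SHA-256 and whitespace normalization as stand-ins for whatever the model actually uses.

# Fingerprint a normalized form of each known-correct program, then check
# whether a submission matches any of them exactly.
import hashlib
import re

def fingerprint(source: str) -> str:
    normalized = re.sub(r"\s+", " ", source).strip()  # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()

correct = {fingerprint("int main() { return 0; }"): "empty-main"}

submission = "int main()  {\n    return 0;\n}"
tag = fingerprint(submission)
print(correct.get(tag, "no exact match - run deeper comparison"))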
This document provides background information on web analytics and summarizes the materials used in a study comparing hosted and server-side web analytics solutions. The study examines Google Analytics, a hosted solution, and AWStats Analytics, a server-side solution, to analyze metrics for the metkaweb.fi website over three months. Key metrics like unique visits, page views, and time on site are compared between the two tools to evaluate their results and insights. The case study aims to identify best practices for selecting analytics tools based on target metrics and capabilities.
This document summarizes research on automatically classifying and extracting non-functional requirements (NFRs) from text files using supervised machine learning. The researchers created a dataset of NFR keywords by analyzing NFR catalogs identified through a systematic mapping study. The keywords were categorized into security, performance, and usability. They then tested a supervised learning approach on an existing dataset containing 625 software specifications. The approach achieved accuracy rates between 85-98% for classifying NFRs into the security, performance, and usability categories. The research thus provides a way to generate labeled datasets for training machine learning models to automatically classify NFRs mentioned in text documents.
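The keyword-based labeling step that bootstraps the training data might look like the following; the keyword lists here are invented stand-ins for the catalog-derived dataset the researchers built.

# Label requirement sentences by keyword hits per NFR category.
KEYWORDS = {
    "security":    {"encrypt", "authentication", "password", "access"},
    "performance": {"latency", "throughput", "response", "load"},
    "usability":   {"intuitive", "learnable", "accessible", "interface"},
}

def label_nfr(sentence: str) -> str:
    words = set(sentence.lower().split())
    hits = {cat: len(words & kws) for cat, kws in KEYWORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "unlabeled"

print(label_nfr("The system shall encrypt all stored passwords"))  # security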
Rainfall intensity duration frequency curve statistical analysis and modeling... (bijceesjournal)
Using 41 years of data from Patna, India, the study's goal is to analyze the trends of how often it rains on a weekly, seasonal, and annual basis (1981−2020). First, the historical rainfall data set for Patna, India, over this 41-year period was evaluated for its quality, utilizing the intensity-duration-frequency (IDF) curve and relationships obtained by statistically analyzing rainfall. Changes in the hydrologic cycle as a result of increased greenhouse gas emissions are expected to induce variations in the intensity, length, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as the log-normal, normal, and Gumbel (EV-I) distributions are used. Distributions were created with durations of 1, 2, 3, 6, and 24 h and return periods of 2, 5, 10, 25, and 100 years. Mathematical correlations between rainfall and recurrence interval were also discovered.
Findings: The Gumbel approach produced the highest intensity values, whereas the other approaches produced values that were close to each other. The data indicates that 461.9 mm of rain fell during the monsoon season’s 301st week. However, it was found that the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that the yearly rainfall averaged 1171.1 mm. Using Weibull’s method, the study was subsequently expanded to examine rainfall distribution at different recurrence intervals of 2, 5, 10, and 25 years. Mathematical correlations between rainfall and recurrence interval were also developed. Further regression analysis revealed that short wave irrigation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
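For a concrete sense of the Gumbel (EV-I) analysis behind such IDF curves, the method-of-moments frequency factor gives the design rainfall for return period T as x_T = mean + K_T * std. The statistics below are illustrative, not the Patna values.

# Worked example of Gumbel (EV-I) frequency analysis with invented numbers.
import math

def gumbel_factor(T: float) -> float:
    """Method-of-moments Gumbel frequency factor K_T (T > 1 years)."""
    return -(math.sqrt(6) / math.pi) * (0.5772 + math.log(math.log(T / (T - 1))))

mean_annual_max = 85.0  # mm, hypothetical mean of annual-maximum rainfall
std_annual_max = 22.0   # mm, hypothetical standard deviation

for T in (2, 5, 10, 25, 100):
    x_t = mean_annual_max + gumbel_factor(T) * std_annual_max
    print(f"T = {T:3d} yr -> design rainfall = {x_t:6.1f} mm")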
TOWARDS CROSS-BROWSER INCOMPATIBILITIES DETECTION: A SYSTEMATIC LITERATURE REVIEW
International Journal of Software Engineering & Applications (IJSEA), Vol.10, No.6, November 2019
DOI: 10.5121/ijsea.2019.10602
TOWARDS CROSS-BROWSER INCOMPATIBILITIES DETECTION: A SYSTEMATIC LITERATURE REVIEW
Willian Massami Watanabe, Fagner Christian Paes and Daiany Silva
Universidade Tecnológica Federal do Paraná, Brazil
ABSTRACT
The term Cross Browser Incompatibilities (XBI) stands for compatibility issues which can be observed while rendering a web application in different browsers, such as Microsoft Edge, Mozilla Firefox, and Google Chrome, among others. In order to detect Cross-Browser Incompatibilities (XBIs), web developers rely on manual inspection of every web page of their applications, rendering them in various configuration environments (considering multiple OS platforms and browser versions). These manual inspections are error-prone and increase the cost of the Web engineering process. This paper has the goal of identifying techniques for automatic XBI detection, conducting a Systematic Literature Review (SLR) focused on studies which report automatic XBI detection strategies. The selected studies were compared according to the different technological solutions they implemented and were used to elaborate an XBI detection framework which can organize and classify the state-of-the-art techniques and may be used to guide future research activities on this topic.
1. INTRODUCTION
Web applications are built on top of a client-server architecture, in which the application is partially executed on the server side through a variety of programming languages, while other components are executed on the client side and can be rendered in different web browsers, such as Internet Explorer, Microsoft Edge, Mozilla Firefox, Opera, Google Chrome, and others [1]. Thus, in order to execute the client-side components, users can choose from a broad variety of distinct browser implementations. While this flexibility works toward the portability of web applications, it also increases the cost and effort spent in the life cycle of software development.

The consistent rendering of a web application in different browsers is an essential attribute for high-profile websites [2]. Every element of a web application should be correctly rendered and present the same behavior, regardless of the user's web browser implementation, version, or OS [3]. In this context, recent studies [1, 4] have referred to the incompatibilities observed in web applications when rendered by different browsers as XBIs (Cross-Browser Issues/Incompatibilities).
To detect XBIs, developers must load and render web applications in multiple browsers, then manually inspect their behavior. Currently, there are commercial tools which can be used to alleviate the costs of this manual ad-hoc inspection activity, such as Microsoft Expression Web and Adobe Lab. These tools automatically generate screenshots of a web application as rendered by multiple browsers. However, even though they make the process of identifying XBIs easier, developers still have to rely on manual inspection of the screenshots.
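To make the capture step these tools automate concrete, the following minimal sketch (ours, not code from any of the tools above) uses the Selenium WebDriver API, which several of the reviewed approaches also build on, to render the same page in two browsers and save one screenshot per browser; the URL and file names are placeholders.

from selenium import webdriver

URL = "https://example.com"  # placeholder page under test

for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()  # assumes the browsers and their drivers are installed
    try:
        driver.get(URL)
        driver.save_screenshot(driver.name + ".png")  # e.g. firefox.png, chrome.png
    finally:
        driver.quit()

A developer would still compare the resulting images by eye; the approaches reviewed in the remainder of this paper aim to automate exactly this comparison.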
According to Choudhary et al., XBIs can be classified into two groups [1]:

• Layout Issues: caused by differences in how web browsers initially render a webpage; this type of XBI is immediately visible to the user, appearing as different positioning, size, visibility or general appearance of elements of a webpage; and
• Functionality Issues: associated with the functionality/behavior of a web application and the way it is executed in different browsers.
The state-of-the-art research in this topic reports on various experimental tools to automatically detect XBIs, such as WebDiff [1], CrossCheck [4], X-Pert [2], WebMate [3] and Browserbite [5]. These tools employ different techniques to implement an XBI detection solution, and were separately developed with the objective of reducing false positives (XBIs not actually observed in the web application) and increasing the number of detectable incompatibilities (associated with Layout and Functionality Issues).
This paper presents an SLR (Systematic Literature Review) [6] to identify different approaches to automatic XBI detection. The results show the evolution of these approaches and classify the studies' different techniques, comparing them and synthesizing contributions in this topic to hopefully guide future efforts in automatic XBI detection. The contributions associated with this study are: (1) the elaboration of an XBI detection framework for organizing and classifying the technical solutions used in the studies; (2) a comparison between the different technical solutions employed to alleviate XBI detection costs throughout the Software Engineering process; and (3) an SLR that can be referred to again in the future so as to evaluate the evolution of research in the area of automatic XBI detection.
This paper is structured as follows: Section 2 presents the process of the Systematic Literature Review and its protocol; Section 3 presents the results of the SLR, with the classification of the automatic XBI detection approaches and a description of the proposed framework; Section 4 presents related work and a discussion of this paper's contributions; finally, Section 5 presents final remarks and suggests guidelines for future work.
2. SYSTEMATIC LITERATURE REVIEW
An SLR, also known as a Systematic Review, is a secondary study which aims to identify, access, evaluate, and interpret all of the relevant studies in an attempt to answer certain Research Questions (RQs), shedding light on topics or phenomena of interest [6]. SLRs are focused on gathering and synthesizing evidence regarding a pre-established research topic; their results are supposed to uncover the most current, state-of-the-art research on said topic.
According to Kitchenham et al., the SLR process is three-fold [6]:
• Review planning: clarification of key factors. Research questions and a detailed review
protocol are produced. The review protocol should establish the goal, search method, search
strings, and inclusion/exclusion criteria;
• Review conduction: search of the selected sources, selection of studies based on the inclusion/exclusion criteria, and data extraction;
• Review documentation: interpretation and reporting of results, in order to present them to
interested parties. This can involve descriptive statistics, demographic information, visual
information, and several discussions.
This SLR study identified automated approaches to XBI detection. We hereby present a compilation of our SLR results.
2.1. Review Protocol
This section presents the Review protocol which was used to conduct the SLR. The protocol
attributes are presented next:
Objective: This study's goal was to identify approaches for the automated detection of XBIs, highlighting their strengths and weaknesses. Additionally, our results support a broad analysis of the evolution of automated XBI detection approaches, providing guidelines for future research on the topic.
Research Question (RQ): What techniques have been employed by XBI detection approaches
and how do they compare to one another?
Search Method: The search method adopted by this study consisted of using a generic search string for searching different indexed research databases available online. This generic search string was defined as follows:

• Keywords: (("cross-browser" OR "cross browser") AND (compatibility OR incompatibility OR testing OR test))
We performed searches on three different scientific databases: ACM Digital Library, IEEE Xplore, and SpringerLink. The search string was calibrated to each library according to the particular rules of its search engine.
We also employed a forward snowballing technique, which consists of searching for articles that cite the review's previously selected articles [7]. This was done through Google Scholar and IEEE Xplore.
Inclusion/Exclusion criteria: Aiming at answering our RQ and based on the scope of our research, we formulated the following inclusion/exclusion criteria:

• Inclusion criterion 1 (IC1): studies addressing the problem of automated detection of XBIs which employed Generation 3 - Crawl and capture approaches [2] were included in the SLR.

• Exclusion criterion 1 (EC1): studies addressing approaches for preventing XBIs during the coding phase of the development process were excluded;

• Exclusion criterion 2 (EC2): repeated entries (seen in more than one database) were excluded; and

• Exclusion criterion 3 (EC3): studies which did not mention XBI detection were excluded.
2.2. Conducting The Review
The review process is shown in Figure 1.
Figure 1. SLR process with the initial selection and forward snowballing results.
Our generic search string was adapted to execute properly in the search engine of each selected database. We conducted a two-step search: our initial search was executed in September 2015 and updated in August 2016. This returned a total of 953 studies (167 from ACM, 183 from IEEE Xplore, and 603 from Springer). After applying the inclusion and exclusion criteria to the studies, 16 studies were selected.
After the initial selection, the forward snowballing technique was performed. The search for studies which referenced the previously selected 16 studies was conducted in August 2017, and returned 482 studies (416 from Google Scholar and 66 from IEEE Xplore). These 482 studies were subjected to the same inclusion/exclusion criteria; this resulted in the selection of 7 more studies, for a total of 23.
The first studies selected in the SLR were published in 2010, while the year with the highest number of publications was 2014. Only 4 studies from 2017 were selected. However, it is unlikely that all publications from 2017 were included in the search, considering the last search update was performed in August 2017.
The next section presents the results of the SLR, with a detailed analysis of the identified automated approaches.
3. RESULTS
The 23 articles selected in the SLR reported 8 different tools for detecting XBIs. There were multiple articles which reported different experiments and evaluations conducted using the same tool, but there were also studies that, although describing technological solutions for the identification of XBIs, did not name their approach.

The technological approaches identified in this review were: Structural DOM analysis, Screenshot comparison, Isomorphism in graphs, Machine learning, Relative layout, Adaptive Random Tests, and Record'n'play. Figure 2 illustrates the number of articles which used each technological approach.
Figure 2. Number of articles which implement each technological approach to detect XBIs.
3.1. XBI Detection Framework
Based on the XBI detection approaches identified in this paper, we propose a general framework for classifying and organizing XBI detection approaches in the state-of-the-art and in future research in this area. Our proposed framework is illustrated in Figure 3. Design decisions made for modeling the framework were based on the analysis of the XBI detection approaches proposed in the selected papers. The components of the framework were categorized according to two aspects: the type of technical solution used and the XBI detection task.
Figure 3. The XBI detection framework, composed of webpage segmentation, content comparison and behavior analysis techniques.

In the framework, we categorize XBI detection approaches into two main groups: DOM-based approaches and Computer-vision approaches. The DOM-based group involves detection mechanisms which use the DOM structure of a web application for identifying XBIs. These approaches are mainly based on the Selenium API for navigating the DOM tree, collecting DOM attributes such as the position and size of elements, and rendering the web application in multiple browsers. Computer-vision approaches are based mainly on image processing techniques applied to the complete screenshot or to screenshots of a specific region or element of the web application.
Many of the reviewed studies relied on DOM-based approaches (such as [8], which reported the use of DOM elements' attributes for detecting XBIs) or Computer-vision approaches (such as [5, 9–11], which developed image processing algorithms for detecting XBIs). However, there were also hybrid approaches, which combined DOM-based and Computer-vision techniques (such as [4, 12], which used DOM attributes alongside screenshot comparison for detecting XBIs).
In the studies, we observed the use of different approaches for detecting Layout and Functionality incompatibilities. In the framework, we propose decomposing the XBI detection process into three separate tasks: Webpage segmentation, which breaks the web application into smaller units of content before analyzing them for XBI detection; Content comparison, which employs techniques for comparing units of content as rendered by different browsers, detecting Layout incompatibilities; and Behavior analysis, which employs techniques for searching for Functional XBIs.
The Webpage segmentation task involved two main approaches, as observed in the reviewed studies. The first one was based on segmenting web applications according to their DOM structure, having as units of content each constituent DOM element. The second approach was based on segmentation using Regions-Of-Interest (ROI) identified from the complete screenshot of the web application. It is worth noting that the use of a Webpage segmentation technique also implies the use of a segment matching technique for comparing a segment as rendered by different browsers. Both approaches present specific segment matching strategies, based on XPath, DOM attributes, position, size, and other characteristics extracted from the DOM structure or the screenshot image. There were also studies which did not report using any techniques associated with the Webpage segmentation task; in these cases, the XBI detection mechanism was based on the complete screenshot of the web application, such as [10], or the segmentation task was not the focus of the study [13, 14]. DOM-based webpage segmentation also allows the use of the crawling approach proposed in [15, 16]. In this approach, the crawling process starts from the leaf nodes of the DOM structure, hiding elements after collecting their data. Paes (2017) suggests this approach would increase the precision of XBI detection. Furthermore, DOM-based Webpage segmentation is also present in other works associated with the automatic correction of XBIs, using search-based approaches [17].
The Content comparison task has the goal of identifying mainly Layout XBIs by comparing two or more corresponding elements/regions as rendered by different browsers. Our framework presents the techniques and characteristics which could be used for comparing these elements/regions:

• DOM-based
DOM attributes, position, size, and text content, extracted from the DOM structure [1, 18].
Relative position of parent and sibling nodes; building alignment graphs and comparing the same graph rendered in different browsers [2, 12, 15, 16, 19, 20].

• Computer-vision
Image features extracted from screenshots of the segments which compose the webpage [5, 9, 11].
Image Diff metric based on counting the number of dissimilar pixels between segments [15, 16].
EMD and χ2 distance of the histograms [4].
pHash and Perceptual Image Diff algorithms [10].
ART for optimizing the image comparison performance [13].
IPH and SCS for using specific segment screenshot features for identifying incompatibilities [14].
In regards to the Content comparison task, several studies reported using many of the previously defined techniques concurrently, in order to improve the effectiveness of their approaches. In these studies, Supervised Learning classification algorithms were employed to merge the results of the different techniques into a single output and classify the segments as containing XBIs or not.
The last part of the proposed framework is the Behavior analysis task, which maps components for detecting Functional XBIs. All approaches identified by the review to execute this task propose building a navigation/interaction model of the webpage, similar to a finite state machine graph [3, 4, 8, 19–23], then comparing these models as rendered by different browsers, using a graph isomorphism algorithm strategy. According to our framework, four technical approaches were used for generating these models: Crawljax [4, 8], WebMate [3, 22, 23], Record'n'play [19, 20], and WebSPHINX [24].
The proposed framework covers all technological solutions employed in state-of-the-art XBI detection approaches. It could be used to organize XBI detection solutions, experiment with multiple combinations of techniques, and guide future research in the area of XBI detection.
3.2. Technical Solutions For XBI Detection
The technical solutions hereby presented are chronologically ordered according to when they were first proposed in the literature. A low-level individual analysis of these technical solutions provides insights into specific configuration scenarios and the types of XBIs which were identified by each approach, highlighting the advantages of using each technique separately, in an attempt to answer the RQ.
3.2.1. Structural DOM Analysis
The DOM (Document Object Model) defines a logical structure for HTML and XML documents, in order to allow programs and scripts to dynamically access and update their content. The DOM represents HTML and XML documents as tree structures, and browsers frequently provide an API (Application Programming Interface) to be used by web applications. Many studies selected in the SLR process used the DOM structure of a webpage to detect XBIs. These studies compared attributes and elements of DOM representations as rendered by different browsers.

Initially, multiple DOM representations of a single webpage are captured. Then, these DOM representations are compared, in order to identify attributes and elements which do not present the same value [1–4, 8, 12, 18, 21–23, 25, 26]. In these approaches, if an element presents incompatible attribute values in different browsers, it can be marked as an XBI and later reported to developers. Most studies used a selected group of attribute verifications in this step.
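The following minimal sketch illustrates this attribute-verification step, assuming Selenium; it naively pairs elements by document order (the matching techniques discussed next address this simplification), and the two-pixel tolerance is an illustrative choice rather than a value taken from the studies.

from selenium.webdriver.common.by import By

def collect_geometry(driver, url):
    # One (tag, x, y, width, height) tuple per element, in document order.
    driver.get(url)
    return [(el.tag_name, el.location["x"], el.location["y"],
             el.size["width"], el.size["height"])
            for el in driver.find_elements(By.XPATH, "//*")]

def candidate_xbis(geometry_a, geometry_b, tolerance=2):
    # Flag element pairs whose position or size differs beyond the tolerance.
    return [(a, b) for a, b in zip(geometry_a, geometry_b)
            if a[0] == b[0]
            and any(abs(p - q) > tolerance for p, q in zip(a[1:], b[1:]))]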
The first article to use this approach was [1], in the WebDiff tool. It was further developed in other articles, such as [2, 4, 8]. As the studies evolved through time, it was observed that
simply comparing the attributes of the DOM structure as rendered by different browsers leads to a high number of false positives, i.e., XBIs detected by the approaches but not observable in the web application, decreasing the precision of the approach. Thus, studies using this technique advanced, employing other strategies such as attribute value normalization [4] and machine learning [4, 15, 16], among other techniques which are described in the next sections of this paper.
Still with respect to structural DOM analysis, the studies [1, 2, 4] detailed solutions for matching elements from different DOM representations. DOM representations rendered by multiple browsers might present structural differences which make it difficult to compare attributes and elements, since elements can be misplaced depending on the browser in which they were rendered. Therefore, these studies presented a technique based on using attributes such as tagName, id, xpath and a hash mapping of the inner content (innerHTML), in order to match similar elements from different DOM representations.
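The sketch below captures this matching idea under our own simplifications: each element is reduced to a key built from its tag name, id, XPath and a hash of its inner content, and elements sharing a key across two DOM snapshots are paired; the dictionary shape of the input is hypothetical.

import hashlib

def match_key(element):
    # element: {"tagName": ..., "id": ..., "xpath": ..., "innerHTML": ...}
    content_hash = hashlib.md5(element["innerHTML"].encode("utf-8")).hexdigest()
    return (element["tagName"], element.get("id", ""),
            element["xpath"], content_hash)

def match_elements(dom_a, dom_b):
    # Pair elements from the two DOM representations that share the same key.
    index_b = {match_key(e): e for e in dom_b}
    return [(e, index_b[match_key(e)])
            for e in dom_a if match_key(e) in index_b]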
Paes (2017) also proposed a specific DOM crawling technique, in which the DOM is crawled starting from the leaf nodes and then moving upward in the DOM tree. After collecting features for the analysis of XBI presence in a target DOM element (such as position and screenshot), that DOM element is hidden in the web application, so that the analysis of other DOM elements is not impacted by the visual characteristics of the previously collected ones [15, 16].
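A rough sketch of this crawling idea, assuming Selenium, is given below; it shows only the leaf level of the upward traversal, and hiding elements via the style attribute is our reading of the technique rather than code from [15, 16].

from selenium.webdriver.common.by import By

def crawl_leaves(driver):
    # Leaf nodes: elements with no child elements.
    for el in driver.find_elements(By.XPATH, "//*[not(*)]"):
        # Collect the features used for XBI analysis before hiding the node.
        yield el.location, el.size, el.screenshot_as_png
        # Hide the element so ancestors are later analyzed without it.
        driver.execute_script("arguments[0].style.visibility = 'hidden';", el)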
3.2.2. Screenshot Comparison
DOM structure information can be used to assist the process of identifying XBIs. However, the DOM does not present information about each element's exact appearance on the screen [1]. Thus, in regards to the identification of Layout XBIs, many studies reported the use of screenshot comparison techniques [1, 2, 4, 5, 9, 10, 12, 13, 18, 25].
The primary method for comparing images is directly matching histograms. However, a rigid comparison of histograms with no consideration for global features (such as symmetry, kurtosis and number of peaks) might raise many false positives [9]. In this context, the following image comparison techniques were used in the studies: Earth Mover's Distance (EMD) [1, 18, 25], χ2 distance [2, 4, 12], image segmentation with correlation-based image comparison [5, 9], pHash and Perceptual Image Diff [10], Iterative Perceptual Hash, and Structure-Color-Saliency [14].
WebDiff employed EMD to individually compare screenshots taken in different browsers for each element of the DOM structure of a webpage [1, 18, 25]. In the WebDiff tool, there was no fixed threshold to determine whether two element screenshots were different or not [1]. Instead, the threshold was set relative to the size and the color density of each screenshot.
The χ2 distance image comparison technique is adapted from the statistical χ2 hypothesis test, which is used to compare the dispersion between two nominal variables, evaluating the correlation between qualitative variables. The studies [2, 4, 12] used the χ2 distance method to compare screenshots of elements of the webpage. In [2, 12], the comparison was carried out between all elements of a webpage. Then, in a later study [4], the χ2 distance method was used to compare only the leaf nodes of the DOM tree structure. This strategy was used to reduce XBI false positives. In [2, 12], the authors also reported that the χ2 distance is computationally less expensive and more precise than EMD.
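A minimal sketch of the χ2 distance over screenshot histograms follows, assuming NumPy and Pillow; the grayscale conversion, binning and the small constant guarding against empty bins are our choices, and the studies' exact formulation may differ.

import numpy as np
from PIL import Image

def chi_square_distance(screenshot_a, screenshot_b, bins=64):
    histograms = []
    for path in (screenshot_a, screenshot_b):
        # Grayscale pixel-intensity histogram, normalized to a distribution.
        pixels = np.asarray(Image.open(path).convert("L")).ravel()
        hist, _ = np.histogram(pixels, bins=bins, range=(0, 255), density=True)
        histograms.append(hist)
    a, b = histograms
    # Chi-square distance; larger values indicate more dissimilar screenshots.
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-10))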
The studies [5, 9], which presented the Browserbite tool, reported segmenting screenshots of web
pages as rendered by different browsers into small rectangles, each identified as a Region Of
Interest (ROI) based on the image’s borders and color changes. Screenshots of the ROI as captured
by different browsers were then compared. These studies employed a correlation-based image
comparison technique [5].
Sanchez and Aquino Jr. used two approaches for screenshot comparison: pHash and Perceptual Image Diff [10]. The pHash algorithm is an implementation of the Perceptual Hash concept, defined as a fingerprint of a multimedia file derived from various features of its content. Perceptual Image Diff, on the other hand, is an algorithm to identify every dissimilar pixel between two images. Firstly, pHash was applied to screenshots captured from different browsers, in order to generate a hash code for each image. Subsequently, the hash codes of the images were compared using the Hamming distance algorithm. The result was used alongside the Perceptual Image Diff algorithm to detect XBIs.
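The pHash step can be sketched as follows, assuming the third-party imagehash package rather than the pHash implementation used in [10]; the subtraction operator of imagehash returns the Hamming distance between the two hashes.

import imagehash
from PIL import Image

def phash_distance(screenshot_a, screenshot_b):
    hash_a = imagehash.phash(Image.open(screenshot_a))
    hash_b = imagehash.phash(Image.open(screenshot_b))
    return hash_a - hash_b  # Hamming distance between the perceptual hashes

Segment pairs whose distance exceeds a tuned threshold would then go through the pixel-level Perceptual Image Diff step described in the study.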
In [14], two screenshot comparison algorithms were proposed: IPH (Iterative Perceptual Hash) and SCS (Structure-Color-Saliency). IPH uses global and local content discrepancies to avoid under- or overestimation of changes between screenshots. SCS considers changes in color, structure, saliency and location sensitivity to compose a unique index. These approaches were used to analyze differences between screenshots of parts of the webpage. In the experiment conducted by the authors, IPH outperformed other previously used techniques, such as EMD (CrossCheck [4]) and χ2 (X-Pert [12]).
3.2.3. Graph Isomorphism
The studies [2–4, 8, 12, 21–23, 26] reported the use of different approaches which focused on the identification of Functionality XBIs. These approaches consisted of simulating user interaction scenarios, mainly click events on elements of the webpage, to trigger changes in the DOM representation of the web application in different browsers. These studies mapped the DOM representation and click event simulations into a graph, and then used a graph isomorphism algorithm to identify functionalities which were not available depending on the user's browser.
The first study to use this approach was CrossT [8]. This study extended an Ajax application crawler tool named Crawljax [27]. CrossT employed Crawljax to build the navigation model of the application. Then, the navigation models rendered by different browsers were compared, and differences in these graphs were reported as Functional XBIs, i.e., click functionalities which were observed in one browser and missed in another [8].
After CrossT, other XBI detection approaches implemented similar strategies, such as [2–4, 12, 19–23, 26]. Certain approaches reported using an equivalent strategy for generating the models [2, 4, 12], others reported doing so through events besides click [3, 21–23], and Record'n'play strategies were used for also mapping non-deterministic events [19, 20]. In these approaches, the studies evolved with the goal of increasing the number of DOM structure states analyzed and covered by their XBI detection technique.
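A simplified sketch of this model comparison, assuming NetworkX, is shown below; it represents each browser's navigation model as a directed graph and reports transitions present in only one of them. This presumes state labels are directly comparable across browsers, whereas the studies rely on graph isomorphism checks.

import networkx as nx

def build_model(transitions):
    # transitions: iterable of (state_before, triggering_event, state_after).
    model = nx.DiGraph()
    for src, event, dst in transitions:
        model.add_edge(src, dst, event=event)
    return model

def functional_xbis(model_a, model_b):
    edges_a = {(u, v, d["event"]) for u, v, d in model_a.edges(data=True)}
    edges_b = {(u, v, d["event"]) for u, v, d in model_b.edges(data=True)}
    # Transitions observed in one browser but not in the other.
    return edges_a ^ edges_b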
3.2.4. Machine Learning
Machine learning is an area of Artificial Intelligence which has the objective of developing computational techniques capable of automatically acquiring knowledge on how to perform a specific task without being explicitly programmed to do so. All XBI detection studies which reported the use of machine learning used supervised learning techniques. In supervised learning, concepts are induced into the system from pre-classified examples. The studies employed different characteristics of web applications as features in the learning phase, as well as different machine learning algorithms.
The first studies which used machine learning techniques for XBI detection were [2, 4, 12], with CrossCheck and X-Pert. These studies reported the use of the decision-tree machine learning technique. A decision tree is a simple representation of a classifier used by many machine learning systems, such as C4.5 [28]. These studies used a decision tree to classify every element of the DOM structure as rendered by different browsers and determine whether it was an XBI or not. The decision tree, in these articles, was modeled with the following features: Size difference ratio, the difference in the size of elements as rendered by different browsers; Displacement, the Euclidean distance between the positions of elements as rendered by different browsers; Area, the smaller area of the elements as rendered by different browsers; Leaf Node Text Differences, a boolean value identifying whether there are differences between the texts of elements as rendered by different browsers; and Image distance, the χ2 distance screenshot comparison of elements as rendered by different browsers. A simplified classification model of DOM elements was reported in [15, 16], making use of the following features: displacement, relative position of the element with regard to its parent's position, size differences, and image distance of the screenshots.
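The classification step can be sketched as below, assuming scikit-learn rather than the C4.5 implementation cited above; feature extraction and the labeled training set are outside the fragment, and the feature names simply mirror the list in the text.

from sklearn.tree import DecisionTreeClassifier

FEATURES = ["size_difference_ratio", "displacement", "area",
            "leaf_text_differs", "image_distance"]

def train_xbi_classifier(feature_rows, labels):
    # feature_rows: one row of the five features per matched element pair;
    # labels: 1 if the pair was manually confirmed as an XBI, 0 otherwise.
    classifier = DecisionTreeClassifier()
    return classifier.fit(feature_rows, labels)

Calling train_xbi_classifier(X, y).predict(new_rows) then flags element pairs as XBIs or not.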
After [2, 4, 12], in the Browserbite tool, the authors reported the use of decision trees and neural networks to detect XBIs in web applications [5, 9]. Neural networks are mathematical models inspired by organic neural structures, which acquire knowledge through experience [29]. The validation of Browserbite consisted of an experiment to compare how both machine learning approaches performed when analyzing a group of websites. Moreover, Browserbite uses the concept of regions of interest to identify XBIs [5, 9], differently from other approaches, which employed DOM element comparison. Browserbite's neural network approach was applied to each region of interest as rendered by two browsers using the following features: 10 histogram bins (h0, h1, ..., h9) representing the pixel intensity distribution of the region of interest in the baseline browser; correlation-based image comparison between screenshots of the region of interest as rendered by the two browsers; horizontal and vertical position of the region of interest as rendered by the two browsers; height/width of the region of interest as rendered by the two browsers; a configuration index indicating the browsers which were used to render the regions of interest; and a mismatch density metric, calculated as the ratio between the number of regions of interest which presented differences in the correlation-based image comparison and the total number of regions of interest in the screenshot of the webpage.
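As an illustration only, the sketch below assembles such a per-ROI feature vector and feeds it to a small scikit-learn neural network; Browserbite's actual network architecture and training setup are not described at this level of detail, so every concrete choice here is an assumption.

from sklearn.neural_network import MLPClassifier

def roi_features(hist_bins, correlation, pos_a, pos_b, size_a, size_b,
                 config_index, mismatch_density):
    # hist_bins: the 10 baseline-browser histogram bins (h0..h9);
    # pos_* are (x, y) and size_* are (height, width) per browser.
    return (list(hist_bins) + [correlation]
            + list(pos_a) + list(pos_b) + list(size_a) + list(size_b)
            + [config_index, mismatch_density])

def train_roi_classifier(feature_rows, labels):
    # A small feed-forward network; the layer size is illustrative.
    network = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    return network.fit(feature_rows, labels)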
Decision tree and neural network machine learning techniques were used to reduce the number of false positives. When evaluating CrossCheck and X-Pert, the authors reported that the use of decision trees improved the results of their XBI detection approach. In the evaluation of Browserbite, neural networks performed better in comparison to decision trees [5, 9]. However, it is difficult to establish a comparison between [2, 4, 12] and [5, 9], since these studies used different sets of features and training data.
3.2.5. Relative Layout Comparison
The WebMate tool compared elements considering their relative positions [3, 22, 23]. HTML elements present a hierarchical structure for rendering the layout of a webpage; hence, if a parent element is misaligned, its child elements will also present the same misaligned absolute positioning. Thus, XBI detection approaches using absolute positioning to identify XBIs might report that child elements represent XBIs when only the parent element was actually misaligned.
X-Pert also explored the relative layout position evaluation strategy [2, 12]. The tool modeled the positioning of elements in a webpage as a graph, titled the Alignment Graph. The Alignment Graph represents the relationships between parent, child and sibling nodes, represented as nodes in the graph, and their relative position with regard to one another, represented as transitions between each element in the graph. Two alignment graphs extracted from two different browsers can then be
checked for equivalence, and any difference observed between the graphs is reported as an XBI.
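The sketch below captures the spirit of the Alignment Graph comparison under our own simplifications: qualitative relations between element bounding boxes stand in for the graph's transitions, and relations that differ across browsers are reported as XBIs.

def relation(box_a, box_b):
    # Qualitative relative position between two bounding boxes (x, y, w, h).
    relations = set()
    if box_a[1] + box_a[3] <= box_b[1]:
        relations.add("above")
    if box_a[0] + box_a[2] <= box_b[0]:
        relations.add("left-of")
    return frozenset(relations)

def alignment_edges(boxes):
    # boxes: element identifier -> bounding box, for one browser's rendering.
    return {(a, b): relation(boxes[a], boxes[b])
            for a in boxes for b in boxes if a != b}

def layout_xbis(edges_a, edges_b):
    # Pairs whose relative position differs between the two renderings.
    return [pair for pair, rel in edges_a.items() if rel != edges_b.get(pair)]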
3.2.6. Adaptive Random Testing (ART)
Adaptive Random Testing (ART) is an extension of the Random Testing technique. ART analyses the behavior of test results observed in software and generates test inputs which are more likely to produce failures in the software [30]. Generally, similar test cases tend to identify similar failures; hence, ART is a technique to identify test cases which are more likely to improve the coverage of a software's input domain.
Selay et al. [13] used ART to compare images for equivalence, employing a screenshot comparison approach. The ART approach used in [13] implemented the Fixed Size Candidate Set (FSCS) algorithm to randomly choose test cases. This algorithm makes use of a test case similarity metric to guarantee that it generates different test cases to explore different parts of the software's input domain [30]. Selay et al. ran an experiment with the goal of improving performance in terms of the time required to identify incompatibilities while comparing two distinct screenshots [13].
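A minimal FSCS sketch is given below, with a hypothetical numeric input domain and Euclidean distance as the similarity metric; [13] applies the idea to image comparison rather than to points in a plane.

import random

def fscs_pick(executed, candidate_count=10, dim=2):
    # Sample a fixed-size candidate set from the input domain.
    candidates = [[random.random() for _ in range(dim)]
                  for _ in range(candidate_count)]

    def distance_to_executed(candidate):
        # Distance from the candidate to its nearest already-executed test.
        if not executed:
            return float("inf")
        return min(sum((a - b) ** 2 for a, b in zip(candidate, e)) ** 0.5
                   for e in executed)

    # Choose the candidate farthest from every executed test case.
    return max(candidates, key=distance_to_executed)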
3.2.7. Record’n’play
In order to detect Functional XBIs, the approaches built a navigation graph for each browser and carried out further equivalence analysis of these graphs. Initially, these graphs were built from data generated by AJAX crawling tools, such as Crawljax and WebMate, which simulate user interaction scenarios. However, these approaches do not deal with non-deterministic events in the web application, or dynamic content changes generated by the server.
In [19, 20], a proxy-based approach is proposed, through the X-Check tool, for recording a web application's dispatched events and replaying the exact same events when the application is rendered in different browsers. Besides increasing the precision of the XBI detection approaches, this allowed the analysis of specific user interaction scenarios, considering events not previously programmed in other crawlers, possibly enhancing the coverage of web applications. In [19, 20], X-Check is compared to the X-Pert tool, with X-Check showing better effectiveness and detecting a higher number of XBIs than X-Pert.
4. RELATED WORK AND DISCUSSION
There are other secondary studies in the area of Web application testing [31–33]. Garousi et al. (2013) conducted a Systematic Mapping study [31] to identify and classify research effort in this area. Their study reported 79 papers from 2000 to 2011, and their classification considered aspects such as testing activity, testing location, generated artifacts, used techniques, static/dynamic webpage testing, synchronous/asynchronous behavior, client-tier technologies, and server-tier technologies, among other data.
Li et al. (2014) presented a survey reporting the evolution of techniques throughout 20 years of Web application testing [32]. Their study classified articles according to motivation (interoperability, security, dynamics, among others), graph and model-based techniques, mutation testing, search-based Software Engineering testing, scanning and crawling techniques, random testing, fuzz testing, concolic testing and user-based testing.
Dogan et al. (2014) conducted an SLR on Web application testing [33]. The authors selected 95 papers published from 2000 to 2013. Their SLR was conducted with the goal of listing tools, identifying the test/fault models used in the studies, and mapping the empirical evaluation methods and their relevance for the industry.
This paper presented an SLR on strategies for automatically detecting XBIs in web applications. In the SLR, 23 studies were selected and 7 XBI detection strategies were identified in these studies: Structural DOM analysis, Screenshot comparison, Graph isomorphism, Machine learning, Relative layout comparison, Adaptive random testing and Record'n'play. We described these strategies, compared how each study implemented them and synthesized their results.
We also proposed a framework which separates XBI detection strategies into three main steps: Webpage segmentation, Content comparison and Behavior analysis. The 23 selected studies are situated in distinct parts of this framework, which could be used to organize these studies and future research in the area. The framework classifies XBI detection approaches as belonging to one of two groups: DOM-based or Computer-vision. The framework presents the components of each group and how these components could be used together to form an XBI detection approach. These results could also be used to guide future research in the area, and for comparing and implementing new approaches.
XBI detection strategies evolved through time, targeting both Layout and Functionality XBIs. Multiple approaches have been reported to improve the precision of XBI detection and the range of XBIs identified. Web applications consist of multiple web pages and complex built-in functionalities. The studies aimed to improve precision in XBI identification and help decrease software development costs. Most validation procedures conducted in the studies consisted of applying an XBI detection strategy to a group of websites, then comparing the effectiveness of the approach with the results obtained through manual inspection or with other approaches.
Even though each experiment measured the effectiveness of the proposed XBI detection strategies, their results are not comparable, since they were obtained with different websites. The number of articles using each browser in the experiments was: Mozilla Firefox (10 articles), Google Chrome (7 articles), Microsoft Internet Explorer (10 articles), and Apple Safari (4 articles). It is worth noting that in [11], the experiment was conducted with 15 distinct configurations of these browsers, obtained by changing the OS and browser versions.
Even though browser engines have also evolved, research in this area still poses several challenges, such as: detecting cross-browser incompatibilities between different platforms (cross-platform incompatibilities) [16, 34], given the increased popularity and variety of mobile devices and alternative operating systems; responsive design failure detection [35], for detecting inconsistencies in web applications when rendered in different viewport width scenarios; and automatic correction of cross-browser incompatibilities using Search-Based Software Engineering [17].
The SLR process was conducted in order to identify all automatic XBI detection approaches used in state-of-the-art research. However, there might be studies which report the use of XBI detection techniques but were not included. In order to alleviate this concern, the SLR process was conducted by two researchers; all studies and the applicability of the inclusion/exclusion/quality criteria were discussed and reviewed by them. Furthermore, our SLR process used database searches in ACM, IEEE Xplore and SpringerLink, and forward snowballing in IEEE Xplore and Google Scholar, to search for articles related to XBI detection techniques. The results of this SLR could be enhanced if other databases were included and backward snowballing was also employed. However, the inclusion of other studies in the SLR would mean the addition of approaches related to the ones already described in this SLR or, at most, the addition of a new approach category, not invalidating any of the previously reported results.
5. FINAL REMARKS
Cross Browser Incompatibilities (XBIs) are frequent in web projects and represent a serious problem for web application developers. This article conducted a Systematic Literature Review with the goal of identifying the different approaches which have been used to automatically detect XBIs. The results showed that many studies used Structural DOM Analysis and Screenshot Comparison techniques for the automatic detection of XBIs. It was observed that most studies evolved with the goal of reducing the number of reported false positives, using Machine learning, Graph isomorphism, Relative layout comparison, ART and Record'n'play strategies. The techniques were also organized into an XBI detection framework composed of Webpage segmentation, Content comparison and Behavior analysis. The classification scheme of the different technological approaches, as well as the comparison between the XBI detection strategies and the proposed framework, can be used to guide the elaboration of new strategies and tools.
Future studies should discuss the implementation of automatic XBI detection for different platforms (on desktop and mobile devices), run experiments to compare the effectiveness of different XBI detection approaches, and investigate techniques able to assist developers in the diagnosis and correction of incompatibilities.
ACKNOWLEDGMENTS
The authors would like to thank UTFPR, CAPES and CNPq for supporting this work.
REFERENCES
[1] S. Choudhary, H. Versee, and A. Orso, “Webdiff: Automated identification of cross-browser issues
in web applications,” in Proc. of the 2010 IEEE International Conference on Software Maintenance
(ICSM 2010), Sept 2010, pp. 1–10.
[2] S. Choudhary, M. Prasad, and A. Orso, "X-pert: Accurate identification of cross-browser issues in web applications," in Proc. of the 35th International Conference on Software Engineering (ICSE 2013), May 2013, pp. 702–711.
[3] V. Dallmeier, M. Burger, T. Orth, and A. Zeller, "Webmate: Generating test cases for web 2.0," in Software Quality. Increasing Value in Software and Systems Development, D. Winkler, S. Biffl, and J. Bergsmann, Eds., vol. 133. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 55–69.
[4] S. Choudhary, M. Prasad, and A. Orso, "Crosscheck: Combining crawling and differencing to better detect cross-browser incompatibilities in web applications," in Proc. of the 5th International Conference on Software Testing, Verification and Validation (ICST 2012), April 2012, pp. 171–180.
[5] N. Semenenko, M. Dumas, and T. Saar, “Browserbite: Accurate cross-browser testing via machine
learning over image features,” in Proc. of the 29th IEEE International Conference on Software
Maintenance (ICSM 2013), Sept 2013, pp. 528–531.
[6] B. Kitchenham, “Procedures for performing systematic reviews,” Keele University, Keele, UK,
Tech. Rep., 2004.
[7] D. Badampudi, C. Wohlin, and K. Petersen, "Experiences from using snowballing and database searches in systematic literature studies," in Proc. of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE 2015). New York, NY, USA: ACM, 2015, pp. 17:1–17:10. [Online]. Available: http://doi.acm.org/10.1145/2745802.2745818
[8] A. Mesbah and M. R. Prasad, “Automated cross-browser compatibility testing,” in
Proceedings of the 33rd International Conference on Software Engineering (ICSE
2011). New York, NY, USA: ACM, 2011, pp. 561–570. [Online]. Available:
http://doi.acm.org/10.1145/1985793.1985870
[9] T. Saar, M. Dumas, M. Kaljuve, and N. Semenenko, "Cross-browser testing in browserbite," in Web Engineering, S. Casteleyn, G. Rossi, and M. Winckler, Eds. Cham: Springer International Publishing, 2014, pp. 503–506.
[10] L. Sanchez and P. T. A. Jr., “Automatic deformations detection in internet interfaces: Addii,” in
Proceedings of the 17th International Conference on Human-Computer Interaction. Part III:
Users and Contexts (HCI International 2015), ser. Lecture Notes in Computer Science, vol. 9171.
Springer International Publishing, 2015, pp. 43–53.
[11] T. Saar, M. Dumas, M. Kaljuve, and N. Semenenko, “Browserbite: Cross-browser testing via
image processing,” Softw. Pract. Exper., vol. 46, no. 11, pp. 1459–1477, Nov. 2016. [Online].
Available: https://doi.org/10.1002/spe.2387
[12] S. R. Choudhary, M. R. Prasad, and A. Orso, "X-pert: A web application testing tool for cross-browser inconsistency detection," in Proc. of the 2014 International Symposium on Software Testing and Analysis (ISSTA 2014). New York, NY, USA: ACM, 2014, pp. 417–420. [Online]. Available: http://doi.acm.org/10.1145/2610384.2628057
[13] E. Selay, Z. Q. Zhou, and J. Zou, "Adaptive random testing for image comparison in regression web testing," in Proc. of the 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2014), Nov 2014, pp. 1–7.
[14] P. Lu, W. Fan, J. Sun, H. Tanaka, and S. Naoi, “Webpage cross-browser test from image level,” in
2017 IEEE International Conference on Multimedia and Expo (ICME), July 2017, pp. 349–354.
[15] F. C. Paes, “Automatic detection of cross-browser incompatibilities using machine learning and
screenshot similarity: Student research abstract,” in Proceedings of the Symposium on Applied
Computing, ser. SAC ’17. New York, NY, USA: ACM, 2017, pp. 697–698. [Online]. Available:
http://doi.acm.org/10.1145/3019612.3019924
[16] F. C. Paes and W. M. Watanabe, “Layout cross-browser incompatibility detection using machine
learning and dom segmentation,” in Proceedings of the 33rd Annual ACM Symposium on Applied
Computing, ser. SAC ’18. New York, NY, USA: ACM, 2018, pp. 2159–2166. [Online]. Available:
http://doi.acm.org/10.1145/3167132.3167364
[17] S. Mahajan, A. Alameer, P. McMinn, and W. G. J. Halfond, “Xfix: An automated tool for
the repair of layout cross browser issues,” in Proceedings of the 26th
ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2017.
New York, NY, USA: ACM, 2017, pp. 368–371. [Online]. Available:
http://doi.acm.org/10.1145/3092703.3098223
[18] S. Choudhary, “Detecting cross-browser issues in web applications,” in Proc. of the 33rd
International Conference on Software Engineering (ICSE 2011), May 2011, pp. 1146–1148.
[19] G. Wu, M. He, H. Tang, and J. Wei, "Detect cross-browser issues for javascript-based web applications based on record/replay," in 2016 IEEE International Conference on Software Maintenance and Evolution (ICSME), Oct 2016, pp. 78–87.
[20] M. He, G. Wu, H. Tang, W. Chen, J. Wei, H. Zhong, and T. Huang, "X-check: A novel cross-browser testing service based on record/replay," in 2016 IEEE International Conference on Web Services (ICWS), June 2016, pp. 123–130.
[21] X. Li and H. Zeng, "Modeling web application for cross-browser compatibility testing," in Proc. of the 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2014), June 2014, pp. 1–5.
[22] V. Dallmeier, M. Burger, T. Orth, and A. Zeller, "Webmate: A tool for testing web 2.0 applications," in Proc. of the Workshop on JavaScript Tools (JSTools 2012). New York, NY, USA: ACM, 2012, pp. 11–15. [Online]. Available: http://doi.acm.org/10.1145/2307720.2307722
[23] V. Dallmeier, B. Pohl, M. Burger, M. Mirold, and A. Zeller, "Webmate: Web application test generation in the real world," in Proc. of the Workshops of the IEEE 7th International Conference on Software Testing, Verification and Validation (ICSTW 2014), March 2014, pp. 413–418.
[24] C. P. Patidar, M. Sharma, and V. Sharda, “Detection of cross browser inconsistency by
comparing extracted attributes,” International Journal of Scientific Research in Computer Science
and Engineering, vol. 5, no. 1, pp. 1–6, 2017. [Online]. Available:
http://www.isroset.org/journal/IJSRCSE/full_paper_view.php?paper_id=307
[25] S. Choudhary, H. Versee, and A. Orso, “A cross-browser web application testing tool,” in Proc. of
the 2010 IEEE International Conference on Software Maintenance (ICSM 2010), Sept 2010, pp.
1–6.
[26] H. Shi and H. Zeng, “Cross-browser compatibility testing based on model comparison,” in
Computer Application Technologies (CCATS), 2015 International Conference on, Aug 2015, pp.
103–107.
[27] A. Mesbah, A. van Deursen, and S. Lenselink, "Crawling ajax-based web applications through dynamic analysis of user interface state changes," ACM Transactions on the Web, vol. 6, no. 1, pp. 3:1–3:30, Mar. 2012. [Online]. Available: http://doi.acm.org/10.1145/2109205.2109208
[28] J. R. Quinlan, C4.5: Programs for Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1993.
[29] J. Heaton, Introduction to Neural Networks for Java, 2nd ed. Heaton Research, Inc., 2008.
[30] T. Y. Chen, F.-C. Kuo, R. G. Merkel, and T. H. Tse, "Adaptive random testing: The art of test case diversity," Journal of Systems and Software, vol. 83, no. 1, pp. 60–66, Jan. 2010. [Online]. Available: http://dx.doi.org/10.1016/j.jss.2009.02.022
[31] V. Garousi, A. Mesbah, A. Betin-Can, and S. Mirshokraie, "A systematic mapping study of web application testing," Information and Software Technology, vol. 55, no. 8, pp. 1374–1396, 2013. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0950584913000396
[32] Y.-F. Li, P. K. Das, and D. L. Dowe, "Two decades of web application testing - a survey of recent advances," Inf. Syst., vol. 43, no. C, pp. 20–54, Jul. 2014. [Online]. Available: http://dx.doi.org/10.1016/j.is.2014.02.001
[33] S. Doğan, A. Betin-Can, and V. Garousi, "Web application testing: A systematic literature review," J. Syst. Softw., vol. 91, pp. 174–201, May 2014. [Online]. Available: http://dx.doi.org/10.1016/j.jss.2014.01.010
[34] W. M. Watanabe, G. L. Amêndola, and F. C. Paes, "Layout cross-platform and cross-browser incompatibilities detection using classification of dom elements," ACM Trans. Web, vol. 13, no. 2, pp. 12:1–12:27, Mar. 2019. [Online]. Available: http://doi.acm.org/10.1145/3316808
[35] T. A. Walsh, G. M. Kapfhammer, and P. McMinn, “Redecheck: An automatic layout failure
checking tool for responsively designed web pages,” in Proceedings of the 26th ACM
SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2017. New York,
NY, USA: ACM, 2017, pp. 360–363. [Online]. Available:
http://doi.acm.org/10.1145/3092703.3098221