This document proposes a hybrid crowd-powered approach to compatibility testing of mobile devices and applications. It discusses the limitations of existing automated, cloud-based, and crowdsourced testing approaches. The proposed approach centers on a web-based crowdtesting platform that supports both low-level code testing and high-level testing of physical device features, carried out by public crowd testers on real mobile devices. It also includes a knowledge base for storing and sharing compatibility-testing results to help developers address known issues. The goal is a comprehensive solution for testing mobile apps across a wide variety of devices and OS versions through a global crowd.
Mobile Application Testing - White Paper (Jade Global)
Mobile applications are a major reason for the rise in popularity of smartphones. The ease and convenience of using mobile applications have created a huge dependency on them. Over the years, the number and variety of consumer and enterprise mobile applications have grown phenomenally.
The Essentials of Mobile App Testing and Monitoring (MobilePundits)
Mobile technology is transforming the way people use their cell phones. Although demand is highest for consumer apps, enterprise applications are evolving too, allowing businesses to work more productively. In this document we look at how the testing of mobile applications helps to achieve quality. We explore a typical way that an app is developed, look at the testing stages involved, answer some of the frequent questions concerning testing, and provide definitions of the common testing terms.
Enabling Continuous Quality in Mobile App Development (Matthew Young)
This document discusses how organizations can extend continuous integration (CI) practices to mobile app development. CI allows for continuous feedback throughout development to improve quality while speeding up time to market. However, mobile app testing presents new challenges due to the large number of device and OS combinations. The document recommends that mobile CI solutions provide scalable test automation across many devices, emulate real-world conditions on real devices, and integrate seamlessly with development tools and workflows to provide actionable feedback. This will allow mobile teams to thoroughly test apps and build quality in from the start to meet demanding timelines.
Challenges and solutions in mobile and cloud computing testing - ZANEC (Satya Kaliki)
Cloud Computing and Mobile platforms (e.g. Android, iPhone) have emerged as compelling choices for a large number of software systems and Apps that are built today. While these new paradigms present opportunities for suppliers to provide innovative services, they also present significant challenges to quality engineers. This case study presents practical solutions to overcome those challenges.
MOBILE APPLICATION DEVELOPMENT METHODOLOGIES ADOPTED IN OMANI MARKET: A COMPA... (mathsjournal)
The popularity of mobile phones and the huge growth of mobile applications leave developers in need of a flexible software process that can deal with the many challenges facing mobile app development. These challenges include volatility of requirements, strong user involvement, tight development timelines, process simplicity, and production of valuable software at low cost. This research study investigates the mobile app development approaches currently adopted in the Omani market and provides a comparison between existing methods. The results reveal that the Agile approach is the most popular model for mobile software engineering in Oman, as it naturally fits most of the applications required in this market. The study also discusses various agile process models such as Scrum, XP, Lean, DSDM, and others. It concludes that the XP model is the one most preferred by Omani developers due to its dynamic and adaptive nature across different mobile app processes. The study also provides a series of recommendations to help mobile app developers select the most appropriate method for the targeted market sector.
Mobile Application Development Methodologies Adopted in Omani Market: A Compa... (ijseajournal)
The popularity of mobile phones and the huge growth of mobile applications leave developers in need of a flexible software process that can deal with the many challenges facing mobile app development. These challenges include volatility of requirements, strong user involvement, tight development timelines, process simplicity, and production of valuable software at low cost. This research study investigates the mobile app development approaches currently adopted in the Omani market and provides a comparison between existing methods. The results reveal that the Agile approach is the most popular model for mobile software engineering in Oman, as it naturally fits most of the applications required in this market. The study also discusses various agile process models such as Scrum, XP, Lean, DSDM, and others. It concludes that the XP model is the one most preferred by Omani developers due to its dynamic and adaptive nature across different mobile app processes. The study also provides a series of recommendations to help mobile app developers select the most appropriate method for the targeted market sector.
The document discusses test automation challenges for mobile applications due to the variety of devices, operating systems, and frequent updates. It introduces the HP QuickTest Professional software and the DeviceAnywhere solution for mobile application test automation. DeviceAnywhere allows creating and running automated tests on over 2000 real devices across different carriers. It has been integrated with HP QuickTest Professional, enabling testers to build reusable mobile testing assets and run tests on various devices without manual effort. The combination of these two solutions provides a comprehensive approach for validating the quality and reliability of mobile applications.
Analyzing Reviews and Code of Mobile Apps for Better Release Planning (Sebastiano Panichella)
The mobile applications industry is experiencing unprecedented growth, and developers working in this context face fierce competition in acquiring and retaining users. They have to quickly implement new features and fix bugs, or risk losing their users to the competition. To achieve this goal they must closely monitor and analyze the user feedback they receive in the form of reviews. However, successful apps can receive up to several thousand reviews per day, and manually analysing each of them is a time-consuming task. To help developers deal with the large amount of available data, we manually analyzed the text of 1566 user reviews and defined a high- and low-level taxonomy containing mobile-specific categories (e.g. performance, resources, battery, memory, etc.) highly relevant for developers when planning maintenance and evolution activities. We then built the User Request Referencer (URR) prototype, using Machine Learning and Information Retrieval techniques, to automatically classify reviews according to our taxonomy and to recommend, for a particular review, the source code files that need to be modified to handle the issue it describes. We evaluated our approach through an empirical study involving the reviews and code of 39 mobile applications. Our results show high precision and recall of URR in organising reviews according to the defined taxonomy.
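To make the idea of taxonomy-based review classification concrete, the following is a minimal, illustrative sketch of how reviews might be routed to mobile-specific categories. It is not URR's actual model (URR uses Machine Learning and Information Retrieval techniques); the categories here come from the taxonomy named above, but the keyword lists and function names are invented for illustration.

```python
# Hypothetical sketch: route app reviews to mobile-specific taxonomy
# categories via simple keyword matching. Real approaches like URR use
# trained ML/IR models instead of hand-picked keywords.

CATEGORY_KEYWORDS = {
    "performance": {"slow", "lag", "freeze", "stutter"},
    "battery":     {"battery", "drain", "power"},
    "memory":      {"memory", "ram", "crash"},
    "ui":          {"button", "screen", "layout", "font"},
}

def classify_review(text: str) -> list[str]:
    """Return every category whose keywords appear in the review text."""
    words = set(text.lower().split())
    return sorted(cat for cat, keywords in CATEGORY_KEYWORDS.items()
                  if words & keywords)

reviews = [
    "App is slow and the battery drain is terrible",
    "Love the new layout and font size",
]
for review in reviews:
    print(review, "->", classify_review(review))
```

Even a toy classifier like this hints at why automation matters: once reviews are bucketed by category, a developer triaging thousands of reviews per day can jump straight to, say, the battery-related complaints.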
What Would Users Change in My App? Summarizing App Reviews for Recommending ... (Sebastiano Panichella)
Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better meeting user expectations. Thus, automated approaches have been proposed in the literature with the aim of reducing the effort required for analyzing feedback contained in user reviews via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to user feedback received on a daily basis. SURF relies on a conceptual model for capturing user needs useful for developers performing maintenance and evolution tasks. It then uses sophisticated summarisation techniques to summarize thousands of reviews and generate an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. Results demonstrate the high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers better understand user needs, substantially reducing the time required compared to manually analyzing user (change) requests and planning future software changes.
This document discusses performance testing concepts, methodologies, and commonly used tools. It begins by defining performance testing as a process of exercising an application with load-generating tools to find bottlenecks and test for scalability, availability, and performance. It highlights the importance of performance testing for both enterprise and scientific applications. The document then covers key performance testing concepts like load testing and factors that impact system performance. It also outlines features of effective load testing tools and factors for successful load testing projects.
This document discusses performance testing concepts, methodologies, and commonly used tools. It begins by defining performance testing as a process of exercising an application with load-generating tools to find bottlenecks and test scalability, availability, and performance from hardware and software perspectives. It then discusses why performance testing is important, especially for mission-critical applications. Finally, it outlines key features that load testing tools should provide and factors for successful load testing such as testing at different speeds and browsers and generating complex scenarios.
QualiPSo is a four-year project, co-funded by the EU Commission, whose objective is to foster and promote the adoption of Open Source in industries, SMEs, and public administrations by means of new software, methods, mortars and bricks. In particular, the objective will be achieved through different research activities, ranging from legal issues to the definition of quality-measurement mechanisms, all integrated into next-generation Factories. These results will then be used within the QualiPSo Competence Centres, spread all over the world.
Google Play, the Apple App Store and the Windows Phone Store are well-known distribution platforms where users can download mobile apps, rate them and write review comments about the apps they are using. Previous research studies demonstrated that these reviews contain important information to help developers improve their apps. However, analyzing reviews is challenging due to the large number of reviews posted every day, their unstructured nature and their varying quality. In this demo we present ARdoc, a tool which combines three techniques: (1) Natural Language Parsing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify useful feedback contained in app reviews that is important for performing software maintenance and evolution tasks. Our quantitative and qualitative analysis (involving professional mobile developers) demonstrates that ARdoc correctly classifies feedback useful from a maintenance perspective with high precision (ranging between 84% and 89%), recall (ranging between 84% and 89%), and F-Measure (ranging between 84% and 89%). While evaluating our tool, the developers in our study confirmed the usefulness of ARdoc in extracting important maintenance tasks for their mobile applications.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Chapter 3 - Common Test Types and Test Process for Mobile Applications (Neeraj Kumar Singh)
This is chapter 3 of the ISTQB Specialist Mobile Application Tester certification. This presentation helps aspirants understand and prepare for the content of the certification.
ASE 2016 Taming Android Fragmentation: Characterizing and Detecting Compatibi... (Lili Wei)
These slides were used at ASE 2016 for the presentation of the paper "Taming Android Fragmentation: Characterizing and Detecting Compatibility Issues for Android Apps".
The document outlines Moqod's quality assurance guidelines and processes. It defines a bug, sets the goal of delivering bug-free software, and establishes their quality standard as software functioning as expected by the customer. It details code review, automated testing, manual testing, and the responsibilities of QA engineers in accepting or rejecting work. Testing includes unit tests, regression tests at interim milestones, and final acceptance testing against all use cases.
Too little quality will lead to disappointed users, while too much quality is costly, inefficient, and unsustainable. In this talk we show how to elicit appropriate quality levels with quality-impact relationships. The results help to adjust quality in case of negative user feedback and to prepare service level agreements (SLAs) based on empirical evidence.
Advantages and Disadvantages of a Monolithic Repository (mustafa sarac)
This document discusses a case study conducted at Google to investigate the advantages and disadvantages of using a monolithic source code repository compared to multiple per-project repositories. The study utilized a survey of Google engineers as well as an analysis of developer tool logs. The survey found that engineers strongly prefer Google's monolithic repository, citing visibility of the codebase and simple dependency management as its primary benefits. Engineer logs confirmed that they frequently view and edit code from other teams, indicating they take advantage of the increased visibility. However, multi-repo systems provide benefits, like more stable dependencies and flexibility in tool selection, that a monolithic repo cannot match. Overall, the study uncovered several tradeoffs between the two approaches and found that preferred development tools also influence engineers' preferences.
Choosing the Right Testing Strategy to Scale up Mobile App Testing.pdf (pCloudy)
The document discusses the importance of developing a robust mobile app testing strategy to handle the challenges of mobile app testing at scale. It outlines 14 key elements that should be considered when creating a testing strategy, including device selection, deciding between automated and manual testing, network connectivity testing, performance testing, and security testing. The document stresses the need for a balanced approach that blends automated and manual testing techniques to effectively test mobile apps.
Comparative Study on Different Mobile Application Frameworks (IRJET Journal)
The document discusses and compares several frameworks for automating mobile application testing, including Robotium, Appium, Espresso, Calabash, and UI Automator. It provides brief descriptions of each framework, highlighting their key features and limitations. A table compares the frameworks across different parameters. The document concludes that while each framework has its uses, Appium stands out for its platform independence, support for both native and hybrid applications, and ability to work across multiple programming languages. Automated testing frameworks are important for thoroughly testing mobile applications and ensuring quality and stability.
Test Cases and Testing Strategies for Mobile Apps – A Survey (IRJET Journal)
This document discusses testing strategies and test cases for mobile applications. It begins by introducing the types of mobile applications (native, hybrid, and web apps) and mobile operating systems. It then discusses how software engineering principles apply to developing mobile apps. The document outlines that testing mobile apps involves both hardware and software testing. It emphasizes the importance of a comprehensive mobile testing strategy that incorporates device and network testing, selection of target devices, and both manual and automated testing tools to test functionality and performance. The remainder of the document focuses on test cases for mobile apps and automation testing tools.
This document discusses the design and development of a Service Oriented Architecture (SOA) interface for mobile device testing. It proposes using a SOA approach to address the challenges of mobile device testing, which is made difficult by the complex and evolving nature of mobile software and hardware. The paper describes building modular components according to SOA principles and using a common interface to allow components to communicate and reuse test cases. It outlines developing fault injection techniques and a taxonomy of faults specific to SOA to test the reliability of the proposed interface. The goal is to create a more flexible and reusable framework for mobile device testing.
The document discusses an integrated analytical framework for enhancing software methodologies for developing social networking apps using Agile principles. It proposes a model that incorporates features of Extreme Programming within an Agile methodology to address challenges in designing social networking apps. The model was evaluated through case studies of developing multiple social networking apps on different platforms. Results showed the model provided effective visualization of the development process to guide teams, especially for small and medium enterprises.
Software testing automation a comparative study on productivity rate of ope... (Conference Papers)
This document compares the productivity of two open source automated software testing tools, Robot Framework 3.0 and Katalon Studio 7.0, for testing smart manufacturing applications. Ten subject matter experts tested their productivity using each tool across various stages of the software development lifecycle. Katalon Studio 7.0 was found to be significantly more productive than Robot Framework 3.0 based on statistical analysis of the time taken using each tool. The study provides guidance for selecting automated testing tools to improve productivity for software test engineers working in smart manufacturing.
This document discusses mobile application testing and provides a testing matrix. It defines mobile applications and different testing techniques, including automated testing and manual testing. The testing matrix classifies test techniques, environments, levels, and scopes. It includes test levels like unit testing, functionality testing, and compatibility testing. Test scopes are divided into black box and white box testing. The document also introduces state-of-the-art mobile application testing tools and their support for different platforms and programming languages. Case studies are analyzed to evaluate findings from automated versus manual testing experiments.
The burgeoning use of mobile devices has created enormous opportunities for organizations to leverage mobile to increase sales, advertise products, and collaborate with internal and external resources. However, with increasing usage, the need to perform testing on these devices is increasing significantly. This is not an easy task considering the number of devices, device operating systems, and operating system versions. To manage the number of variations, organizations rely on mobile testing tools to support their testing efforts. David Dang shares his experiences analyzing numerous mobile testing tool platforms for a prominent shopping network. Learn how identifying the "right" mobile testing tool depends on multiple factors such as supported devices, level of testing, resources, and required integration with other tools. Take back to share with your team a review of common tools on the market and the pros and cons of each.
12 considerations for mobile testing (March 2017) (Antoine Aymer)
The document is a brochure that outlines 12 key considerations for choosing a mobile application testing solution. It discusses the importance of testing apps on real devices and emulators, enabling remote access to devices, supporting both manual and automated testing, testing under realistic network conditions, simulating common user interruptions, using object ID recognition, and testing the functional, performance, and security aspects of apps. It positions HPE's mobile testing solutions as addressing all 12 considerations by supporting testing on devices/emulators, remote access, manual/automated testing, network simulation, interruption simulation, object ID recognition, and functional, performance, and security testing. It emphasizes the importance of an end-to-end solution and expertise in mobile testing.
Explain the different types of Apps testing and Outsourcing QA.pdf (LorryThomas1)
The evolution of mobile app testing mirrors the rapid advancement of portable technology and the transformation of its users. Significantly, early testing was often superficial, with an immediate focus on functionality rather than the user's context. Early mobile app tests were mostly manual, leading to inefficiency and inconvenience.
Analyzing Reviews and Code of Mobile Apps for Better Release PlanningSebastiano Panichella
The mobile applications industry experiences an unprecedented high growth, developers working in this context face a fierce competition in acquiring and retaining users.
They have to quickly implement new features and fix bugs, or risks losing their users to the competition. To achieve this goal they must closely monitor and analyze the user feedback they receive in form of reviews. However, successful apps can receive up to several thousands of reviews per day, manually analysing each of them is a time consuming task. To help developers deal with the large amount of available data, we manually analyzed the text of 1566 user reviews and defined a high and low level taxonomy containing mobile specific categories (e.g. performance, resources, battery, memory, etc.) highly relevant for developers during the planning of maintenance and evolution activities. Then we built the User Request Referencer (URR) prototype, using Machine Learning and Information Retrieval techniques, to automatically classify reviews according to our taxonomy and recommend for a particular review what are the source code files that need to be modified to handle the issue described in the user review. We evaluated our approach through an empirical study involving the reviews and code of 39 mobile applications. Our results show a high precision and recall of URR in organising reviews according to the defined taxonomy
What Would Users Change in My App? Summarizing App Reviews for Recommending ...Sebastiano Panichella
Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better
meeting user expectations. Thus, automated approaches have
been proposed in literature with the aim of reducing the effort
required for analyzing feedback contained in user reviews via
automatic classication/prioritization according to specific
topics. In this paper, we introduce SURF (Summarizer of
User Reviews Feedback), a novel approach to condense the
enormous amount of information that developers of popular
apps have to manage due to user feedback received on a
daily basis. SURF relies on a conceptual model for capturing
user needs useful for developers performing maintenance and
evolution tasks. Then it uses sophisticated summarisation
techniques for summarizing thousands of reviews and generating
an interactive, structured and condensed agenda of
recommended software changes. We performed an end-to-end
evaluation of SURF on user reviews of 17 mobile apps (5 of
them developed by Sony Mobile), involving 23 developers
and researchers in total. Results demonstrate high accuracy
of SURF in summarizing reviews and the usefulness of the
recommended changes. In evaluating our approach we found
that SURF helps developers in better understanding user
needs, substantially reducing the time required by developers
compared to manually analyzing user (change) requests and
planning future software changes.
This document discusses performance testing concepts, methodologies, and commonly used tools. It begins by defining performance testing as a process of exercising an application with load-generating tools to find bottlenecks and test for scalability, availability, and performance. It highlights the importance of performance testing for both enterprise and scientific applications. The document then covers key performance testing concepts like load testing and factors that impact system performance. It also outlines features of effective load testing tools and factors for successful load testing projects.
This document discusses performance testing concepts, methodologies, and commonly used tools. It begins by defining performance testing as a process of exercising an application with load-generating tools to find bottlenecks and test scalability, availability, and performance from hardware and software perspectives. It then discusses why performance testing is important, especially for mission-critical applications. Finally, it outlines key features that load testing tools should provide and factors for successful load testing such as testing at different speeds and browsers and generating complex scenarios.
Hybrid Crowd-powered Approach for Compatibility Testing of
Mobile Devices and Applications
Qamar Naith1 and Fabio Ciravegna2
1,2 The University of Sheffield
1 University of Jeddah
1 {qhnaith1@sheffield.ac.uk, qnaith@uj.edu.sa}, 2 {f.ciravegna@sheffield.ac.uk}
ABSTRACT
Testing mobile applications (apps) to ensure they work seamlessly on all devices can be difficult and expensive, especially for small development teams or companies with limited resources. There is a need for methods to outsource testing across all these device models and OS versions. In this paper, we propose a crowdsourced testing approach that leverages the power of the crowd to perform mobile device compatibility testing in a novel way. The approach supports testing the code, features, or hardware characteristics of mobile devices, an area that has hardly been investigated. This testing will enable developers to ensure that the features and hardware characteristics of any device model, or the features of a specific OS version, work correctly and do not cause problems with their apps. It empowers developers to find solutions for issues they face during app development, either by asking testers to perform a test or by searching a knowledge base provided with the platform. It also offers the ability to report a new issue or add a solution to an existing one. We expect that these capabilities will improve the testing and development of mobile apps by covering the variety of mobile devices and OS versions owned by the crowd.
KEYWORDS
Crowdtesting, Compatibility, Knowledge base, Mobile Devices
1 INTRODUCTION
Testing mobile apps on different mobile device models and OS versions is an expensive and complicated process. The main reason is compatibility issues (mobile fragmentation), which stem from the complexity of mobile technologies and the variety of existing mobile device models and OS versions [19]. Developers cannot be sure whether their apps behave and work as expected on all mobile devices. This requires them to perform mobile device compatibility testing on a variety of mobile device platforms, models, and OS versions in the least time possible to ensure the quality of the app.
The issue of compatibility testing is attributed to several causes: (a) Mobile apps (especially sensing apps) behave slightly differently, not only on mobile devices from different manufacturers but also on mobile devices from the same manufacturer [17, 29]; (b) Most mobile app developers are small teams or individual developers, and it is often too expensive and difficult for them to
have a variety of mobile device models and OS versions for testing. (c) The automated testing approach is insufficient to test all real-life scenarios or to capture all aspects of real mobile devices [23].

ICCSE, 28-31 July 2018, Singapore
(d) The lack of relevant knowledge and experience [33]. Recently, several testing approaches, such as automated testing [28], cloud testing services [15, 19, 20, 30], and industrial crowdtesting platforms or companies [5, 7, 11, 13, 19, 20, 26, 30], have been built to address the compatibility issue. However, the issue remains [16].
Given the need for a generic solution to the compatibility issues described above, our proposed solution addresses the following requirements: (1) a new testing approach that supports global-scale mobile device compatibility testing [12]; (2) since individual developers and small teams cannot own all device types, a way to outsource testing to obtain enough mobile device models with different OS versions; (3) an efficient method for sharing relevant compatibility testing knowledge among developers and testers; (4) manual compatibility testing on real mobile phones to address the shortcomings of automated testing; (5) guidelines to assist developers throughout the mobile app development lifecycle.
This paper proposes a new mobile device compatibility testing approach based on a new crowdsourcing method, "hybrid crowdsourcing" (see Section 4). The proposed hybrid crowdsourced compatibility testing approach is realized as a web-based crowdtesting platform. It provides the ability to perform compatibility testing of mobile devices at two different levels, low-level (code) or high-level (physical features of the API level related to OS versions or the mobile device), against the mobile app's requirements. These two levels of compatibility testing are achieved through the participation of public crowd testers using real mobile devices with different OS versions. The platform not only provides testing services but also a knowledge base that stores the collected results (incompatibility issues and their causes) from different human and hardware resources in a structured manner, enabling developers to search for the specific issues they face.
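The knowledge base described above can be sketched as a simple store of incompatibility records that developers filter by device model or OS version. This is a minimal illustration only: the class and method names (`CompatibilityIssue`, `KnowledgeBase`, `report`, `search`) are our own assumptions, not the platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CompatibilityIssue:
    """One knowledge-base record describing an incompatibility."""
    device_model: str
    os_version: str
    level: str                  # "low" (code) or "high" (device/OS feature)
    description: str
    solutions: list = field(default_factory=list)

class KnowledgeBase:
    """Stores reported issues and answers filtered developer queries."""
    def __init__(self):
        self._issues = []

    def report(self, issue):
        """Add a newly observed incompatibility issue."""
        self._issues.append(issue)

    def add_solution(self, index, solution):
        """Attach a crowd-contributed solution to an existing issue."""
        self._issues[index].solutions.append(solution)

    def search(self, device_model=None, os_version=None):
        """Return issues matching the developer's device/OS filters."""
        return [i for i in self._issues
                if (device_model is None or i.device_model == device_model)
                and (os_version is None or i.os_version == os_version)]
```

A developer hitting a gyroscope problem on a given device could then query `search(device_model=...)` and reuse any stored solutions, matching the "search, report, or extend" workflow the paper describes.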
The rest of this paper is organized as follows. Section 2 describes existing approaches for mobile app compatibility testing. Section 3 discusses the limitations of these approaches. Section 4 presents our proposed hybrid crowdsourced compatibility testing approach. Section 5 presents its working mechanism. Section 6 lists the contributions of this paper. Finally, Section 7 presents our conclusions.
2 RELATED WORKS
Several solutions have been proposed in the literature to address the compatibility testing issues of mobile apps [2, 4–7, 10, 11, 13, 14, 19, 26, 30].
Industrial Crowdsourced Testing Platforms
Industrial crowdtesting platforms and companies such as Mob4Hire [4], Pay4Bugs [7], uTest (Applause) [10], 99Tests [1], Testbirds [8], Passbrains [6], and Global App Testing (GAP) [2] are considered initial solutions to compatibility issues. Most of these platforms and companies do not directly target compatibility testing in their testing solutions; they focus more on other types of testing such as functional, security, usability, load, localization, and automation testing [13]. The crowdtesting method used in these platforms and companies mainly focuses on testing the app as a whole to make sure it is free of errors. They are designed around "on-demand matching and competition" [26], meaning that crowd testers are selected based on matched requirements (their availability, demographic information, rate, and testing-type preferences).
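The "on-demand matching" selection described above can be illustrated as a small filter over tester profiles. All field names here (`available`, `country`, `rate`, `testing_types`) are hypothetical, and ranking the matched testers by rate stands in, loosely, for the competition step.

```python
def match_testers(testers, requirements):
    """Select crowd testers whose profile satisfies every job requirement.

    `testers` is a list of profile dicts; `requirements` carries the
    test job's constraints. Field names are illustrative only.
    """
    matched = []
    for t in testers:
        if not t["available"]:
            continue  # tester must be available for the job
        if requirements.get("country") and t["country"] != requirements["country"]:
            continue  # demographic constraint
        if t["rate"] > requirements.get("max_rate", float("inf")):
            continue  # tester's asking rate exceeds the budget
        if requirements["testing_type"] not in t["testing_types"]:
            continue  # tester does not offer this type of testing
        matched.append(t)
    # "Competition": rank the matched testers, cheapest first.
    return sorted(matched, key=lambda t: t["rate"])
```

Real platforms presumably apply richer scoring (reputation, past bug quality), but the filter-then-rank shape captures the matching mechanism the survey [26] describes.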
Automated and Cloud-based Testing Services
Prathibhan et al. [28] designed an automated testing framework over the cloud for testing mobile apps on different Android devices. This framework provides performance, functional, and compatibility testing. Testdroid [21] is an online platform for automated UI testing. It enables developers to execute automated scripted tests on physical Android or iOS mobile devices with different OS versions and screen resolutions. These devices are physically connected to online servers that manage the test queue for each individual physical device. The platform's testing service is limited, however: it does not support testing applications that depend on gesture, voice, or movement input.
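The per-device test queue that such servers manage can be sketched as a FIFO structure, one instance per connected handset. The class name and the FIFO scheduling policy are our own assumptions for illustration, not Testdroid's documented design.

```python
from collections import deque

class DeviceTestQueue:
    """Server-side queue of scripted UI tests for one physical device.

    Each connected device gets its own queue; the server pops the
    oldest pending test whenever the device becomes idle.
    """
    def __init__(self, device_id):
        self.device_id = device_id
        self._queue = deque()

    def submit(self, test_name):
        """Enqueue a scripted test uploaded by a developer."""
        self._queue.append(test_name)

    def run_next(self):
        """Pop the oldest queued test for execution; None if idle."""
        if not self._queue:
            return None
        return self._queue.popleft()
```

Keeping one queue per device is what lets many developers share a single physical handset without their test runs interleaving.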
Cloud-based testing services are another type of solution used to address the compatibility issue, since different mobile devices and OS versions are available in the cloud. Developers can access and use these devices via various cloud services. Huang and Gong [20] proposed RTMS, a cloud of mobile devices for testing mobile apps. In RTMS, users can remotely access a pool of mobile devices in a local lab and perform app testing covering uploading, startup, and testing via clicking and swiping actions.
In addition, the authors of [19] extended RTMS and proposed AppACTS, which provides an automated scripting service to perform compatibility testing of apps on different users' real Android devices and collects the test results through a web server. The compatibility testing covers the installation process, startup, random key and screen actions, and the removal of the mobile app on real remote devices. The most important features of AppACTS are its geographical distribution and scalability; unlike RTMS, it does not use a cloud of mobile devices but real remote devices. The MyCrowd platform [11] provides an automated compatibility testing service for mobile apps, offered as SaaS (Software as a Service) and different from the service provided by AppACTS [19]. It helps developers upload their apps through a web interface, choose the target device models and OS versions, perform UI testing remotely, and review the testing report after testing finishes.
Starov et al. [30] proposed CTOMS, a framework for cloud-based testing of mobile systems that performs testing on remote real mobile devices through the cloud: each pool of real mobile devices connects to a mobile server, and all mobile servers connect to one master server. This cloud testing is carried out with the participation of their own crowd community. The framework provides the concept and a prototype of cloud testing of mobile systems together with multi-directional testing, which tests an app for compatibility on Android devices only, across various OS versions and new device models.
Knowledge Base Development for Compatibility Testing
Stack Overflow is a service based on open crowdsourcing (volunteer work) for addressing technical programming issues [14, 27]. The platform's documentation focuses on storing questions and issues related to computer programming only (coding issues). Mobile app developers also use it for help with issues they face during the implementation (programming) of mobile apps, but it is clearly used only rarely for testing (see Section 3). The authors of [18] addressed the Android fragmentation problem in automated mobile application compatibility testing. Their approach can also analyze fragmentation at both the code level and the API level, based on a knowledge base built from previous tests. The crowdsourcing platform proposed here will include a knowledge base (documentation) specifically for documenting issues related to mobile device compatibility testing.
3 THE LIMITATIONS OF EXISTING
APPROACHES
In our view, the solutions above do not provide enough support to address mobile device compatibility testing issues, for the following reasons:
• Test Distribution: Most of the platforms, companies, and cloud services described above have a limited distribution of tests, since they do not give public crowd testers with limited resources worldwide the opportunity to contribute. Consequently, these solutions cannot study enough users' behaviors or interactions with the app.
• Variety of Mobile Devices: Most of the mentioned approaches may not cover a large variety of mobile devices or OS versions. The main reason is that not all versions of mobile phone models or OS versions are available in one place, such as a company headquarters or its specific crowd community. To the best of our knowledge, no complete set of mobile phones with different OS versions exists in a laboratory or a registered cloud system accessible to developers, as discussed for RTMS [20], AppACTS [19], MyCrowd [11], and CTOMS [30].
• Coordinated Scheduling: Scheduling is also an important issue for all the mentioned approaches when crowd testers need to share a single pool of mobile devices, for example: (1) when tests must be performed across various locations at the same time; (2) when some devices need to be tested with more than one OS version for different tasks at the same time. This can cause coordination problems and delays between testers [32], and some OS versions may not be covered.
• Knowledge Accessibility: Most of the mentioned testing methods have no public place to store and share compatibility testing results with developers at large. The results are stored in a company's private space, which only developer teams within the company can access. This does not improve the experience and knowledge of testers and developers, or the domain knowledge of mobile app development.
• Compatibility Testing Type: Most of these approaches focus on testing the mobile app itself to ensure there are no failures, not on checking and understanding the compatibility of various mobile devices with the app's requirements. This way of testing does not help app developers broaden their knowledge and experience.
• Impossibility of Testing: Stack Overflow's documentation serves programming purposes and is not designed for testing. There is also no space where developers can ask for a specific issue (test case, feature, or code) to be tested on different mobile phone models. The human-based tagging system is another drawback of Stack Overflow: users select tags based on their own judgment or the communities surrounding those tags, which can lead to information being tagged under the wrong classification.
The proposed crowdtesting platform and compatibility testing approach are designed to address these limitations.
4 PROPOSED HYBRID CROWDSOURCED
COMPATIBILITY TESTING PLATFORM
Manual compatibility testing is the suggested method for testing mobile apps during the development process [22]. We design our approach as a web-based platform specifically built to support full mobile device compatibility testing for Android and iOS through crowdsourcing methods. The platform involves two main parts: manual compatibility testing services based on the proposed hybrid crowdsourcing model, and a crowd-powered knowledge base.
4.1 The Concept of Proposed Compatibility
Testing Approach
Our compatibility testing approach focuses on testing whether the features or hardware components of mobile devices and the features of OS versions are compatible with the functionalities of different types of mobile apps (e.g., health, banking, activity tracking, social, or any other apps). It considers testing compatibility between the app and all the various dimensions of the mobile device and its criteria: (1) hardware dimensions, such as the camera (type and resolution), screen (rotation, size, resolution), sensors, CPU speed, GPS, Bluetooth, RAM, etc.; (2) software dimensions, such as different API levels (minimum supported Application Programming Interfaces), multimedia support, etc.; (3) human behaviors and interaction with the app, considered as another dimension for reducing compatibility testing issues, such as localization, languages, style, accessibility requirements, the target user of the app, and the type of environment (e.g., medical, finance, education, etc.). Our approach could therefore answer the following questions: (1) Are there missing hardware components within mobile device models, or missing features within the API levels of OS versions, that the app needs to work correctly? (2) To what extent is a hardware component compatible with a specific functionality of a specific app across different mobile device models and OS versions?
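To make this concrete, the kind of check behind question (1) can be sketched as a comparison between an app's declared requirements and a device profile. This is an illustrative sketch only; the class names, field names, and feature labels are our own assumptions, not part of the proposed platform:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    """Hardware and software features reported by a tester's device."""
    model: str
    os_version: str
    api_level: int
    features: set = field(default_factory=set)  # e.g. {"camera", "gps", "bluetooth"}

@dataclass
class AppRequirements:
    """Features and minimum API level an app needs to run correctly."""
    min_api_level: int
    required_features: set = field(default_factory=set)

def missing_components(app: AppRequirements, device: DeviceProfile) -> set:
    """Answer question (1): which required components does this device lack?"""
    missing = app.required_features - device.features
    if device.api_level < app.min_api_level:
        missing.add(f"API level >= {app.min_api_level}")
    return missing

# Hypothetical example: a fitness app that needs GPS and a heart-rate sensor
app = AppRequirements(min_api_level=26, required_features={"gps", "heart_rate_sensor"})
device = DeviceProfile("Pixel 2", "8.1", 27, {"gps", "camera", "bluetooth"})
print(missing_components(app, device))  # -> {'heart_rate_sensor'}
```

Question (2), on the degree of compatibility, would additionally aggregate per-device test outcomes rather than a simple set difference.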
In our approach, compatibility testing can be achieved in two ways:
(1) Low-level testing: A developer can publicly test compatibility by examining a piece of code to identify whether the code's testing results comply with the expected results across a variety of mobile device models and OS versions. With the proposed approach, this type of test can be carried out by a large number of testers with adequate to very high levels of experience.
(2) High-level testing: A developer can privately test compatibility by testing the hardware features of the device itself under different OS versions. This type of test may help testers with a limited level of experience to perform multiple tests and improve their experience.
These two ways of testing, across all mobile device dimensions, allow a full compatibility testing service to be achieved through the new crowdsourcing method.
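As an illustration of low-level testing in (1), the per-device verdict amounts to comparing each device's observed output with the developer's expected result. The sketch below is hypothetical; the device names and result strings are invented:

```python
def low_level_verdicts(expected: str, results_by_device: dict) -> dict:
    """Mark each device pass/fail by comparing its observed output
    to the developer's expected result."""
    return {device: ("pass" if observed == expected else "fail")
            for device, observed in results_by_device.items()}

# Invented example: the same code snippet run by testers on two devices
verdicts = low_level_verdicts(
    expected="sensor value within range",
    results_by_device={
        ("Pixel 3", "Android 10"): "sensor value within range",
        ("Galaxy S7", "Android 8"): "sensor returns null",
    },
)
print(verdicts)
```

A high-level test would instead record the outcome of manually exercising a hardware feature, but the pass/fail aggregation across devices is the same.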
4.2 Proposed Hybrid Crowdsourcing Method
for Testing
In the literature, two basic crowdsourcing methods are used in the software development and testing process: the traditional crowdsourcing method (used by most testing companies [26, 31]) and open crowdsourcing (as used in [27]). The two methods are similar in that neither supports replication of tasks: they assign a different job to each crowd worker in order to complete the testing process quickly within a limited time. They differ in the form of contribution. Traditional crowdsourcing is normally based on online competition and on-demand matching [24, 26], where the crowd is selected from a list of registrants based on their high experience to perform specific tasks. Open crowdsourcing, on the other hand, is based on open collaboration [26], where the public crowd participates (as volunteers) based on their interest in the tasks.
To avoid these constraints, we propose an alternative crowdsourced testing method, called the "Hybrid Crowdsourcing" method. This method uses the power of public crowd testers to enhance and facilitate the full mobile device and mobile app compatibility testing process. By employing the hybrid crowdsourcing method in the compatibility testing approach described in Section 4.1, we enable a large number of anonymous public crowd testers to participate based on their interests, without time constraints. Such open participation of public crowd testers will assist developers in: (1) testing more mobile devices with different OS versions and gathering more accurate results; (2) covering most of the possible compatibility issues in the early stages of the mobile app testing process. This method supports a flexible incentive mechanism based on negotiation between the parties and their agreement on the incentives they are willing to provide or earn. Such an incentive mechanism will help small teams or individual developers with limited resources to test their apps. Table 1 compares the Traditional, Open, and Hybrid Crowdsourcing methods along different dimensions, while Table 2 describes the advantages and disadvantages of the three methods.

Figure 1: The working mechanism of the proposed hybrid crowdsourced compatibility testing approach
5 PROPOSED WORKING MECHANISM
Based on the concept of the proposed compatibility testing ap-
proach described in Section 4.1 and hybrid crowdsourcing method
explained in Section 4.2, we will describe in this section a set of
process that is implemented in the working mechanism of proposed
crowdsourced compatibility testing. Fig 1 provides an overview of
the proposed working mechanism, and the set of the process are
listed below:
(1) Definition of testing tasks: Developers define their projects for testing; an individual project can include more than one task. At this stage, developers complete three main steps to clearly define the project: (a) Task information: developers can define more than one task, with the required details for each. First, they specify the hardware component or feature category of the mobile device or OS version they would like to test. Next, they select how they would like to test the task: testing as code, by providing the task title and source code, or testing as a feature, by providing the title and testing steps. They can use both ways at the same time for a specific task. Finally, developers specify the results they expect from each testing task. (b) Device information: developers define the required mobile device platforms, OS versions, brands, and device models for testing. Developers can define mobile device information for the project as a whole or for each individual task within it. (c) Application information: developers first select the type of app whose error-free operation they want to verify against the tested mobile device or OS version feature (e.g., Social, Business, Education, Finance, Lifestyle, Medical, etc.). Then they upload the targeted app to the platform. Our approach makes it easy for crowd testers to download the app on their devices without any constraints. For Android apps, developers upload the .apk file to the platform. For iOS apps, developers must complete four main steps in order:
• Download the ".cer" file from the platform and click to install it on their Mac.
• Download the ".provision" file and click to install it on their Mac.
• Build their application with the identifier.
• Generate the ".ipa" file and upload it to the platform.
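The three definition steps above (task, device, and application information) can be pictured, for illustration only, as a simple project record; all field names and example values here are our own assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestingTask:
    """One task within a project: what to test, how, and what to expect."""
    title: str
    target_feature: str   # hardware component or OS feature under test
    mode: str             # "code", "feature", or "both"
    expected_result: str
    source_code: str = ""                              # used when mode includes "code"
    testing_steps: list = field(default_factory=list)  # used when mode includes "feature"

@dataclass
class TestingProject:
    """A project groups tasks with shared app and device information."""
    app_type: str          # e.g. "Medical", "Finance"
    app_package: str       # uploaded .apk or .ipa file name
    target_devices: list = field(default_factory=list)  # platform/OS/model specs
    tasks: list = field(default_factory=list)

project = TestingProject(app_type="Medical", app_package="health_app.apk")
project.tasks.append(TestingTask(
    title="Camera capture on low-resolution sensors",
    target_feature="camera",
    mode="feature",
    expected_result="Photo saved without distortion",
    testing_steps=["Open scan screen", "Capture a document", "Check saved image"],
))
print(len(project.tasks))  # -> 1
```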
Table 1: The comparison of the three crowdsourcing methods

Method | Participation Form | Expertise Demand | Crowd Size | Task Length | Task Replication | Incentive Type | Incentive Example
Traditional Crowdsourcing | On-demand matching, online competition | Extensive | Limited | Limited time | None | Extrinsic (physical incentive) | Compensation/reward (mostly a small amount of money)
Open Crowdsourcing | Open collaboration | Extensive | Open | Limited time | None | Extrinsic (social incentive) | Name recognition and satisfaction
Hybrid Crowdsourcing | Open collaboration | Extensive, moderate, or minimal | Open | Flexible time | Several | Extrinsic (physical & social) and intrinsic (self-knowledge & learning) | Different types of reward, not only money; name recognition and satisfaction; knowledge and experience
Table 2: The advantages and disadvantages of the three crowdsourcing methods

Traditional Crowdsourcing.
Advantages:
- All the selected crowd testers have a high level of experience.
- Lower cost compared to non-crowdsourced testing companies.
Disadvantages:
- Inherent constraints on human and hardware resources.
- Persuading someone who is not interested in a specific task can be difficult; they may rush the task just to get the reward, which can produce inaccurate results.
- Using extrinsic incentives alone weakens the crowd's effort to perform tasks correctly over time (Liang et al. [32]).

Open Crowdsourcing.
Advantages:
- The public crowd participates based on their interests, so they may put in more effort and produce very good results.
- Ability to study more human behavior and interaction with mobile devices.
Disadvantages:
- Non-physical incentives alone can lead to lower participation or careless execution of tasks over time.

Hybrid Crowdsourcing.
Advantages:
- Ability to study more human behavior and interaction with mobile devices.
- The anticipation of an unknown reward may attract more people to take part in tasks in order to earn different types of reward.
- Using extrinsic and intrinsic incentive mechanisms together significantly improves the crowd's effort in performing tasks, as shown in 2018 by Liang et al. [25].
Disadvantages:
- The expertise of participating crowd testers is unknown unless previous experiments have been performed, so the outcome can be unpredictable.
(2) Announcement and distribution of testing tasks: Once developers have finished defining the new testing project and its task requirements, an announcement is sent to all crowd testers registered on the platform to execute the test, rather than selecting specific crowd testers based on matched requirements (e.g., available hardware and software resources, demographic information, or geographic location), as in most existing crowdtesting platforms. In addition, developers can use the random URL link automatically generated after the definition process to distribute the test on a large scale: (a) send the link to specific users by email; (b) publish (copy/paste) the URL link in an online space such as Facebook, Twitter, LinkedIn, blogs, or any testers' group on the internet. As mentioned before, purchasing a large list of mobile devices only to test for a limited time is very expensive. Further, it is also difficult for companies dealing with specific communities, testing labs, or cloud services to always keep their devices up to date. Our global distribution of tests allows public crowd testers to participate using their own devices. In this case, it is quite possible to cover newly released devices that may not yet be available within the crowd tester communities of existing crowdtesting methods or registered in cloud-testing systems. It may also cover old devices that are rarely available to the public.
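One plausible way to implement the automatically generated distribution link, which the text does not specify further, is an unguessable random token bound to the project ID. The base URL below is a placeholder, not the platform's real address:

```python
import secrets

def make_distribution_link(project_id: int,
                           base_url: str = "https://example-crowdtest.org") -> str:
    """Generate a hard-to-guess URL a developer can share by email or social media.

    The random token prevents outsiders from enumerating project links.
    """
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    return f"{base_url}/t/{project_id}-{token}"

link = make_distribution_link(42)
print(link)
```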
(3) Review of the project and its task specifications by crowd testers: When registered crowd testers receive the test announcement, they can review the project's requirements. Non-registered crowd testers who see the URL link anywhere on the internet must first register on the platform and then review the test requirements. After reviewing the requirements of the announced project, crowd testers can choose whether to accept the project for testing. If they are not interested in a specific project, the proposed approach lets them review the requirements of all other projects that can be tested using their devices' platforms, so they can accept more than one project simultaneously and then perform the tests in order.
(4) Execute testing: After crowd testers select and accept a project, they review all testing requirements before starting the test, to avoid "out of scope" issues. Then they test all the tasks associated with the project using the required types of devices and OS versions. In our approach, both crowd testers and developers can view all tested devices for each task, so they know whether specific devices or OS versions still need to be covered. Further, because of our global distribution of tests, the crowd testers executing a test can have different levels of experience. This diversity of experience can help find more issues quickly: experienced testers tend to follow the same specific testing steps for any mobile app, whereas, as a matter of human behavior, crowd testers with different levels of experience may put in more effort and provide better testing results. This also helps crowd testers find more issues and accurately reproduce the behavior of an app running on an exact device.
(5) Prepare and submit testing reports: Before submitting a testing report, crowd testers should note that if they do not accept the project they cannot report their feedback, even if they performed the test. As mentioned previously, one project may include more than one task. When crowd testers accept the whole project, all of its tasks are added directly to their file; each task within the project has its own status (submitted or in progress). This means that crowd testers can execute each task separately and then submit a single report to developers for each executed task. Each report includes the crowd tester's feedback, which contains the following information:
(a) Device information: It is essential that crowd testers correctly enter in the report the details of the mobile devices used in testing (platform, brand, model, and OS version), especially if they tested on many devices. In this approach, to enter device information for the first time, crowd testers select "new device" from the options and then use our mobile app to detect the device information. Once the tester accesses the app from the tested device, the information is detected automatically, sent to the server, and then loaded into the reporting form during the reporting process. This method ensures accurate entry of device details and avoids the errors that can occur when entering details manually in the reporting form, which can confuse the developer, as in most existing crowdsourced testing approaches. To enter information for a device that has already been tested and registered, the tester instead selects "registered device" from the options and chooses from their list of tested devices.
(b) Task information: At this step, crowd testers enter how many times they repeated the test, the estimated time for each test cycle, and the actual results.
(c) Issue information: Crowd testers are responsible for reporting correct issue information. It must comprise the following: (i) issue ID, title, priority, and description; (ii) clearly written actions and steps to reproduce the issue, showing how the tester arrived at the location of the issue; (iii) attachments showing the issue, either as a picture or a video; (iv) any suggestion or idea they would like to share with developers for improving or solving the issue, as well as how confident they are in the result. In this approach, crowd testers can enter more than one issue related to a particular task in one report instead of submitting multiple reports for the same task, which reduces the time needed for the reporting process.
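The feedback described above can be pictured as a structured report in which the device fields are filled by the automatic detection step; all field names and example values here are illustrative assumptions, not the platform's actual format:

```python
# A sketch of one submitted report: device info auto-detected, task info and
# issues entered by the tester. Multiple issues for the same task share one report.
report = {
    "device": {                      # auto-detected by the companion mobile app
        "platform": "Android",
        "brand": "Samsung",
        "model": "Galaxy S9",
        "os_version": "9.0",
    },
    "task": {
        "repetitions": 3,            # how many times the test was repeated
        "minutes_per_cycle": 4,      # estimated time per test cycle
        "actual_result": "App crashes on screen rotation",
    },
    "issues": [{
        "id": "ISS-17",
        "title": "Crash on screen rotation during upload",
        "priority": "high",
        "description": "App force-closes when device is rotated mid-upload",
        "steps_to_reproduce": ["Start an upload", "Rotate the device"],
        "attachments": ["rotation_crash.mp4"],
        "confidence": "certain",     # how sure the tester is about the result
    }],
}
print(len(report["issues"]))  # -> 1
```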
(6) Collect, track, and organize testing reports: After many crowd testers have executed different tasks for different projects at different times, developers need to collect and organize the reports to view all reported issues. To avoid the limitations of collecting and organizing testing reports manually through a crowd manager or crowd leader, as in most existing crowdsourced testing methods, our approach implements a report tracker system, somewhat similar to JIRA [3] and Zoho [9], for automatically collecting, tracking, and organizing all testing reports submitted by crowd testers. This tracking system assists developers with: (a) Task Status Tracking: track the status of each task within each specific project; (b) Detailed Report: track master details such as issue ID, type, description, priority, etc.; (c) Tester Identity: track the details of the crowd testers who report the results; (d) Scheduled Reports: track recent updates on the results on an hourly, daily, or weekly basis; (e) Modification Method (issue-tracking life cycle): modify an existing issue's status among open, in progress, under review, invalid/rejected, accepted, or fixed; (f) Notification Method: notify developers about any newly reported bug, and testers about any change to a bug they have reported; (g) Results Visualization: provide charts and analysis of reported results over time; (h) Efficient Classification: classify the reported issues based on the type of app.
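The issue-tracking life cycle in (e) can be modeled as a small state machine. The transitions below are one plausible reading of the listed states, not a specification from the paper:

```python
# Allowed transitions between the issue states named in the text:
# open, in progress, under review, invalid/rejected, accepted, fixed.
TRANSITIONS = {
    "open": {"in progress", "invalid/rejected"},
    "in progress": {"under review"},
    "under review": {"accepted", "invalid/rejected"},
    "accepted": {"fixed"},
    "invalid/rejected": set(),   # terminal
    "fixed": set(),              # terminal
}

def advance(status: str, new_status: str) -> str:
    """Move an issue to a new status, enforcing the life cycle."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status!r} -> {new_status!r}")
    return new_status

s = "open"
for nxt in ("in progress", "under review", "accepted", "fixed"):
    s = advance(s, nxt)
print(s)  # -> fixed
```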
(7) View testing reports and validate results: After crowd testers have submitted their reports and our tracking system has collected and organized them, it is the developers' responsibility to view and validate the results in every report. At this step, developers validate the results in one report after another. Based on the crowd testers' results and their own validation, developers can approve or reject each report. If the results are not accurate, the developer rejects the report directly and does not reward the crowd tester. If the results are correct and the developer approves them, the developer can change the status of the report to deferred (the issues will be fixed after current work is finished), in progress (the issues are being fixed), or fixed (the issues have been fixed). Once the developer changes the status of a specific report, the status is updated directly on the profile of the crowd tester who submitted it. Note that issues are fixed in order of priority across all reports. After fixing an issue, the developer sends the report back with a notification asking the crowd tester to retest and confirm that the error is fixed and does not appear again. Once the retest is finished, the crowd tester returns the report to the developer with a notice that the issue is completely fixed and asks to close the test task cycle. After the developer finishes validating all tasks within a specific project, a notification is sent to the tester to close any activities related to the project.
(8) Update crowd tester ratings by developers: Developers rate crowd testers based on how well their reports are prepared and how accurate their feedback is (including full details of the reported issues). In addition, accepting and performing as many testing projects as possible positively influences a developer's rating of a crowd tester, while rejected reports negatively affect it. Because our approach deals with public, anonymous crowd testers, the rating (collected points) can reduce developers' concerns about working with them and increase confidence in their work and the results they report.
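A points-based rating of the kind described, where approved reports and completed projects add points and rejected reports subtract them, might look like the following sketch. The weights are our own arbitrary choices, not values from the paper:

```python
def update_rating(rating: int, approved: int, rejected: int,
                  completed_projects: int) -> int:
    """Approved reports and completed projects add points;
    rejected reports subtract them."""
    rating += 10 * approved           # accurate, well-prepared reports
    rating += 5 * completed_projects  # accepting and finishing more projects
    rating -= 15 * rejected           # inaccurate results reduce trust
    return max(rating, 0)             # rating never drops below zero

print(update_rating(rating=50, approved=3, rejected=1, completed_projects=2))  # -> 75
```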
(9) Encourage and reward crowd testers: After developers finish validating and evaluating crowd testers, they send a notification to the crowd testers to close the testing project. Our system then monitors and counts all closed projects for the different testers and sends a notification with a list of all crowd testers who need to be rewarded. This minimizes the long time a crowd leader or crowd manager would otherwise spend monitoring and identifying the testers to be rewarded and then sending the list to developers. According to the results of Liang et al. [25], using extrinsic incentives (rewards) alone weakens crowd performance and the effort to perform tasks correctly, so intrinsic incentives are also needed. In our approach, both incentive types are implemented to encourage crowd testers: extrinsic and intrinsic. The extrinsic incentive is the rewarding step, which depends on the agreement between developers and testers. Developers are free to choose the type of reward they are willing to offer (e.g., money, a free app, a gift card, a certificate, a job, etc.), and they can provide multiple options. At this stage, the developer sends an offer including reward options to the crowd testers; each tester chooses one option and sends their approval of the offer back to the developer. The intrinsic incentive includes self-knowledge and learning: crowd testers can benefit and learn from other crowd testers' knowledge and testing scenarios stored in our knowledge base. This incentive mechanism can motivate testers further while also serving developers with limited resources, unlike other crowdtesting platforms that restrict developers to money only.
(10) Justification of issue causes by developers: In this step, after the developer finishes validating all submitted reports related to testing a particular task on a variety of mobile devices, the developer justifies the causes of the issues on the tested mobile devices and OS versions so they can be stored in our knowledge base. This step is not available in any current crowdsourcing method. Implementing it can help developers in the future to understand the reasons for a specific mobile app's incompatibility with particular mobile devices and OS versions while developing a new mobile app.
5.1 A crowd-powered knowledge base
Our approach includes a knowledge base that documents the compatibility results (including issues) collected from crowd testers on the platform. This knowledge base provides supporting evidence that enables developers to understand why specific issues appeared. It also gives developers useful guidelines; for example, some mobile devices will not support a specific functionality within a specific type of app because a hardware component or characteristic necessary for the proper running of the app is missing from the device model, or is not supported by the API framework of a specific OS version. Documenting such results will greatly help developers in the future when they develop a new app that shares functionality with apps developed earlier and tested on many devices. This will reduce the issues found and the time taken for compatibility testing on different devices, especially during the early stages of the development cycle. In addition, developers and testers can vote, rate, and comment to evaluate the quality of other published results related to a specific mobile device compatibility issue.
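For illustration, a knowledge-base entry could store the justified cause from step (10) alongside community feedback. The fields and example values below are assumptions, not the paper's schema:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    """One documented compatibility finding, open to votes and comments."""
    device_model: str
    os_version: str
    app_type: str
    issue: str
    justified_cause: str   # reason entered by the developer in step (10)
    votes: int = 0
    comments: list = field(default_factory=list)

    def vote(self, up: bool = True) -> None:
        """Community voting used to evaluate the quality of a published result."""
        self.votes += 1 if up else -1

# Invented example entry
entry = KnowledgeEntry(
    device_model="Galaxy J5", os_version="6.0", app_type="Health",
    issue="Step counting inaccurate",
    justified_cause="Device lacks a dedicated step-counter sensor",
)
entry.vote()
entry.vote()
print(entry.votes)  # -> 2
```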
6 CONTRIBUTIONS
The main contributions of this paper are:
(1) A crowdsourced testing platform that supports the distribution of full mobile device compatibility testing and is suitable for both professional and beginner developers and testers.
(2) A new crowdsourced testing framework that can also be used to execute other mobile app testing types, such as GUI, usability, and localization testing, for all supported platforms.
(3) A novel direct-interaction mechanism between developers and crowd testers, providing better testing coverage of the specific software or hardware that needs testing on the crowdtesting platform.
(4) A public knowledge base that allows a deep understanding of the internal complexity of testing the features or hardware characteristics of mobile device models, or the features of API levels related to OS versions, for all supported platforms. This can improve developers' knowledge and experience and help them during the app development process.
7 CONCLUSIONS
This paper narrows the research gap related to the causes of
compatibility testing issues and reviewed the current approaches that
address them. Since compatibility testing must cover many mobile
device models and OS versions, a hybrid crowdsourced testing method
has been proposed that uses the power of public crowd testers and
their own mobile devices to perform testing across a variety of devices.
This paper focuses on testing the features and hardware characteristics
of mobile devices with different OS versions in two different ways,
through code or through features, against mobile app requirements.
The proposed approach suggests that the use of public crowd testers
and the processes of the hybrid crowdtesting method can be effective
for several reasons: it can cover a large number of mobile device
models with different OS versions; it improves the testing and
development of mobile apps through direct interaction between
developers and crowd testers; it can discover more issues through the
different experience levels of crowd testers; it reduces the delay in
coordinating among crowd testers for the availability of a specific
device; it avoids mistakes that crowd testers may make when reporting
information about a tested mobile device, by using a mobile app as an
automated data detection method; it reduces developers’ concerns
about working with anonymous crowd testers; it attracts more crowd
testers through both extrinsic and intrinsic incentives; and it reduces
the delay of waiting for a crowd manager or crowd leader to collect
and organize testing reports. As future work, we will evaluate the
proposed "hybrid crowdsourced compatibility testing" approach to
measure its effectiveness and efficiency in addressing compatibility
testing issues.
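The core matching step implied by the approach, pairing an app's required features and minimum OS level with crowd testers whose real devices provide them, could be sketched as follows. This is a minimal illustration under assumed data shapes; the function name, device fields, and threshold logic are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: select crowd testers whose real devices cover an
# app's required features and minimum API level for a compatibility task.
def eligible_testers(required_features, min_api, devices):
    """Return ids of devices whose capabilities satisfy the requirements."""
    matches = []
    for dev in devices:
        # A device qualifies if its OS is recent enough and it exposes
        # every feature the app under test requires.
        if dev["api_level"] >= min_api and required_features <= dev["features"]:
            matches.append(dev["id"])
    return matches

# Example pool of crowd-tester devices (illustrative data).
devices = [
    {"id": "t1", "api_level": 28, "features": {"camera", "gps", "nfc"}},
    {"id": "t2", "api_level": 21, "features": {"camera", "gps"}},
    {"id": "t3", "api_level": 26, "features": {"camera"}},
]
print(eligible_testers({"camera", "gps"}, 23, devices))  # ['t1']
```

On a real platform, the device records themselves would be filled in automatically by the crowdtesting mobile app rather than typed by testers, which is the "automated data detection" idea mentioned above.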