
Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing

Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, that make testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Automated testing is therefore an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps has limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases.
In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we comment on the key challenges that currently stand in the way of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary, and Large-scale (CEL).

  1. ICSME’17, Shanghai, China, Wednesday, September 20th, 2017. Mario Linares-Vásquez, Kevin Moran, & Denys Poshyvanyk. Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing
  2–6. AUTOMATED MOBILE TESTING. Studies on Mobile Testing Practices: “Automated Input Generation for Android Apps: Are We There Yet?” by Shauvik Roy Choudhary, Alessandra Gorla, and Alessandro Orso; “Understanding the Test Automation Culture of App Developers” by Pavneet Singh Kochhar, Ferdian Thung, Nachiappan Nagappan, Thomas Zimmermann, and David Lo; “How Do Developers Test Android Applications?” by Mario Linares-Vásquez, Carlos Bernal-Cárdenas, Kevin Moran, and Denys Poshyvanyk. Study Findings: • Lack of Debugging Support • Lack of Reproducible Test Cases • Mobile Apps Tend to Be Poorly Tested • Testing Is Done Mostly Manually • Developers Prefer Use-Case-Based Testing • Test Quality Metrics Need to Be Rethought
  7–8. OUR VISION: EASY-TO-USE & EFFECTIVE MOBILE TESTING BASED ON THREE CORE PRINCIPLES: CONTINUOUS, EVOLUTIONARY, & LARGE-SCALE (CEL)
  9. HOW DO WE GET THERE? • Part 0: State of the Art & Practice • Part 1: Existing Problems in Mobile App Testing • Part 2: Continuous, Evolutionary, & Large-Scale Mobile Testing
  10. STATE OF THE ART & PRACTICE
  11–12. OVERVIEW OF TOOLS & SERVICES. Traditional Android test case generation: • Automation Frameworks & APIs • Record & Replay Tools • Automated Input Generation Tools. Bug reporting, crowdsourcing, and services: • Bug & Error Reporting • Crowdsourced Testing • Cloud Testing Services • Device Streaming Tools
  13. ANDROID TEST CASE GENERATION
  14–15. AUTOMATION FRAMEWORKS & APIs: developer-written tests built on JUnit, Espresso, UI Automator, or Robotium drive the app under test. A hedged Espresso sketch follows.
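
A minimal sketch of what such a framework-based test looks like, written with today's androidx Espresso artifacts; the task app, TaskActivity, and all widget IDs below are hypothetical, invented for illustration, and the final assertion shows the developer-defined oracle these frameworks require:

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
    import static androidx.test.espresso.action.ViewActions.typeText;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static androidx.test.espresso.matcher.ViewMatchers.withId;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;

    import androidx.test.ext.junit.rules.ActivityScenarioRule;
    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(AndroidJUnit4.class)
    public class CreateTaskTest {
        // Launches the (hypothetical) activity under test before each test.
        @Rule
        public ActivityScenarioRule<TaskActivity> rule =
                new ActivityScenarioRule<>(TaskActivity.class);

        @Test
        public void addingATaskShowsItInTheList() {
            onView(withId(R.id.create_task_button)).perform(click());
            onView(withId(R.id.task_edit_text))
                    .perform(typeText("Get Milk"), closeSoftKeyboard());
            onView(withId(R.id.done_button)).perform(click());
            // The user-defined oracle: the new task is visible on screen.
            onView(withText("Get Milk")).check(matches(isDisplayed()));
        }
    }
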
  16. RECORD & REPLAY (R&R): a recorder captures the UI events a tester performs on the AUT/SUT into scripts; the scripts are later replayed as UI event streams against the app.
  17–19. AUTOMATED INPUT GENERATION TECHNIQUES (AIG). Differing goals: • Code Coverage • Crashes • Test Sequence Length. Three main types: • Random-Based • Systematic • Model-Based
  20–21. RANDOM/FUZZ TESTING: a tool such as Monkey fires events at the AUT/SUT blindly (“X or Y?”); some generated events are invalid (event X hits no widget), others are valid (event Y). A sketch of the idea follows.
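
A minimal sketch of Monkey-style random input generation, written here with UI Automator's UiDevice rather than the real "adb shell monkey" tool; it illustrates the X-or-Y coin flip above, since blindly chosen coordinates sometimes hit a widget (valid) and sometimes hit nothing (invalid):

    import java.util.Random;
    import androidx.test.platform.app.InstrumentationRegistry;
    import androidx.test.uiautomator.UiDevice;

    public class RandomFuzzer {
        public static void fuzz(int events) {
            UiDevice device = UiDevice.getInstance(
                    InstrumentationRegistry.getInstrumentation());
            Random rnd = new Random(42); // fixed seed so the run is reproducible
            for (int i = 0; i < events; i++) {
                int x = rnd.nextInt(device.getDisplayWidth());
                int y = rnd.nextInt(device.getDisplayHeight());
                device.click(x, y); // may hit a widget (valid) or nothing (invalid)
            }
        }
    }
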
  22–26. GUI-RIPPING: a ripper/extractor drives the AUT/SUT (“A or B?”), taking a snapshot of each GUI state it reaches (GUI State 1, GUI State 2, ...) and recording the events that connect them, so a GUI model is built up incrementally. A DFS-style sketch follows.
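
A minimal sketch of such a ripper, with hypothetical device-specific helpers: currentState() fingerprints the visible GUI, enabledEvents(s) lists the executable widget events, execute(e) fires one, and restore(s) navigates back. The recursion yields a depth-first exploration; replacing it with an explicit queue-based worklist would give the breadth-first variant discussed on the next slide:

    import java.util.List;
    import java.util.Set;

    public abstract class GuiRipper {
        interface GuiState {}
        interface GuiEvent {}
        interface GuiModel { void addTransition(GuiState from, GuiEvent e, GuiState to); }

        // Device-specific primitives, left abstract in this sketch.
        abstract GuiState currentState();
        abstract List<GuiEvent> enabledEvents(GuiState s);
        abstract void execute(GuiEvent e);
        abstract void restore(GuiState s);

        void rip(GuiState s, GuiModel model, Set<GuiState> visited) {
            if (!visited.add(s)) return;          // state already explored
            for (GuiEvent e : enabledEvents(s)) {
                execute(e);                        // e.g., tap a button
                GuiState next = currentState();    // snapshot the resulting GUI
                model.addTransition(s, e, next);   // record the edge in the model
                rip(next, model, visited);         // explore depth-first
                restore(s);                        // navigate back before next event
            }
        }
    }
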
  27. SYSTEMATIC EXPLORATION: when several events are possible (“A or B?”), the exploration strategy decides: Depth-First (DF), Breadth-First (BF), Random (uniform), Random (a-priori distribution), or other options decided online.
  28. MODEL-BASED TESTING: the UI events sent to the AUT/SUT are generated from a model, which may be manually generated, automatically generated from source code, ripped at runtime upfront, or ripped at runtime interactively. A test-generation sketch follows.
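
One common way to derive tests from such a model, sketched here under assumed generic State and Event types: enumerate bounded-length event paths from the start state, with each path becoming one test case. This is an illustration of the general technique, not the method of any specific tool from the talk:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    public class ModelBasedGenerator<S, E> {
        private final Function<S, Map<E, S>> outgoing; // state -> (event -> next state)

        public ModelBasedGenerator(Function<S, Map<E, S>> outgoing) {
            this.outgoing = outgoing;
        }

        /** Collects every event sequence of length 1..maxLen starting at start. */
        public List<List<E>> generate(S start, int maxLen) {
            List<List<E>> tests = new ArrayList<>();
            walk(start, new ArrayList<>(), maxLen, tests);
            return tests;
        }

        private void walk(S state, List<E> prefix, int maxLen, List<List<E>> tests) {
            if (!prefix.isEmpty()) tests.add(new ArrayList<>(prefix)); // one test case
            if (prefix.size() == maxLen) return;                       // length bound
            outgoing.apply(state).forEach((event, next) -> {
                prefix.add(event);
                walk(next, prefix, maxLen, tests);
                prefix.remove(prefix.size() - 1);                      // backtrack
            });
        }
    }
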
  29. EMERGING AIG APPROACHES. Recently introduced approaches for AIG: • Search-Based Approaches [1] • Symbolic/Concolic Execution [2]. A sketch of the search-based idea follows. [1] Ke Mao, Mark Harman, and Yue Jia. 2016. Sapienz: Multi-objective Automated Testing for Android Applications. In Proceedings of the 25th International Symposium on Software Testing and Analysis (ISSTA 2016). [2] Nariman Mirzaei, Joshua Garcia, Hamid Bagheri, Alireza Sadeghi, and Sam Malek. 2016. Reducing Combinatorics in GUI Testing of Android Applications. In Proceedings of the 38th International Conference on Software Engineering (ICSE ’16).
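
A rough sketch of the multi-objective idea behind search-based AIG such as Sapienz: score each event sequence on coverage (maximize), crashes found (maximize), and sequence length (minimize), then let an evolutionary search evolve the population of sequences. The TestSequence accessors are hypothetical placeholders for real instrumentation:

    public class Fitness {
        interface TestSequence {
            double statementCoverage();
            int crashesFound();
            int length();
        }

        /** Three objectives; higher is better on the first two, lower on the third. */
        static double[] objectives(TestSequence t) {
            return new double[] { t.statementCoverage(), t.crashesFound(), -t.length() };
        }
    }
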
  30. TEST CASE GENERATION PROS & CONS.
  Automation Frameworks: ✓ easy reproduction; ✓ high-level syntax; ✓ black-box testing; - learning curve; - user-defined oracles; - expensive maintenance.
  Record & Replay: ✓ easy reproduction; - expensive collection and maintenance; - coupled to screen locations.
  AIG, Random-Based: ✓ fast execution; ✓ good at finding crashes; - invalid events; - lack of expressiveness.
  AIG, Systematic: ✓ achieves reasonable coverage; - may miss crashes; - can be time-consuming; - typically cannot exercise complex features.
  AIG, Model-Based: ✓ event sequences; ✓ automatic exploration; - some invalid sequences; - state explosion; - incomplete models.
  31. BUG REPORTING AND CROWDSOURCED TESTING
  32–35. BUG REPORTING SERVICES: the AUT/SUT embeds a third-party library that reports to a web service. Features of bug reporting services: • Video Recording • App Analytics • Automated Crash Reporting. A sketch of the crash-reporting pattern follows.
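
A minimal sketch of the pattern behind automated crash-reporting libraries: install an uncaught-exception handler that serializes the stack trace and hands it off before the process dies. The reportToService() upload is a hypothetical placeholder, not any real vendor's API:

    import java.io.PrintWriter;
    import java.io.StringWriter;

    public class CrashReporter {
        public static void install() {
            Thread.UncaughtExceptionHandler previous =
                    Thread.getDefaultUncaughtExceptionHandler();
            Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
                StringWriter trace = new StringWriter();
                throwable.printStackTrace(new PrintWriter(trace));
                reportToService(trace.toString()); // hypothetical upload hook
                if (previous != null) {
                    previous.uncaughtException(thread, throwable); // then crash normally
                }
            });
        }

        private static void reportToService(String stackTrace) {
            // Placeholder: a real service would persist the report locally and
            // upload it asynchronously on next launch, with device metadata.
        }
    }
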
  36–38. CROWDSOURCED TESTING SERVICES: the AUT/SUT is distributed to human testers through a third-party service. Types of crowdsourced testing services: • Expert Testing • Functional Testing • UX Testing • Security Testing • Localization Testing
  39. DEVICE STREAMING SERVICES: a third-party service streams remote physical devices to developers and testers.
  40–42. TESTING SERVICES PROS & CONS.
  Bug Reporting Services: ✓ allow more detail about field failures; ✓ app analytics can help with UI/UX design; - can be expensive; - require integrating a library; - typically do not report GUI traces.
  Crowdsourcing Services: ✓ low effort required from developers; ✓ expert testers might uncover unexpected bugs; - can be expensive; - may not fit within Agile workflows; - quality of reports can vary.
  Device Streaming: ✓ allows remote users access to controlled devices; ✓ allows collection of detailed user information; - can be difficult to configure; - relies on a strong network connection; - cannot simulate mobile-specific contexts like sensors.
  43. CHALLENGES IN MOBILE APP TESTING
  44. OVERVIEW OF CHALLENGES: • Fragmentation • Test Flakiness • Lack of a Mobile-Specific Fault Model • Lack of Intelligent Test Case Generation • Absence of Mobile Testing Oracles • Lack of Support for Multiple Testing Goals
  45. FRAGMENTATION (figure credit: https://thenextweb.com/)
  46–51. TEST FLAKINESS: the same GUI test suite (GUI Test Suite #1), executed repeatedly against the same app, passes on some runs and fails on others (figure: repeated runs of the suite with diverging outcomes). A common cause is sketched below.
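
One classic source of this flakiness, sketched as a hedged illustration (the refresh_button ID and the asynchronous load are assumptions, not from the talk): a fixed sleep racing the app's background work, so the suite passes on a fast device and fails on a slow one:

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static androidx.test.espresso.matcher.ViewMatchers.withId;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;
    import org.junit.Test;

    public class FlakyExampleTest {
        @Test
        public void refreshShowsTask() throws InterruptedException {
            onView(withId(R.id.refresh_button)).perform(click());
            Thread.sleep(1000); // flaky: the async load may outlast the sleep
            onView(withText("Get Milk")).check(matches(isDisplayed()));
            // Deterministic fix: register an Espresso IdlingResource and drop
            // the sleep, so the assertion waits until the app reports idle.
        }
    }
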
  52–53. MOBILE-SPECIFIC FAULT MODEL: repository mining over issues, documentation, and user reviews, followed by coding and categorization, yields a mobile-specific fault taxonomy & model. Linares-Vásquez, M., Bavota, G., Tufano, M., Moran, K., Di Penta, M., Vendome, C., Bernal-Cárdenas, C., and Poshyvanyk, D., “Enabling Mutation Testing for Android Apps”, in FSE’17.
  54–58. INTELLIGENT TEST CASE GENERATION: an AIG tool can replay the first steps of a use case: 1) tap on the “Create Task” button; 2) type “Get Milk” into the “Task” EditText; 3) tap on the “Done” button; 4) ???? Current tools have no way to decide what a semantically meaningful next step would be.
  59–62. ABSENCE OF MOBILE TESTING ORACLES: after an AIG tool drives the app, which of the resulting screens are correct (“????”)? Beyond crash detection, there is no automated way to tell. Reyhaneh Jabbarvand and Sam Malek, “μDroid: An Energy-Aware Mutation Testing Framework for Android”, in FSE’17. A crash-oracle sketch follows.
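
A minimal sketch of the one oracle today's AIG tools reliably have, the implicit crash oracle: scan logcat for a fatal exception attributable to the app under test. Anything subtler (wrong screen content, broken layout, energy regressions as targeted by μDroid) still needs a hand-specified oracle; the line-matching heuristic below is an assumption about logcat's output format:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class CrashOracle {
        /** Returns true if the logcat buffer shows a crash in appPackage. */
        public static boolean crashed(String appPackage) throws IOException {
            Process logcat = Runtime.getRuntime()
                    .exec(new String[] {"logcat", "-d", "*:E"}); // dump errors, exit
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(logcat.getInputStream()))) {
                boolean inCrash = false;
                for (String line; (line = r.readLine()) != null; ) {
                    if (line.contains("FATAL EXCEPTION")) inCrash = true;       // header
                    else if (inCrash && line.contains(appPackage)) return true; // ours
                    else if (inCrash && line.trim().isEmpty()) inCrash = false; // end
                }
            }
            return false;
        }
    }
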
  63–65. MULTIPLE TESTING GOALS: testing specific use cases, security testing, fuzz/stress testing, energy testing, performance testing, play testing. Automated testing approaches have mainly been concerned with destructive testing; developers need automated support for diverse testing goals!
  66. OUR VISION: CONTINUOUS, EVOLUTIONARY & LARGE-SCALE MOBILE TESTING
  67. CEL MOBILE TESTING PRINCIPLES: • Continuous: following the trends of CI/CD, mobile testing for different goals should happen in a continuous manner. • Evolutionary: testing should automatically adapt as an app evolves, according to code, usage, or hardware/platform changes. • Large-Scale: tests should be parallelized across many different device configurations.
  68–71. CEL ARCHITECTURE (one diagram, presented across four slides).
  Changes Monitoring Subsystem: a source code changes monitor, an on-device usages monitor, an APIs evolution monitor, and a markets monitor observe developers’ source code, users, APIs, and user reviews; their output feeds a source code impact analyzer, an execution traces + logs analyzer, an API changes analyzer, and a user reviews + ratings analyzer.
  Testing Artifacts Generation: a models generator maintains a models repository (domain, GUI, usage, contextual, and fault models); a multi-model generator combines these into a multi-model repository; a test cases generator derives test cases + oracles into an artifacts repository.
  Large-Scale Execution Engine: a containers manager and a test cases runner execute the tests, and a reports generator stores the outcomes in a reports repository.
  A sketch of these subsystems as interfaces follows.
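
A rough sketch of the three CEL subsystems rendered as Java interfaces; this is our reading of the diagram, not an artifact from the paper, and every name below is illustrative:

    import java.util.List;

    public interface CelFramework {
        // Changes Monitoring Subsystem: sources of change to react to
        // (code commits, on-device usage, API evolution, market reviews).
        interface ChangeMonitor { List<ChangeEvent> poll(); }

        // Testing Artifacts Generation: turn observed changes into updated
        // models, and models into test cases with oracles.
        interface ModelGenerator {
            MultiModel update(MultiModel current, List<ChangeEvent> changes);
        }
        interface TestCaseGenerator { List<TestCase> generate(MultiModel model); }

        // Large-Scale Execution Engine: run tests in parallel containers
        // across device configurations and collect reports.
        interface ExecutionEngine {
            Report run(List<TestCase> tests, List<DeviceConfig> devices);
        }

        // Opaque artifact types, all hypothetical.
        interface ChangeEvent {}
        interface MultiModel {}   // domain, GUI, usage, contextual, fault models
        interface TestCase {}
        interface Report {}
        interface DeviceConfig {}
    }
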
  72–78. DIRECTIONS FOR THE SE RESEARCH COMMUNITY: • Improved Model-Based Representations of Mobile Apps • Goal-Oriented Automated Test Case Generation • Flexible Open-Source Solutions for Large-Scale/Crowdsourced Testing • Derivation of Oracles • MSR Support for Mobile Testing • Developer Feedback Mechanisms
  79. Any Questions? Thank you! Kevin Moran, Ph.D. Candidate, kpmoran@cs.wm.edu, www.kpmoran.com; Denys Poshyvanyk, Associate Professor, denys@cs.wm.edu, cs.wm.edu/~denys; Mario Linares-Vásquez, Assistant Professor, m.linaresv@uniandes.edu.co, sistemas.uniandes.edu.co/~mlinaresv
