
Using Robots for App Testing


Software robots like Monkey provide a quick way to validate your application. With robots available on new cloud testing services, it is easier than ever to start testing your app without having written any tests. In this talk, I will introduce a few tools from both academia and industry and then cover the basics of how these tools work. You will learn about the strengths and limitations of these tools and how to use them effectively to maximize code coverage and catch failures.



  1. 1. USING ROBOTS FOR ANDROID APP TESTING Shauvik Roy Choudhary PhD, Georgia Tech Founder, MoQuality @shauvik http://shauvik.com http://moquality.com
  2. 2. About me Shauvik Roy Choudhary PhD from Georgia Tech Founder, MoQuality http://shauvik.com PhD Thesis: Cross-platform Testing & Maintenance of Web & Mobile Apps Industry: Google, Yahoo!, FullStory, Fujitsu Labs, IBM Research, HSBC, Goldman Sachs Entrepreneurship: Georgia Tech TI:GER program, VentureLab, ATDC, NSF SBIR
  3. 3. 2015 IEEE International Conference on Automated Software Engineering
  4. 4. Test Structure: Setup -> Exercise -> Verify -> Teardown
  5. 5. Testing using a Robot: Setup = start the app; Exercise = inputs ???; Verify = app shall not crash; Teardown = stop the app
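     In adb terms, that robot skeleton looks roughly like the sketch below; the package name com.example.app and the activity name are placeholders, and the logcat grep is a simplified stand-in for real failure detection:

        $ adb shell am start -n com.example.app/.MainActivity             # Setup: start the app
        $ adb shell monkey -p com.example.app -v 1000                     # Exercise: random inputs
        $ adb logcat -d AndroidRuntime:E '*:S' | grep "FATAL EXCEPTION"   # Verify: no fatal exception should appear
        $ adb shell am force-stop com.example.app                         # Teardown: stop the app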
  6. 6. Android Monkey Tool
  7. 7. $ adb shell monkey -p <your.package.name> -v <number of events>
  8. 8. Useful Monkey Options -s <seed> --throttle <milliseconds> --pct-<event_type> <percent> --ignore-<issue>
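     For example, a run that is reproducible (fixed seed), slowed down, biased toward touch events, and tolerant of crashes so it keeps running might look like this; the package name and all values are illustrative:

        $ adb shell monkey -p com.example.app -s 42 --throttle 200 \
            --pct-touch 60 --pct-motion 20 \
            --ignore-crashes --ignore-timeouts -v 5000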
  9. 9. Automated Test Input Generation Techniques Dynodroid FSE’13 A3E OOPSLA’13 SwiftHand OOPSLA’13 DroidFuzzer MoMM’13 Orbit FASE’13 Monkey 2008 ACTEve FSE’12 GUIRipper ASE’12 JPF-Android SENotes’12 PUMA Mobisys’14 EvoDroid FSE’14 Null IntentFuzzer WODA’14 IntentFuzzer 2009 Push button techniques Sapienz ISSTA’16 TrimDroid ICSE’16
  10. 10. Goal of Testing: 1. Make the app crash 2. Test different behaviors
  11. 11. Tool Strategies 1. Instrumentation strategy -- App/Platform 2. Event generation strategy -- UI/System 3. Testing strategy -- Black-box/White-box 4. Exploration strategy -- Random/Model-based/Systematic
  12. 12. Exploration Strategy
  13. 13. 1. Random Exploration Strategy Randomly selects an event for exploration Tools: Monkey, Dynodroid Advantages ● Efficiently generates events ● Suitable for stress testing Drawbacks ● Hardly generates specific inputs ● App behavior/coverage agnostic ○ might generate redundant events ● Typically no stopping criterion
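     Stripped to its essence, a random explorer is just a loop that fires arbitrary events. The bash sketch below drives a device over adb with random taps and d-pad presses; the 1080x1920 screen size, the 80/20 event mix, and the fixed event count are all assumptions:

        #!/bin/bash
        # Minimal random explorer: no model, no coverage feedback, and no stopping
        # criterion beyond the fixed event count -- exactly the drawbacks listed above.
        for i in $(seq 1 1000); do
          if (( RANDOM % 10 < 8 )); then
            adb shell input tap $((RANDOM % 1080)) $((RANDOM % 1920))   # random touch
          else
            adb shell input keyevent $((RANDOM % 5 + 19))               # random d-pad key (codes 19-23)
          fi
          sleep 0.1   # crude throttle between events
        done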
  14. 14. 2. Model-based Exploration Strategy Use GUI Model of the app to systematically explore Typically FSMs (states = Activities, edges = Events) Tools: A3E, SwiftHand, GUIRipper, PUMA, Orbit Advantages ● Intuitively more effective ● Can reduce redundant events Drawbacks ● Does not consider events that alter non-GUI state
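     A heavily simplified sketch of the model-based idea, assuming a recent emulator image where uiautomator and md5sum are available: each distinct UI hierarchy dump is treated as a state, and revisiting a known state triggers a back-press instead of a new event. Real tools also choose which widget to fire next from the dump; the fixed tap below is only a placeholder for that step.

        #!/bin/bash
        declare -A seen                                              # fingerprints of visited GUI states
        for i in $(seq 1 30); do
          adb shell uiautomator dump /sdcard/ui.xml > /dev/null
          state=$(adb shell md5sum /sdcard/ui.xml | cut -d ' ' -f 1)
          if [[ -n "${seen[$state]}" ]]; then
            adb shell input keyevent KEYCODE_BACK                    # known state: backtrack
          else
            seen[$state]=1
            adb shell input tap 540 960                              # placeholder for "fire an unexplored event"
          fi
        done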
  15. 15. 3. Systematic Exploration Strategy Use sophisticated techniques (e.g., symbolic execution & evolutionary algorithms) to systematically explore the app Tools: ACTEve and EvoDroid Advantages ● Can explore behavior that is hard to reach by random techniques Drawbacks ● Less scalable compared to other techniques Example: the path constraint (x_left < $x < x_right) ∧ (y_top < $y < y_bottom) is passed to a SAT solver, which returns a concrete input such as $x = 5; $y = 10
  16. 16. Automated Test Input Generation Techniques
      Name       | No platform instr. needed | No app instr. needed | UI events | System events | Exploration Strategy | Testing Strategy
      Monkey     | ✔ | ✔ | ✔ | ✖ | Random      | Black-box
      ACTEve     | ✖ | ✖ | ✔ | ✔ | Systematic  | White-box
      Dynodroid  | ✖ | ✔ | ✔ | ✔ | Random      | Black-box
      A3E-DF     | ✔ | ✖ | ✔ | ✖ | Model-based | Black-box
      SwiftHand  | ✔ | ✖ | ✔ | ✖ | Model-based | Black-box
      GUIRipper  | ✔ | ✖ | ✔ | ✖ | Model-based | Black-box
      PUMA       | ✔ | ✔ | ✔ | ✖ | Model-based | Black-box
  17. 17. Experiments Image Credit: Daily Alchemy
  18. 18. Research Criteria C1. Ease of use C2. Android framework compatibility C3. Code coverage achieved C4. Fault detection ability
  19. 19. Mobile App Benchmarks Combined set of 68 subject apps drawn from F-Droid and other open-source repositories
  20. 20. Experimental Setup Debian Host Ubuntu Guest 2 cores 6GB RAM VirtualBox Vagrant Android Emulators 4GB RAM Emulators: v2.3 (Gingerbread) v4.1 (Jelly Bean) v4.4 (KitKat) Tools installed on guest: ● Removed default timeouts ● Default config; No special tuning
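     For reference, launching one emulator comparable to this setup looks roughly like the line below; the AVD name kitkat_api19 is a placeholder, and -memory sets the emulator RAM in MB:

        $ emulator -avd kitkat_api19 -memory 4096 -no-window -no-audio &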
  21. 21. Experimental Protocol ● Run each tool for 1 hour on each benchmark ● Repeat 10 times to account for non-deterministic behavior ● Collect Results ○ Coverage Report (every 5 min) ○ Logcat -> Extracted Failures Emma HTML Reports Parse and extract statement coverage Logcat Parse and extract unique stack traces (RegEx)
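     A rough command-line approximation of the failure-extraction step; it keys failures on exception type rather than on the full stack traces the study de-duplicated:

        $ adb logcat -d AndroidRuntime:E '*:S' \
            | grep -oE '([A-Za-z_][A-Za-z0-9_.]*\.)+[A-Za-z0-9_$]*(Exception|Error)' \
            | sort | uniq -c | sort -rn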
  22. 22. Results Image Credit: ToTheWeb
  23. 23. C1. Ease of Use & C2. Android Compatibility
      Name            | Ease of Use   | OS Compatibility | Emulator/Device
      Monkey          | NO_EFFORT     | Any   | Any
      ACTEve          | MAJOR_EFFORT  | v2.3  | Emu (Custom)
      Dynodroid       | NO_EFFORT     | v2.3  | Emu (Custom)
      A3E-Depth-first | LITTLE_EFFORT | Any   | Any
      SwiftHand       | MAJOR_EFFORT  | v4.1+ | Any
      GUIRipper       | MAJOR_EFFORT  | Any   | Emulator
      PUMA            | LITTLE_EFFORT | v4.3+ | Any
  24. 24. C3. Overall Code Coverage Achieved
  25. 25. C3. Coverage Analysis by Benchmark App [chart: per-app statement coverage for benchmarks such as Divide And Conquer, Random MusicPlayer, k9mail, Password Maker Pro, ...; axes: #Applications vs. % Coverage]
  26. 26. C3. Code Coverage Achieved Over Time
  27. 27. C4. Fault Detection Ability
  28. 28. Pairwise Comparison: Coverage and Failures Coverage
  29. 29. Pairwise Comparison: Coverage and Failures Failures
  30. 30. Pairwise Comparison: Coverage and Failures Coverage Failures
  31. 31. Observations and Discussion
  32. 32. 1. Random testing can be effective (somewhat surprisingly)
  33. 33. 2. Strategy makes a difference (in the behaviors covered)
  34. 34. 3. System events matter (in addition to UI events) Broadcast Receiver Intents SMS Notifications
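     Some of these system events can be approximated from outside the app on an emulator; the broadcast action below is a hypothetical app-defined one, while the sms and geo commands are standard emulator console commands:

        $ adb shell am broadcast -a com.example.app.ACTION_DATA_SYNC   # app-defined broadcast (hypothetical action)
        $ adb emu sms send 5551234567 "incoming test message"          # inject an SMS
        $ adb emu geo fix -122.08 37.42                                # location change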
  35. 35. 4. Restarts should be minimized (for efficient exploration)
  36. 36. 5. Practical considerations matter (for practical usefulness)
  37. 37. 5.1 Practical considerations matter (for practical usefulness) Manual Inputs
  38. 38. 5.2 Practical considerations matter (for practical usefulness) Initial State
  39. 39. Open Issues for Future work Image Credits: Back to the Future (Universal Pictures)
  40. 40. 1. Reproducibility (allow for reproducing observed behaviors) Image Source: http://ncp-e.com
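     One lever that already exists is Monkey's seed: re-running with the same seed replays the same pseudo-random event sequence, although timing and background activity can still make runs diverge:

        $ adb shell monkey -p com.example.app -s 42 -v 5000   # same seed => same event sequence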
  41. 41. 2. Mocking and Sandboxing (support reproducibility, avoid side effects, ease testing) Source: http://googletesting.blogspot.com
  42. 42. 3. Find problems across platforms (address fragmentation) Image Credit: OpenSignal
  43. 43. Takeaway ● Random approaches can be a good start to automated testing ● No single strategy alone is effective enough to cover all behaviors => A combination is more effective ● Use our test infrastructure: https://moquality.com/robots (coming soon)
  44. 44. USING ROBOTS FOR ANDROID APP TESTING Shauvik Roy Choudhary PhD, Georgia Tech Founder, MoQuality @shauvik http://shauvik.com http://moquality.com
