No matter what sort of app you are writing, nothing hurts your chances of success more than a 1-star review. A disproportionate number of poor reviews drags down your overall rating, which turns potential downloaders away. Beyond that, both Google Play and the Apple App Store favor apps with more positive reviews when calculating search ranking, and both stores also take your app's uninstall rate into account. The number one driver of 1-star reviews is quality: the majority are related to installation, launching, performance, and crashes – all things that can be corrected prior to release.
The big question is how to achieve that. There are several testing techniques, but the one I will cover today is UI testing. With UI testing, a developer can take a compiled app, install it on a real device, and then run a suite of tests that simulate clicks, data entry, and so on by injecting those events into the app. Most of the top app producers implement this as a best practice. For example, Zillow produces a real estate app; their entire business is built around a mobile app that house hunters take with them as they tour homes. Quality is not something they bolt on at the end – they build a full suite of testing into every stage of the development cycle. When a developer is working on a bug, they get access to the device that reported the issue for ad-hoc UI testing. When the fix is merged, a set of nightly builds kicks off and runs standardized tests of the happy paths through the app on the latest devices. Finally, right before Zillow releases a new version of the app, they test on as many devices as they can get their hands on.
The major problem with testing on real devices is acquiring and maintaining them. In Apple's ecosystem, this isn't too bad – there are the iPhone 5, 5S, 5C, 6, 6S, and 7 in large and small varieties across generally 3-4 different OS versions, which works out to only 30-40 device/OS combinations. In the Android world, however, there are over 24,000 different devices in the wild and over 200 different OS variants – and that's before we talk about tablets. If you do this yourself, you end up with a small subset of devices, a long QA cycle time, and a significant remaining risk of quality issues. The cost of maintaining a mobile lab is also substantial – along with the investment in devices, it generally takes 1-2 FTEs to manage the lab.
So what’s the answer?
Perhaps we can turn to the cloud for this? After all, the cloud is the new normal. AWS manages a facility we call Device Farm – a large set of device/OS combinations that you can rent by the minute. These devices can be linked to your computer for easy ad-hoc sessions, or integrated into an automated test suite using normal build tools. Let's take a look at each of these scenarios.
If you remember the best practices that companies like Zillow deploy, the first testing scenario was a single developer working on a bug with a real device. What if the device you need is a little-known Android device running over a 2G connection provided by China Mobile? Device Farm keeps a large quantity of devices on hand, and you can use remote access to connect the device you choose to your local developer workstation.
You can then test and debug new functionality under development (ad-hoc or exploratory testing), reproduce customer issues, run the UI test suite on the device in question, or develop new UI tests to cover a new scenario.
It is simple to use: choose a device from hundreds of different makes and models, install apps and change settings on the fly, and swipe, gesture, and interact with the device through your web browser as if it were in your hand.
When you are finished with the device, you return it to the pool of devices ready for another person.
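For teams that script their device sessions, this remote-access flow can also be driven through the AWS SDK. Below is a minimal sketch using boto3 in Python: a pure helper picks a device out of a fetched device list, and `open_remote_session` (a name chosen here for illustration) then asks Device Farm for a remote access session on it. The project ARN and the filtering criteria are placeholder assumptions.

```python
def pick_device(devices, platform, model_keyword):
    """Return the first device matching a platform and model substring.

    `devices` is a list of dicts shaped like the entries in boto3's
    list_devices() response (keys 'platform', 'model', 'arn').
    """
    for d in devices:
        if d["platform"] == platform and model_keyword.lower() in d["model"].lower():
            return d
    return None


def open_remote_session(project_arn, platform="ANDROID", model_keyword="Galaxy"):
    # boto3 is imported here so the pure helper above works without AWS credentials
    import boto3

    df = boto3.client("devicefarm", region_name="us-west-2")  # Device Farm region
    device = pick_device(df.list_devices()["devices"], platform, model_keyword)
    if device is None:
        raise RuntimeError("no matching device available")
    # Request an interactive remote access session on the chosen device
    session = df.create_remote_access_session(
        projectArn=project_arn, deviceArn=device["arn"]
    )
    return session["remoteAccessSession"]["arn"]
```

Once the session ends, the device simply returns to the shared pool; there is nothing to clean up on your side.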
Automated UI testing is straightforward to set up. You write your UI tests using native testing frameworks like Espresso or XCUITest, or cross-platform frameworks like Appium and Calabash. Once you have your UI tests, you upload the app (ideally with debugging turned on), any test data, and the UI tests to Device Farm. You then select any number of device/OS combinations – Device Farm currently has over 350 of them. Device Farm runs your UI tests against the provided app, and you pay for the device-minutes the tests consume. At the end of the test run, you see successes and failures, and you can retrieve the crash logs, device logs, screenshots, and video of each test run. You don't have to sit around waiting to see a test fail – it's recorded in enough detail to let you easily diagnose the problem.
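As a sketch of what that flow looks like once the app and test package have been uploaded, here is a hedged boto3 example in Python. The mapping of friendly framework names to Device Farm test-type strings is an assumed subset (verify the exact values against the Device Farm API reference), and all ARNs are placeholders.

```python
# Map friendly framework names to Device Farm test-type strings.
# (Assumed subset -- check the exact strings in the Device Farm API docs.)
TEST_TYPES = {
    "espresso": "INSTRUMENTATION",
    "xcuitest": "XCTEST_UI",
    "appium_python": "APPIUM_PYTHON",
    "calabash": "CALABASH",
}


def schedule_ui_test_run(project_arn, device_pool_arn, app_arn, test_pkg_arn, framework):
    """Kick off a Device Farm run for an already-uploaded app and test package."""
    import boto3

    df = boto3.client("devicefarm", region_name="us-west-2")
    run = df.schedule_run(
        projectArn=project_arn,
        appArn=app_arn,                 # the uploaded, debuggable app build
        devicePoolArn=device_pool_arn,  # the device/OS combinations to cover
        name="nightly-ui-tests",
        test={"type": TEST_TYPES[framework], "testPackageArn": test_pkg_arn},
    )
    return run["run"]["arn"]
```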
Device Farm supports a wide variety of Android and iOS devices across phone and tablet form factors. Since UI tests run against the actual compiled app, UI testing works across native, cross-platform, hybrid, and mobile web applications.
Device Farm will wipe and reformat the device after each test run, and the devices are located in a data center which minimizes physical interference between devices.
Device Farm supports all the major testing frameworks, and you get all the reports and artifacts from testing that you need to properly analyze the results.
Device Farm integrates with Jenkins (via the Jenkins plugin) and Android Studio. We also have an SDK and a CLI, allowing you to integrate testing into your own development workflow and work the way you want.
Pricing is based on device minutes, which are determined by the number of devices used multiplied by the duration of the tests. AWS Device Farm comes with a free tier of 250 device minutes (currently raised to 1,000 device minutes). After that, customers are charged $0.17 per device minute. As testing needs grow, customers can opt for an unmetered plan that allows unlimited testing for a flat fee of $250 per device per month. There are no long-term contracts, and customers are free to alter and optimize their pricing plan as their needs change.
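The metered arithmetic above can be sketched as a small calculator. The $0.17 rate, the standard 250-minute free tier, and the $250/device unmetered fee come straight from the paragraph; everything else is plain arithmetic.

```python
FREE_TIER_MINUTES = 250   # standard free tier from the text
METERED_RATE = 0.17       # dollars per device minute
UNMETERED_MONTHLY = 250.0 # dollars per device per month, unmetered plan


def metered_cost(devices, minutes_per_device, free_minutes=FREE_TIER_MINUTES):
    """Cost of a metered run: device minutes beyond the free tier at $0.17 each."""
    total_minutes = devices * minutes_per_device
    billable = max(0, total_minutes - free_minutes)
    return round(billable * METERED_RATE, 2)


# Example: a 20-minute suite on 50 devices = 1,000 device minutes.
# After the 250 free minutes, 750 minutes bill at $0.17 each = $127.50.
print(metered_cost(devices=50, minutes_per_device=20))  # → 127.5
```

The break-even point follows from the same numbers: a device that racks up more than about 1,470 metered minutes a month ($250 / $0.17) is cheaper on the flat unmetered plan.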
A number of companies with significant mobile properties – and a significant investment in their brand – rely on Device Farm. The common thread is that releasing a poor-quality app not only ruins its chances of being downloaded but damages the brand, leaving customers wondering about the quality of the company's other products. For example, Tableau has a continuous deployment model, and Device Farm is an integral part of that flow – they don't release unless they have a clean UI test run.
An interesting name in this bunch is Rainforest QA. One of the issues we heard over and over is that writing automated UI tests is hard. Rainforest QA uses human testers to execute your test plan on Device Farm, so you don't have to program the UI tests – you just write down what each test should be.
Mobile UI Testing for Quality
Ad-hoc Developer UI Testing
Remote access to real devices
Routine UI Testing
Nightly UI testing to find regressions
Multiple standard Devices
Pre-Production UI Testing
Test on a broad range of devices prior to release
CC BY 2.0: https://www.flickr.com/photos/adactio/14202296106
Gesture, swipe, and interact with devices in real time, directly from your web browser
AWS Device Farm – test on real devices
Test your app in parallel against a large collection of physical devices in the AWS Cloud
Select a device
View historical sessions
Interact with the device
Improve the quality of your apps by testing against real devices in the AWS Cloud
(native, hybrid, web)
AWS Device Farm
• Android and iOS (native, hybrid, web)
• Scale: Over 350 unique device and OS combinations
• Pay for what you use
• Integration: Jenkins, Android Studio, SDKs, CLI
• Reports: Results, screenshots, logs, performance, video
• Flexibility: Support for many popular frameworks
• Security: Full HW and SW isolation