Automated App Performance Testing… Monkeys Not Included

June 3rd, 2015


At Groupon we want to give our customers the best possible experience when using our mobile apps. A great experience is a fast experience. Achieving a fast experience requires ongoing performance testing. Our purchase order for an infinite number of monkeys was denied, so we’ve chosen to invest in test automation instead.

Instead of monkeys with stopwatches, we are using a combination of RoboRemote and Testdroid. RoboRemote is a remote control test framework for Android that utilizes UiAutomator and Robotium to control the application under test. RoboRemote sets application state, closes or restarts the application, monitors Android Debug Bridge (ADB) logs, and performs other steps necessary for repeatable and reliable tests.

Frameworks like Robotium or UiAutomator are great, but on their own a test automator has limited ability to do fun things like upgrading the app under test, clearing app data, or even rebooting the device/emulator during a test. With RoboRemote (and other remote control frameworks such as Appium) an automator has much more freedom, with direct access to ADB and the ability to run other commands on the host system. This flexibility comes in handy when you need to do something like setting initial application state for your performance tests.
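Under the hood, most of these device-level operations reduce to plain adb invocations. As a rough sketch (this is not RoboRemote's actual API; the helper functions and package name are hypothetical), a remote control layer might build commands like these:

```python
import subprocess

# Hypothetical helpers: these wrap plain adb invocations, not RoboRemote's API.
def adb_command(serial, *args):
    """Build an adb command targeting a specific device by serial."""
    return ["adb", "-s", serial] + list(args)

def clear_app_data(serial, package):
    """Reset the app to a first-launch state by wiping its data."""
    return adb_command(serial, "shell", "pm", "clear", package)

def force_stop(serial, package):
    """Kill the app so the next launch is a cold start."""
    return adb_command(serial, "shell", "am", "force-stop", package)

def reboot_device(serial):
    """Reboot the device/emulator mid-test when needed."""
    return adb_command(serial, "reboot")

def run(cmd):
    """Execute a prepared command; kept separate so commands can be inspected."""
    return subprocess.run(cmd, capture_output=True, text=True)
```

Separating command construction from execution also makes the control layer easy to unit test without a device attached.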

Testdroid gives us the ability to run our automation on real devices whenever we want to. With the tools provided we can see useful information like CPU and memory usage after our tests conclude. We also have the peace of mind that the devices will be available and managed for us. No one has to go into a room with a wall full of devices just to reboot one that was behaving erratically. We always know that we can walk into the office in the morning and see the results of our test automation. In other words, Testdroid gives us an efficient and reliable set of monkeys for our app performance test needs.

What is Being Measured?
If this is your first foray into mobile application performance testing then you may be wondering what should actually be measured or how to add markers/data points in your application. There are generally two parts to this.

  1. Define your metrics. For our example we are measuring application startup time, but that is a very general definition of the measurement. There is no universal definition of application startup time; you must define the metric for your application. It should be tied to an event where your user can start interacting with your app in a meaningful way. Using Groupon’s application as an example, we would define it as the moment the list of deals is shown to a customer. Even this definition is not specific enough; it still leaves room for interpretation. Are we measuring the time from when the user taps our icon to when the main activity starts, to when we receive a deal list from the server and render it, or to when our images actually load and render? Remember that your choice of what to measure will subtly influence what your team works on optimizing. Choose wisely!
  2. Add the measurement. You may already have a fancy logging platform such as Splunk. In that case it should be as simple as identifying the right start/end points in your code and logging the elapsed time to your platform. If you don’t have a fancy logging platform, you can simply log the results to logcat and post-process the information.
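If you go the logcat route, post-processing can be as simple as pairing start and end markers. A minimal sketch in Python, assuming your app logs hypothetical `STARTUP_BEGIN`/`STARTUP_END` markers with millisecond timestamps (the marker names and format are made up for illustration; adjust the regex to whatever your app actually logs):

```python
import re

# Hypothetical marker format: "STARTUP_BEGIN ts=<millis>" / "STARTUP_END ts=<millis>"
MARKER = re.compile(r"(STARTUP_BEGIN|STARTUP_END)\s+ts=(\d+)")

def extract_startup_times(logcat_lines):
    """Pair BEGIN/END markers and return elapsed milliseconds per launch."""
    times, begin = [], None
    for line in logcat_lines:
        m = MARKER.search(line)
        if not m:
            continue
        event, ts = m.group(1), int(m.group(2))
        if event == "STARTUP_BEGIN":
            begin = ts
        elif event == "STARTUP_END" and begin is not None:
            times.append(ts - begin)
            begin = None  # require a fresh BEGIN before the next END counts
    return times
```

For example, a log containing `STARTUP_BEGIN ts=1000` followed by `STARTUP_END ts=1850` would yield a single 850 ms sample.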

How do the Performance Tests Work?
For this example we’ll be focusing on app startup performance, however the methodology can be applied to any part of your application. We’ll first run through the basic steps for the test and then dive into what the tests are measuring.

Step 1 – Setup your application’s basic state. For Groupon, a monkey has set our current location and dismissed any introductory screens.

Step 2 – Start your application.

Step 3 – Wait until your application gets to a known state. In Groupon’s case we wait for our deal list to load.

Step 4 – Exit the application. This step may vary depending on what you are trying to measure. For this example we simply force the application to close. If your application caches data, you may need to take additional steps to clear that data between runs.

Step 5 – Go back to step 2 and repeat X times. The more repetitions, the more reliable your numbers.
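Steps 2 through 5 boil down to a simple loop around adb. A minimal sketch, assuming a hypothetical package name and using a fixed sleep as a crude stand-in for detecting the loaded state (a real test would poll for a UI element or log marker instead):

```python
import subprocess
import time

PACKAGE = "com.example.app"            # hypothetical package name
MAIN = PACKAGE + "/.MainActivity"      # hypothetical launch activity

def launch_cycle(iterations, run=subprocess.run, wait=lambda: time.sleep(10)):
    """Steps 2-5: cold-start the app, wait for a known state, then kill it."""
    for _ in range(iterations):
        run(["adb", "shell", "am", "start", "-n", MAIN])    # step 2: start the app
        wait()                                              # step 3: wait for known state
        run(["adb", "shell", "am", "force-stop", PACKAGE])  # step 4: exit the app
```

Injecting `run` and `wait` as parameters keeps the loop testable without a device attached.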

Step 6 – Query your metrics platform (in our case Splunk) for all of the data generated during your tests. If you don’t have a platform for this then an alternative would be to log the data to the console and collect it via logcat. At Groupon we integrate this step within our Jenkins build by utilizing the Testdroid API to determine when our tests have finished executing and then firing off an API request to Splunk to collect the data.

Step 7 – Graph the data over time and evaluate it for any performance boosts or regressions. A very basic graph is shown below.
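Whatever graphing tool you use, it helps to reduce each run's raw samples to a few stable numbers and compare them against the previous build. A small sketch (the field names and the 10% threshold are arbitrary illustrative choices, not Groupon's actual values):

```python
import statistics

def summarize(samples_ms):
    """Reduce raw startup times to the numbers worth graphing per build."""
    ordered = sorted(samples_ms)
    return {
        "median": statistics.median(ordered),
        "p90": ordered[max(0, int(len(ordered) * 0.9) - 1)],
        "min": ordered[0],
        "max": ordered[-1],
    }

def regressed(baseline, current, threshold=0.10):
    """Flag a regression if median startup slowed by more than the threshold."""
    return current["median"] > baseline["median"] * (1 + threshold)
```

Comparing medians (rather than means) makes the check more robust against the occasional outlier run on a busy device.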

Everything you just read is an example of how Groupon approaches mobile performance testing. Use this information to tailor your own mobile app performance testing strategy. The important things to remember are:

  • Automation on real devices provides repeatability
  • You should have a well thought out metric to test
  • Finance will not let you get an infinite amount of monkeys

If you have questions about mobile performance testing please feel free to contact me at

There are many more great open source projects from the engineers at Groupon, such as:

Odo – A mock proxy server for your automation, manual testing or development needs.
DotCI – Bringing the ease of configuration of cloud CI systems like Travis and the runtime configuration of Docker to Jenkins.
Testium – Integration testing for Node.js

Additional projects can be found at Groupon’s GitHub page. If you are interested in positions at Groupon or think you can build a more effective automated monkey then please visit

This post has been cross-posted to the Testdroid Blog.
