2020/12/04 • 4 min read

Quality at Speed for our Mobile Application

In the digital industry, La Redoute, and especially all its feature teams, invests in accelerating software development to provide the best customer experience. The continuous improvement of performance, coupled with growing complexity, has led the Quality team to invest in tools and services that help all teams improve and manage the quality of their software releases.

The La Redoute mobile application is increasing in adoption

Currently, our iOS and Android applications have a better “conversion rate” than our website, and their business volume is also growing faster than the web's. Unfortunately, the application testing effort has not followed this trend.

Application testing was not optimal: it was mostly based on manual execution, with partial functional coverage and insufficient hardware coverage (devices, OS versions, etc.).

 

Our mobile application testing was lacking coverage and speed

At La Redoute, we use the Cerberus Testing tool to manage and execute all our automated tests. Test campaigns are executed either daily or on demand, depending on the maturity of the application, and a full run is systematic before any production deployment, across all environments. Cerberus covers all non-regression tests during deployments. We currently run more than 5,000 automated tests on the web side but, so far, only 140 automated tests on the mobile application scope.

In terms of execution time:

  • On the web, the 5,000 automated tests take 75 minutes to run,
  • While the 140 mobile application tests take 90 minutes.

So, the question was: How can we increase the coverage and improve the quality of the La Redoute mobile application?

Figure 1: La Redoute Mobile App

Could we build our own solution?

The idea of implementing a bespoke application farm began to emerge, as we had the skills to build a farm of phones compatible with our automated testing solution.

Since 2017, we had been using Browserstack to run our non-regression tests for web and mobile browsers, which gave us access to 10 phones (iOS/Android) in their cloud. This experience allowed us to gather metrics such as phone availability, execution duration, number of tests run per non-regression campaign, etc.

Browserstack's quote was 5 times higher than our internal cost estimate for a platform of 20 phones in a private cloud (10 Android and 10 iOS). Moreover, test execution on their cloud took much longer than on physical devices. We were ready to accept the internal management overhead, which our cost comparison forecast as marginal.

“You Don’t Need A Novel Infrastructure Idea To Be Successful”

Two devices were ready, one dedicated to iOS tests and one to Android, executing 70 test cases in 4 hours. Our goal was now to increase the coverage to 400 iOS and Android test cases while, most importantly, not exceeding 2 hours.

 

The Solution we implemented

Technical Solution

Here is the technical solution we put in place, described in the following sections.

Solution Architecture

As explained in the previous paragraph, each OS requires specific tools, so we set up two separate technical stacks, one for Android and one for iOS.

Figure 2: Mobile Farm Architecture

 

Interacting with Android app using UI Automator

For Android, we rely on existing platforms and tools, mainly the Android SDK.

To establish the connection between the phone and the controller, we use the Android Debug Bridge (ADB). The test framework used for Android is UI Automator, which allows us to interact with UI elements (click, type, scroll, etc.).
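To give an idea of how this fits together, here is a minimal sketch of an Android test session driven through Appium with the UiAutomator2 engine. It assumes the Appium java-client; the server URL, device id, package and activity names are placeholders, not our actual configuration.

    // Minimal sketch: one Android session through Appium, driven by UiAutomator2.
    // Assumes an Appium server is running locally and the device is visible to `adb devices`.
    import io.appium.java_client.AppiumBy;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import java.net.URL;

    public class AndroidSessionSketch {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("appium:automationName", "UiAutomator2"); // UI Automator under the hood
            caps.setCapability("appium:udid", "R58M1234ABC");            // placeholder ADB device id
            caps.setCapability("appium:appPackage", "com.example.shop"); // hypothetical app package
            caps.setCapability("appium:appActivity", ".MainActivity");   // hypothetical launch activity

            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // Native elements are located and driven much like web elements: find, click, type.
                driver.findElement(AppiumBy.accessibilityId("search")).click();
            } finally {
                driver.quit();
            }
        }
    }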

 

iOS supported by its specific ecosystem

For iOS, we use Xcode and WebDriverAgent (WDA). Some configuration is required in Xcode: all devices must be registered under an Apple Developer account, and the development team must be set up in Xcode with a validly signed app.

Once these conditions are met, we can activate ‘Enable UI Automation’ in the device settings, under the Developer options.
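As an illustration of the iOS side, here is a hedged sketch of an XCUITest session; the UDID, bundle id, team id and port are placeholders, and the signing capabilities simply mirror the Xcode prerequisites described above.

    // Minimal sketch: one iOS session through Appium's XCUITest driver (WebDriverAgent).
    // Assumes the device is provisioned and WDA can be signed with the given team.
    import io.appium.java_client.AppiumBy;
    import io.appium.java_client.ios.IOSDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import java.net.URL;

    public class IosSessionSketch {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "iOS");
            caps.setCapability("appium:automationName", "XCUITest");       // drives WebDriverAgent
            caps.setCapability("appium:udid", "00008030-PLACEHOLDER");     // placeholder device UDID
            caps.setCapability("appium:bundleId", "com.example.shop");     // hypothetical bundle id
            caps.setCapability("appium:xcodeOrgId", "TEAMID1234");         // Apple Developer team (placeholder)
            caps.setCapability("appium:xcodeSigningId", "iPhone Developer");

            IOSDriver driver = new IOSDriver(new URL("http://127.0.0.1:4724/wd/hub"), caps);
            try {
                driver.findElement(AppiumBy.accessibilityId("Search")).click();
            } finally {
                driver.quit();
            }
        }
    }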

For both operating systems, we use Selenium, with which we set up a device grid. Selenium is a test automation framework, complemented here by Appium, whose main goal is to help interact with Android and iPhone elements.

In detail, for our iPhones we set up a Selenium Grid on a Mac machine with Xcode installed. There we register our available iPhones, and each time we execute a test case, the Selenium Grid distributes it to the next available device. The same is done for Android, using a standard PC without Xcode.
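On the client side, the only difference is the endpoint: the test points at the grid hub rather than at a single Appium server, and the hub routes the session to the next free phone matching the requested platform. A minimal sketch, with an illustrative hub hostname and port:

    // Minimal sketch: targeting the Selenium Grid hub instead of one Appium server.
    // The hub forwards the session to the next available registered iPhone node.
    import io.appium.java_client.ios.IOSDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import java.net.URL;

    public class GridSessionSketch {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "iOS");
            caps.setCapability("appium:automationName", "XCUITest");

            // "mac-grid-hub.local" is a hypothetical hostname for the Mac running the grid hub.
            IOSDriver driver = new IOSDriver(new URL("http://mac-grid-hub.local:4444/wd/hub"), caps);
            // ... run the test steps, then release the device.
            driver.quit();
        }
    }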

 

Cerberus is then configured to use the Appium testing farm

Whether for Android or for iOS, we need the Appium framework. Interactions with the devices are managed by Appium; thanks to its abstraction layer, it does not matter whether the device runs Android or iOS. Each Appium server is configured to target a single phone and occupies its own port.
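To make the one-server-per-phone layout concrete, here is a hedged sketch using the programmatic service launcher from the Appium java-client. Device ids and ports are placeholders; the actual binding of each server to its phone, and its registration as a Selenium Grid node, is done in the node configuration.

    // Minimal sketch: starting one Appium server per phone, each on its own port.
    import io.appium.java_client.service.local.AppiumDriverLocalService;
    import io.appium.java_client.service.local.AppiumServiceBuilder;
    import java.util.ArrayList;
    import java.util.List;

    public class FarmServersSketch {
        public static void main(String[] args) {
            List<String> deviceIds = List.of("phone-01", "phone-02", "phone-03"); // placeholder ids
            List<AppiumDriverLocalService> servers = new ArrayList<>();
            int port = 4723;

            for (String udid : deviceIds) {
                AppiumDriverLocalService service = AppiumDriverLocalService.buildService(
                        new AppiumServiceBuilder()
                                .withIPAddress("127.0.0.1")
                                .usingPort(port++)); // one dedicated port per phone
                service.start();
                servers.add(service);
                System.out.println("Appium server for " + udid + " listening on " + service.getUrl());
            }

            // ... tests are routed to these servers via the grid; stop them when the campaign ends.
            servers.forEach(AppiumDriverLocalService::stop);
        }
    }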

In Cerberus Testing, we have configured a “robot” for Android and a second one for iOS (a robot being the set of information used to target devices for testing). For each robot, we specify which test framework is used: UiAutomator or XCUITest.

When a test is run on a robot, Cerberus sends the orders (that is, the actions that constitute a test case) to the Android or iPhone grid. The Selenium Grid then distributes the test case to the next available node/device. If no node is available, the test is put on hold until a node becomes free again, following a round-robin method.

All devices are connected to a USB hub (one for iOS and another for Android) that automatically detects connected devices, adjusts the output power to preserve battery health, and manages the synchronization of data transfers.

This hub also comes with an application that allows us to manage and monitor the devices and to define scripts to be executed (for example, to restart a phone).

 

Infrastructure Solution

We invested in a large, ventilated multimedia storage cabinet able to hold more than 60 smartphones, with the following features:

  • Ventilation to prevent the phones from heating up
  • All phone functionalities and buttons kept accessible
  • Easy-to-build supports

We bought 26 phones (14 iPhones and 12 Samsung) and two hubs (one for Android, one for iOS) with 15 ports each.

Finally, our colleague in Portugal designed 3D supports to hold the devices.

Figure 3: The Mobile Application Devices Closet

We can now expand our mobile application tests with a robust and scalable solution

To sum up, we improved the campaign execution time by 40% for Android and by more than 25% for iOS. This was a real enabler to increase test coverage, going from 200 to 400 mobile tests running in about 100 minutes. In the long term, in addition to the improvements in coverage and execution duration, we will reduce the mobile application testing farm by 50%.

On the other hand, we kept our Browserstack “App Automate” service to increase coverage and quality on specific devices. As a result, we run our tests on multiple device models, since each country favours specific ones: for example, 5.9% of our Russian customers use an iPhone 7. This allows us to offer a better experience to our localized end-users.

Finally, implementing this solution was a challenge successfully met by the testing team, and especially by Telmo Miranda, Filipe Fazendeiro, and Stephane Dessaint, who managed to build the application farm during the COVID period within the scheduled time frame. We consider this in-house solution a first release, and we plan to improve and enhance it over the coming period.

 
