Punchh’s automation framework — Enhancing scale and quality of mobile app testing

Punchh Technology Blog
Jul 6, 2020 · 6 min read

Authors: Naveen Jangid, Asharam Yadav

Introduction

Automation testing is the process of executing test cases with automation tools and frameworks, and generating automated test reports that compare actual results with expected results. There are various test automation tools, such as Appium, Calabash, Selenium and Cypress, and with these tools one can build a framework suited to one's own scenarios and requirements. Automation testing improves test coverage, speeds up testing, detects defects earlier (improving quality), and scores high on reusability and simultaneity, since the same suites can run concurrently on multiple devices and mobile applications.

At Punchh, we currently have 150+ loyalty-based mobile apps, and we are building mobile applications for new clients while updating existing apps with feature enrichment. Moreover, each mobile app has at least 70 screens. Our mobile app QA team had been testing manually and rigorously, which worked well, but with increasing scale we found it more efficient and effective to automate a large part of the testing process. Automation testing enables quick turnaround for the majority of the known test cases, leaving enough time for critical testing and risk coverage. This article gives an overview of what we have done, and what we have achieved from our adoption of automation testing, for mobile apps that reach more than 10% of the US population per day.

Benefits that we achieved

Efficiency

Overall, we have been able to save time, increase the velocity of the testing process and enhance quality. Detailed automation reports give us more effective QA feedback. Automation also enables easy sanity testing, a preliminary round that developers run before handing a build over to QA, which has generated value in the app delivery process over the long run.

The QA analysts can focus more on custom features, or those that need negative testing, rather than the central features of our app, which the automation framework covers. Feedback is also faster: developers become aware of issues across roughly 60% of the test scope through automated detection, which accelerates results and reduces costs. Automation has also aided the regression testing process.

Flexibility

Automation has brought in huge efficiencies through reusability, flexibility and concurrent execution. Once scripted, test suites can be executed any number of times, in every release and on every device.

Automation gives us the flexibility to run tests at any time: by integrating the automation framework with the Bitrise CI tool, execution can be triggered automatically, either at a scheduled time or when a build is released. Parallel execution is also possible: running test cases on multiple devices at the same time reduces QA time significantly.

How we went about this

Tools & Technologies

We primarily used the following tools and technologies to develop the automation framework:

Automation Tool and Framework — Appium with Java

Test Framework — TestNG

CI tool — Bitrise

Why we chose Appium

Appium supports multiple frameworks and programming languages. Being open source, it lets us develop the framework to fit our requirements. It supports native, web and hybrid applications, and allows cross-platform testing across Android and iOS.

It also doesn't need the app's source code to run tests, and it doesn't require any modification of the application in order to automate it.

Process

Our list of test cases (around 600 in TestCollab) was translated into automation test scripts. Around 150 end-to-end automation test cases, all based on our major module (loyalty), cover almost 400 manual TestCollab test cases related to sanity coverage, device coverage and regression. These are mostly functional cases, with UI testing limited to checking the presence of elements (such as verbiage and alerts). In one automation test class, we combine several manual test cases as well as several permutations and combinations of user inputs (Appendix A shows an example).
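Combining several permutations of user input into one automation class maps naturally onto TestNG data providers. The sketch below illustrates the idea; the class, method and input values are illustrative examples, not our actual test code:

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SignUpTest {

    // Each row is one permutation of user input exercised by the same
    // test method, collapsing several manual cases into one class.
    @DataProvider(name = "signUpInputs")
    public Object[][] signUpInputs() {
        return new Object[][] {
            {"user@example.com", "ValidPass1!", true},
            {"user@example.com", "short", false},
            {"not-an-email", "ValidPass1!", false},
        };
    }

    @Test(dataProvider = "signUpInputs", groups = {"functional"})
    public void signUp(String email, String password, boolean shouldSucceed) {
        // Appium driver interactions elided.
    }
}
```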

When any mobile app comes to QA with even a small change, both regression testing and device coverage testing have to be done. Automation testing helps with functional testing, regression testing and device coverage. Beyond mobile app testing, the automation scripts also replicate server-side dashboard operations such as barcode generation and fetching white-label data from our servers. We run these for both new app launches and major app updates.

Execution aspects

Test framework configurations are essential to trigger the tests appropriately. Among the configurations, the mobile app related ones (app name, package, bundle ID) and device related ones (device name, OS, platform) are primary. Having a diverse set of device configurations (a minimum of three iOS versions and five Android versions, across various device types) ensures we cover a large spectrum of mobile devices.
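As an illustration, a typical set of Appium desired capabilities covering both app-related and device-related configuration might look like the following; the values shown here are placeholders, not our actual configuration:

```json
{
  "platformName": "Android",
  "platformVersion": "10",
  "deviceName": "Pixel 3",
  "appPackage": "com.example.loyaltyapp",
  "appActivity": "com.example.loyaltyapp.MainActivity",
  "automationName": "UiAutomator2"
}
```

For iOS, the app-related keys change accordingly (for example, `bundleId` instead of `appPackage`, with the `XCUITest` automation backend).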

We use separate, loosely coupled classes for each testing module; all the modules to be executed are listed in the testng.xml. Automation cases are managed on the basis of tags (annotations); the tags used are Sanity, Smoke and Functional. Whenever a tag is chosen, only the test cases pertaining to that tag are executed. (Code snippets are shared in Appendix A for illustration; the corresponding tags are mentioned for each class.)
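For illustration, a trimmed-down testng.xml along these lines lists the module classes and selects a tag (TestNG group) to run; the suite, class and group names here are examples, not our actual suite:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="MobileAppSuite">
  <test name="SanityRun">
    <groups>
      <run>
        <include name="sanity"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.SignUpTest"/>
      <class name="com.example.tests.LoyaltyRewardsTest"/>
    </classes>
  </test>
</suite>
```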

  1. Sanity Tag: Sanity testing is a subset of regression testing, performed to ensure that recent code changes work properly at a broad level. Both functional and UI cases are included in the sanity tag.
  2. Smoke Tag: Includes the critical cases that check the stability of the build; these act as a gate before further testing proceeds.
  3. Functional Tag: Includes only functional test cases.

By using tags we are able to trigger the appropriate set of test cases through automation, in phases of testing like Build Acceptance, Functional Testing, Regression and Retesting.
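In TestNG, tying a test method to these tags is a matter of group annotations. A minimal sketch, with illustrative class and method names (a method can belong to several groups, so a critical case can run in both the smoke and sanity phases):

```java
import org.testng.annotations.Test;

public class LoginTest {

    // Critical build-stability check: runs in both Smoke and Sanity phases.
    @Test(groups = {"smoke", "sanity"})
    public void userCanLogIn() {
        // Appium driver interactions elided.
    }

    // Functional-only case, skipped during smoke runs.
    @Test(groups = {"functional"})
    public void loginFailsWithWrongPassword() {
        // Appium driver interactions elided.
    }
}
```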

As the tests run, the simulator screens advance in step with each automated action, and a screenshot of each step is captured; when errors are found, details are shown for ready reference (as in Appendix B.4).
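Capturing a screenshot on failure can be wired up with a TestNG listener along these lines. This is a sketch: `DriverFactory.getDriver()` is a hypothetical accessor for the active Appium driver, to be adapted to your own driver management.

```java
import java.io.File;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class ScreenshotOnFailureListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // Hypothetical accessor for the active Appium driver.
        TakesScreenshot driver = DriverFactory.getDriver();
        // Selenium/Appium screenshot of the app at the moment of failure.
        File shot = driver.getScreenshotAs(OutputType.FILE);
        // Attach the failure cause to the report alongside the screenshot.
        System.out.println("Failed: " + result.getName()
                + " caused by " + result.getThrowable());
    }
}
```

The listener is registered in testng.xml (or via the `@Listeners` annotation), so every failing test automatically records the evidence shown in Appendix B.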

Life cycle of automation testing

The automation testing life cycle runs in parallel with the Software Development Life Cycle (SDLC) process.

Appendix A

Appendix B

B.1 Execution report screenshots (on the console)

B.2 Pass test case steps screenshot

B.3 Failure test case steps screenshot

B.4 When a test case fails, an app screenshot is also captured automatically, with the cause of failure added at the bottom of the screenshot.

About the Authors

Naveen Jangid leads the automation testing initiative for the mobile delivery team at Punchh.

Asharam Yadav leads the mobile QA team at Punchh.

The authors would like to thank Praveen Pandey (Head of Delivery), Narendra Verma (Engineering Manager, Delivery) and Arun Ginjala (Chief of Staff, Engineering) for their guidance.

Punchh Technology Blog

Punchh is a marketing and data platform. On this blog, we share our learnings from data and technology.