Behavior-driven UI test automation with Selenium
The Warehouse Management System (WMS) is the software system that runs nearly all operations in Picnic fulfillment and distribution centers, enabling warehouse workers to pick orders and keep track of stock. At Picnic, we continuously improve our applications. In an earlier blog post, we shared how we built our WMS, one flow at a time.
To keep up with the rapid development of new flows, the need for automated testing increased. On the backend, we use multiple automated testing strategies: unit tests, integration tests, and component tests. For the most part, testing the frontend remained manual work.
That’s why we recently introduced UI test automation for the WMS application. We have now moved past the stage of designing an architecture for these tests and are migrating flows to the new setup. We would like to share how we arrived at this setup, and what alternatives we considered.
How did we test WMS before?
The WMS system has a Java 11 backend and an Angular 8 frontend. It is extensively covered by all kinds of tests. Before introducing UI tests, we tested the application on the following levels:
1. Java unit tests: We use unit tests to verify the written methods on the lowest level. Our unit tests only test the logic of a single method and mock potential integrations. This way we can test pre- and postconditions of one specific unit.
2. Integration tests: Our integration tests still test at the unit level but include integrating services as well. This way, a set of pre- and postconditions can be tested that verify how the unit should integrate with its related units.
3. Backend component tests: The component tests view the application as a black box, and test the entire application in its running state by calling API endpoints. Our component testing setup consists of behavior-driven tests using the library ‘Behave’ for Python.
4. Angular tests: Angular has its own testing framework (using Jasmine & Karma) to verify the existence of certain UI components for a specified state of the Angular application. We use this in a fashion similar to backend unit testing, to test that a certain webpage fits a set of postconditions given an application state.
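The first two levels boil down to the same pattern: assert pre- and postconditions around a single unit while mocking out its integrations. Our backend is Java, but the pattern can be sketched in a few lines of Python (the `StockService` and its repository are hypothetical examples for illustration, not actual WMS classes):

```python
from unittest import mock

class StockService:
    """Hypothetical unit under test: adjusts stock via an injected repository."""

    def __init__(self, repository):
        self._repository = repository

    def decrease_stock(self, article_id, amount):
        current = self._repository.get_stock(article_id)
        if amount > current:
            raise ValueError("cannot pick more than the available stock")
        self._repository.set_stock(article_id, current - amount)

# Unit test: the repository (an integration) is mocked, so only the
# logic of decrease_stock itself is verified.
repository = mock.Mock()
repository.get_stock.return_value = 10

service = StockService(repository)
service.decrease_stock("A1", 4)

repository.set_stock.assert_called_once_with("A1", 6)
```

An integration test would instead wire in a real repository, so that the same pre- and postconditions are verified together with the unit’s actual collaborators.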
We test from all these viewpoints, but we felt this was still not enough! As the number of flows grew, regression testing the entire application (frontend included) became extremely time-consuming to do manually. To solve that, we recently introduced UI tests into the testing pipeline. Like component tests, they test the entire application in its running state; but instead of only calling API endpoints, these tests use a combination of API endpoints and frontend controls to test application features.
How did we automate the WMS frontend?
Now to the fun part: how did we actually implement our UI test automation? We considered a large set of tools in the UI automation scene and eventually chose Selenium because it best suits our tech landscape. Many of the tools in the Selenium landscape use a very confusing naming scheme (Selenium, Selenoid, Selenide, Selene, etc.), but hold tight, all will be clear soon!
Most of you have probably heard of Selenium. It is a framework that handles interactions between an automation script and the web browser. This browser has to run somewhere, and we decided to run it in Docker containers. This makes the test automation script more system-independent, and since the WMS systems also run in Docker containers, we created a single docker-compose file to run everything required for testing. This is where Selenoid comes in: Selenoid handles the creation of browser containers when a test scenario is started. We chose to spin up a new Docker container for each test scenario, to make the scenarios as independent as possible.
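As an illustration, a minimal compose file for such a setup could look roughly as follows (the image names and the WMS service are placeholders, not our actual configuration; Selenoid additionally needs access to the Docker socket and a browsers.json describing which browser images it may launch):

```yaml
version: "3"
services:
  selenoid:
    image: aerokube/selenoid:latest
    ports:
      - "4444:4444"          # WebDriver endpoint the test scripts talk to
    volumes:
      # Selenoid starts a fresh browser container per session via the Docker API
      - /var/run/docker.sock:/var/run/docker.sock
      # browsers.json lists the browser images Selenoid is allowed to launch
      - ./selenoid-config:/etc/selenoid:ro

  wms-frontend:
    image: wms-frontend:latest   # placeholder for the application under test
    ports:
      - "8080:80"
```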
For reporting, we use a tool called ‘Allure’. Allure collects data whilst running the tests and generates an HTML report when finished. Allure integrates nicely with Cucumber, Selenide, and Behave, documenting each step performed in a structured way. For the WMS automation, we configured Selenium to take a screenshot with each step, so the generated report shows every step of a scenario together with its screenshot.
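For the Python stack, Allure hooks into Behave as a formatter, and the HTML report is generated afterwards from the collected results. The invocation looks roughly like this (the paths are illustrative):

```shell
# Run the Behave scenarios with the Allure formatter, writing raw results
behave -f allure_behave.formatter:AllureFormatter -o allure-results ./features

# Generate the HTML report from those results and open it in a browser
allure serve allure-results
```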
Python or Java, which one to choose?
The most difficult decision in the history of mankind: “Python or Java?” At Picnic, we mostly use Java for our production applications and Python for data-related scripts. Test scripts sit a bit in the “in-between zone”, so we decided to investigate both a Java and a Python solution.
Anyone who has experimented with Selenium on a modern web application probably knows that finding elements can be quite frustrating. For instance, if elements are loaded dynamically, Selenium might not find them because the script runs faster than the webpage. That’s why we decided to pull in a wrapper library: Selenide for Java and Selene for Python. These libraries extend Selenium with functionality that makes it more pleasant to use in a testing context: a fluent API to perform assertions and actions in one go, implicit waiting for elements to appear, neater syntax, and driver management.
The code fragments below compare an example page object model in Java and Python.
Java
public class AdminLogin {

    @FindBy(xpath = "//*[@formcontrolname='username']")
    protected SelenideElement username;

    @FindBy(xpath = "//*[@formcontrolname='password']")
    protected SelenideElement password;

    public void enterUsername(String input) {
        username.shouldBe(empty).setValue(input);
    }

    public void enterPassword(String input) {
        password.shouldBe(empty).setValue(input);
    }

    public void submit() {
        password.pressEnter();
    }
}
Python
from selene import browser, by, be

username = by.xpath("//*[@formcontrolname='username']")
password = by.xpath("//*[@formcontrolname='password']")

def enter_username(input_name):
    browser.element(username).should(be.blank).set(input_name)

def enter_password(input_pass):
    browser.element(password).should(be.blank).set(input_pass)

def submit():
    browser.element(password).press_enter()
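The implicit waiting these wrapper libraries provide essentially boils down to polling a condition until it holds or a timeout expires. Stripped of all Selenium specifics, the idea looks roughly like this sketch:

```python
import time

def wait_until(condition, timeout=4.0, interval=0.1):
    """Poll `condition` until it returns a truthy value, or fail after `timeout`.

    This mirrors what Selenide/Selene do internally before assertions and
    actions, which is why dynamically loaded elements stop being a problem.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

A call like `browser.element(username).should(be.blank)` then conceptually reduces to `wait_until(lambda: element_is_blank(username))`.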
In the end, we went with Python, to keep the UI tests similar to our backend testing setup. Our backend component tests already implement a client for the full WMS API, and reusing that client in the UI tests lets us build up application state through the backend, since those operations are slow to execute via the frontend. This way, we only perform UI interactions for the flow that is actually under test.
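Conceptually, each UI test scenario then splits into an API phase and a UI phase. A minimal sketch of the idea (the client and endpoint names here are made up for illustration; our real tests reuse the API client from the backend component tests):

```python
class WmsApiClient:
    """Hypothetical thin client for the WMS backend API."""

    def __init__(self, post):
        # The HTTP transport is injected (e.g. requests.post in real tests).
        self._post = post

    def create_order(self, order):
        return self._post("/api/orders", order)

def prepare_orders(api, orders):
    """API phase: build application state quickly, without touching the UI."""
    return [api.create_order(order) for order in orders]

# The UI phase would then exercise only the flow under test, e.g. picking
# the prepared orders through the frontend with Selene page objects.
```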
To sum it up
With UI test automation in place, our entire WMS landscape is covered by behavior-driven tests on both the front- and backend. These tests serve as documentation of the flows themselves, and they can be executed to verify that the flows yield the intended results. The main job of QA then becomes writing the features, verifying the coverage of the tests, and checking that the pre- and postconditions of the tests match the specification.
Automating our tests has greatly reduced the pressure on QA and will continue to do so as the WMS grows. Features now take somewhat longer to develop (since development includes writing test scenarios), but that time is easily regained through the reduction in manual testing.
If your growing regression testing backlog is constantly pulling you back and testing your application becomes tedious repetition, then we strongly suggest that you look into UI test automation. And we hope that our example motivates you to at least try automated UI testing in one of your projects.