We’ve been working hard in our team to integrate automated tests into our build and deploy pipeline.
Automated tests run whenever changes are made, helping us find and fix issues as soon as possible. This is tremendously beneficial for the project's efficiency and the quality of the deliverable. To achieve it, we have integrated a set of tools into the build and deployment process, which means we get the feedback we need almost immediately.
I’ll explain the progress we’ve made and what is involved.
What is the build and deploy process?
During the development of a website, multiple developers simultaneously submit code changes for the tasks they are working on. When we as Quality Assurance (QA) Analysts are ready to test the changes, we ask the build server to compile and deploy the latest version of the code to a local test environment. Once the deployment is complete we can then start to test.
Although the changes are likely to be for new functionality, it's very possible that the new code has had a knock-on effect on existing functionality that was previously working. So, on top of testing the new features against their acceptance criteria, it's important for us to make sure that everything else still functions and displays as it did in the previous version. Failure to spot these issues can impact the project further down the line, and it's exactly these types of issues where automated tests come into their own.
What type of tests have we automated?
So far in our team we have managed to automate three different aspects of a project:
- Automated endpoint integration testing of a third-party API using Postman
- Comparing the visual differences between versions of the code using Wraith
- Asserting that key user journeys are functioning as expected across multiple browsers using Selenium WebDriver
Each of these has been configured as a post-deployment task; the tasks are executed in turn once a deployment has been successful.
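To make the flow concrete, here is a minimal sketch of how post-deployment stages could be chained and their outcomes collected. The stage names and runner function are hypothetical stand-ins; in practice each stage would invoke the real tool (Postman, Wraith, Selenium) rather than a stub.

```python
# Hypothetical sketch: run each post-deployment test stage in turn and
# collect a simple pass/fail report. Real stages would shell out to the
# actual tools; these stubs just simulate their outcomes.

def run_post_deployment_stages(stages):
    """Run each (name, callable) stage in order; record pass or failure."""
    report = {}
    for name, stage in stages:
        try:
            stage()
            report[name] = "passed"
        except AssertionError as err:
            report[name] = f"failed: {err}"
    return report

def postman_stage():
    pass  # stand-in: all endpoint tests pass

def wraith_stage():
    pass  # stand-in: no unexpected visual differences

def selenium_stage():
    raise AssertionError("checkout journey failed")  # simulated failure

print(run_post_deployment_stages([
    ("postman", postman_stage),
    ("wraith", wraith_stage),
    ("selenium", selenium_stage),
]))
# → {'postman': 'passed', 'wraith': 'passed', 'selenium': 'failed: checkout journey failed'}
```

Running every stage even after a failure means we get a full set of reports from each deployment rather than stopping at the first problem.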
API Endpoint Integration Testing
Often the websites we build communicate with a back-end system via an HTTP API. This is the middleman that sits between our website and a database protected behind a secure firewall. If there's an issue with any of the API endpoints, areas of the website will likely stop functioning. Testing these endpoints is very useful because it can save time investigating a problem with the website when the issue is actually down to the API.
To perform the endpoint tests we use Postman. This tool lets us script a set of HTTP requests that we can fire off at the API. We set pass/fail criteria for each request, such as asserting that a successful HTTP status code (200) is returned. We can also assert on conditions in the response data.
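Postman's own test scripts are written in JavaScript inside the tool itself, but the kind of checks we set are easy to illustrate. The sketch below is a Python equivalent applied to a stubbed response; the status code, `items` field, and payload are made-up examples, not our real API.

```python
# Illustrative sketch of the pass/fail criteria we set for each request:
# a successful status code, plus conditions on the response data.
# The response dict is a stub, not a real API payload.

def check_endpoint(status_code, body):
    """Return a list of failure messages for one endpoint response."""
    failures = []
    if status_code != 200:
        failures.append(f"expected HTTP 200, got {status_code}")
    if "items" not in body:
        failures.append("response is missing the 'items' field")
    return failures

stub_response = {"items": [{"id": 1}], "total": 1}
print(check_endpoint(200, stub_response))  # → [] (no failures)
```

An empty failure list marks the request as passed; anything else fails the test and ends up in the report.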
When this test step is triggered, Postman fires off our collection of requests and generates a report collating all the results. We can then analyse the results and notify the relevant parties if there's an issue.
Visual differences with Wraith
After the API tests have been completed, the next build step is triggered using Wraith. Wraith is another testing tool we use to compare screenshots of the front end before and after the deployment. QA Analyst George Fraser covered this tool in a previous article, so I won’t go into detail.
Just like the Postman tests, Wraith will generate a report of its results, highlighting pages that have changed since the last version was deployed. It is, of course, entirely possible that pages have changed intentionally as part of the work done, but it’s useful for checking pages that we know should not have changed.
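As a toy illustration of what a visual-diff tool reports, the sketch below computes the percentage of pixels that differ between two captures. Real tools like Wraith compare full screenshot image files; here the captures are stubbed as flat pixel lists, and the 5% threshold is a made-up example.

```python
# Toy version of a visual diff: what fraction of pixel positions changed
# between the "before" and "after" captures of a page. Screenshots are
# stubbed as flat lists of pixel values for illustration.

def diff_percentage(before, after):
    """Percentage of pixel positions that differ between two captures."""
    assert len(before) == len(after), "captures must be the same size"
    changed = sum(1 for a, b in zip(before, after) if a != b)
    return 100.0 * changed / len(before)

THRESHOLD = 5.0  # hypothetical: flag pages with more than 5% change

before = [0, 0, 0, 1, 1, 1, 2, 2]
after  = [0, 0, 0, 1, 1, 9, 9, 2]
print(diff_percentage(before, after))  # 2 of 8 pixels changed → 25.0
```

A page whose diff exceeds the threshold is flagged in the report for a human to judge whether the change was intentional.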
Asserting key user journeys with Selenium WebDriver
Selenium is another tool George covered in his blog, so you can find more details about it there as well. The same process follows after the Wraith tests have completed: Selenium runs through the suite of tests written by the QA team and, like the other tools, generates a report.
Over the last year we’ve introduced Slack into our project teams. This has vastly improved communication between project stakeholders, and it’s far easier than dealing with multiple email chains. Slack also comes with the added bonus of integrating with all of the tools in our build, deploy, and test pipeline, so we can configure each component to push detailed notifications into our project’s channel. If an automated test fails, for example, we’re notified instantly with a link to the test results detailing the reasons for the failure.
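Slack's incoming-webhook API accepts a simple JSON payload with a `text` field. The sketch below builds such a payload for a failed test stage; the stage name and results URL are made-up examples, and actual delivery would be an HTTP POST of this JSON to the team's webhook URL.

```python
import json

# Sketch of the message we might push to a Slack incoming webhook when a
# test stage fails. The results URL below is a hypothetical example;
# real delivery would POST this JSON to the configured webhook URL.

def build_failure_message(stage, results_url):
    """Build the JSON payload for a Slack incoming-webhook notification."""
    return json.dumps({
        "text": f":x: {stage} tests failed after deployment. "
                f"Results: {results_url}"
    })

payload = build_failure_message("Selenium", "https://ci.example.com/results/123")
print(payload)
```

Keeping the link to the full results in the message means whoever sees the notification can jump straight to the failure details.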
After the deployment process has finished, all we have to do is look at our Slack channel to see the results.
We will continue to use these automated tests in conjunction with manual testing for greater coverage in much less time, but we’re constantly striving to automate more aspects of website testing to limit the chances of regression defects or new defects making it into the project.
Our ultimate goal is to automate as much of the front-end functionality as possible, along with the key back-end journeys. This would allow us to concentrate on testing the areas of the website that have changed. We’ll be looking into automating the CMS next, so we’ll keep you updated with our progress.
If you’d like to join our team and continue exploring automated testing and the limitations of these tools, get in contact with our Talent Director firstname.lastname@example.org to find out more.