Automation testing offers a wealth of benefits compared to manual testing. It increases efficiency, saves time and reduces the chance of human error, giving us more confidence in the stability of our builds.
Over the last year we’ve made great progress integrating automation within our projects. The approach is constantly improving, and I’ve worked relentlessly to create a framework that covers a range of testing types. Currently, this includes functional, visual, API, accessibility, security and load tests, all of which I’ll cover in this post.
We’re using Visual Studio Team Services (VSTS) for these projects, giving us source control for collaboration and the ability to run the tests. One of the benefits of this is that VSTS clearly shows us a breakdown of results, indicating the pass/fail ratio.
We can also upload the reports which are created during the test run, allowing us to go back to a specific run and see all the results stored against it. These are saved in VSTS, so we can then analyse the results for troublesome areas or identify trends which may help prevent issues on future projects.
The automation framework

The automation framework I’ve created includes all the dependencies an automation project needs. It features numerous example methods and classes that can be used for reference, as well as a helper class that makes reading and writing the scripts simple and easy to follow.
Everything is packed into a NuGet package which can be easily installed. This means we can create an empty project, install the NuGet package and be completely ready to start writing automated scripts with minimal setup time.
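As a rough sketch of that setup, assuming the package is called Company.AutomationFramework and lives on a private feed (both placeholder names), a new test project only needs a couple of commands:

```shell
# Create an empty NUnit test project and install the framework package.
# The package name and feed URL below are placeholders for illustration.
dotnet new nunit -n Client.AutomationTests
cd Client.AutomationTests
dotnet add package Company.AutomationFramework --source https://example.com/nuget/v3/index.json
```

From there, the example classes and helper methods come down with the package, so the first script can be written straight away.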
Functional testing

The functional tests we’re developing are becoming much more advanced. We’re now testing not only the front end of the websites we build but also interactions within the CMS.
I’ve created tests which, among many other things, can create and delete pages, edit widgets and publish changes. I can also interact with a mailbox, meaning I can complete full user journeys that require email. For example, I can write a test that completes a registration form, checks the email that’s sent out and uses the link in it to complete the journey, ensuring registration is successful.
As you can imagine, this saves us a lot of time when it comes to testing things like ‘forgotten password’ journeys, and also helps to ensure text and links in emails are correct.
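Our framework is C#, but the mailbox step is easy to sketch in Python using only the standard library. The IMAP host, credentials and link pattern below are hypothetical; the idea is simply to fetch the newest message and pull the confirmation link out of its body:

```python
import imaplib
import re

# Hypothetical pattern for the confirmation link in the registration email.
LINK_PATTERN = re.compile(r"https://example\.com/confirm\?token=[\w-]+")

def extract_confirmation_link(body):
    """Pull the first confirmation link out of an email body, or None."""
    match = LINK_PATTERN.search(body)
    return match.group(0) if match else None

def fetch_latest_body(host, user, password):
    """Fetch the newest message in the inbox (sketch; host and credentials are placeholders)."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, ids = imap.search(None, "ALL")
        _, msg_data = imap.fetch(ids[0].split()[-1], "(RFC822)")
        return msg_data[0][1].decode("utf-8", errors="replace")

# Example: extracting the link from a sample registration email body.
sample = "Thanks for registering! Confirm here: https://example.com/confirm?token=abc-123"
link = extract_confirmation_link(sample)
```

In the real test, the extracted link would then be opened in the browser to complete the journey.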
API testing

I previously wrote about using Postman to test an API. Using Newman, Postman’s command-line collection runner, I can run those tests from the command line. I can then add a step to my build process within VSTS to trigger the tests and attach the results, making them available for everyone to see.
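As a sketch, the build step boils down to a single Newman command; the collection filename and output path here are placeholders, and the JUnit XML is what gets attached to the run as test results:

```shell
# Run the Postman collection from the command line and emit JUnit XML
# that the build can publish (filenames are placeholders).
newman run api-tests.postman_collection.json \
  --reporters cli,junit \
  --reporter-junit-export results/newman-results.xml
```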
Visual testing

The visual comparison tests are written in C# using Selenium WebDriver, so they integrate seamlessly with the existing framework. Like Wraith, the tool takes screenshots and compares them to a set of baseline images, highlighting any differences.
And because Selenium supports many drivers, I can run the same test in lots of different browsers, including Chrome, IE, Firefox and Edge.
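Conceptually, the comparison step itself is simple. Here’s a minimal Python sketch (the real implementation is C#): screenshots are modelled as grids of RGB pixels, and any pixel that differs from the baseline beyond a tolerance is flagged in a mask, which can then be rendered as a highlight overlay:

```python
# Sketch of a baseline-vs-current screenshot comparison.
# Images are modelled as 2D grids of (R, G, B) tuples; a real
# implementation would load the PNG screenshots instead.

def diff_mask(baseline, current, tolerance=10):
    """Return a 2D mask marking pixels that differ beyond the tolerance."""
    mask = []
    for base_row, cur_row in zip(baseline, current):
        mask.append([
            max(abs(b - c) for b, c in zip(base_px, cur_px)) > tolerance
            for base_px, cur_px in zip(base_row, cur_row)
        ])
    return mask

def diff_ratio(mask):
    """Fraction of pixels flagged as different."""
    flagged = sum(sum(row) for row in mask)
    total = sum(len(row) for row in mask)
    return flagged / total if total else 0.0

# Two 2x2 "screenshots" where one pixel has changed colour.
baseline = [[(255, 255, 255), (255, 255, 255)],
            [(255, 255, 255), (255, 255, 255)]]
current  = [[(255, 255, 255), (200, 40, 40)],
            [(255, 255, 255), (255, 255, 255)]]

mask = diff_mask(baseline, current)
ratio = diff_ratio(mask)  # one of the four pixels changed
```

A run would then fail (or flag for review) when the ratio exceeds an agreed threshold.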
Accessibility testing

We want to make our websites as user-friendly as possible, which is why I’ve integrated aXe, an open-source accessibility tool that runs in the browser, into our automation framework to run high-level accessibility tests. Given a list of URLs, aXe analyses each page and generates a report of any accessibility errors it finds.
Web accessibility spans a number of disciplines and facets of web development, which are laid out in the Web Content Accessibility Guidelines (WCAG). WCAG defines three conformance levels: A, AA and AAA. In my tests I’ve included the ability to set which level I’m testing against, so they only report relevant accessibility issues.
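The level filtering works because axe-core tags each rule with the WCAG level it belongs to (tags such as wcag2a and wcag2aa). A Python sketch of the filter, using simplified results in the shape aXe reports them, might look like this; note that testing against AA includes level-A issues, since AA conformance requires meeting level A too:

```python
# Sketch of filtering aXe results by WCAG conformance level.
# axe-core tags each rule with its level (e.g. "wcag2a", "wcag2aa").

LEVEL_TAGS = {
    "A": {"wcag2a"},
    "AA": {"wcag2a", "wcag2aa"},
    "AAA": {"wcag2a", "wcag2aa", "wcag2aaa"},
}

def filter_violations(violations, level="AA"):
    """Keep only violations relevant to the chosen conformance level."""
    wanted = LEVEL_TAGS[level]
    return [v for v in violations if wanted & set(v.get("tags", []))]

# Example results, simplified from the shape aXe reports them in.
violations = [
    {"id": "image-alt", "tags": ["wcag2a"]},
    {"id": "color-contrast", "tags": ["wcag2aa"]},
    {"id": "identical-links-same-purpose", "tags": ["wcag2aaa"]},
]

aa_issues = filter_violations(violations, level="AA")
```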
Security testing

OWASP Zed Attack Proxy (ZAP) is a very popular free security tool that can scan your web application for different types of security flaws. It performs a range of simulated attacks, assesses the application for potential issues and creates a report of its findings.
I’ve implemented the ability to use OWASP ZAP with my framework to perform a security scan. Given the URL of the web application I want to test, it crawls the site to find all of its URLs, performs multiple attacks against them and then publishes the results in a report.
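One common way to run this kind of scan, assuming Docker is available, is via the scan scripts bundled with ZAP’s official Docker image; the target URL below is a placeholder:

```shell
# Spider the target, run active attacks against the discovered URLs
# and write an HTML report (the target URL is a placeholder).
docker run --rm -v "$(pwd)":/zap/wrk owasp/zap2docker-stable \
  zap-full-scan.py -t https://example.com -r zap-report.html
```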
Load testing

During our web development process, we run a performance and load test on the application to understand how well it will respond in real-world scenarios.
We use JMeter to record and run these tests, exporting them to a .jmx file that contains the script, setup and configuration of the load test. VSTS runs the load test using the .jmx file and publishes the results, giving clear metrics of the outcome. With this method, we can easily see if any load times increase because of new code.
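Locally, the same .jmx file can be run from the command line in non-GUI mode; the filenames here are placeholders, and the -e/-o flags ask JMeter to generate its HTML dashboard from the results:

```shell
# Run the recorded test plan headlessly (-n), writing raw results (-l)
# and an HTML dashboard report (-e -o). Filenames are placeholders.
jmeter -n -t load-test.jmx -l results.jtl -e -o report/
```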
Improving our testing process
We are constantly looking to streamline and improve our testing process, and introducing and refining the automation has been a big help. It gives us instant feedback on each build and identifies any issues early on. This gives us more time to concentrate on areas that have recently changed, while maintaining confidence that the build is stable.
All of this means we can run our web application deployments on a nightly schedule and perform our tests shortly after. On the new codebase, we can run the automated functional, API, visual, accessibility, security and load tests and see very quickly if we’ve introduced any issues into the project. All the reports from these tests are attached to the test run in VSTS, for everyone’s reference.
For the foreseeable future, we’re looking to continue working on our automation – hopefully until we can push one button and all our testing will be taken care of.