It might seem obvious - by writing feature specs, we end up with fewer errors and oversights in the code, resulting in a higher-quality project. While this is the most important gain (particularly for the client), it's certainly not the only one. When I say 'the quality of the project', I don't only mean that it's as bug-free as possible, but also that there's a good project flow and well-presented code without weak spots. In this article, I'd like to talk about some other benefits that I discovered as I developed my automation testing skills at Netguru.
Every project checklist has some points that are ludicrously tedious to test manually because there are so many combinations of property values to be checked. These features are therefore more prone to human error than others.
Luckily, most of them can be automated, and although writing automation tests is a lengthy process, it saves time later. This is especially useful during heavy app refactoring, where changing one method can affect unexpected parts of the application.
So, dear QA, do not let yourself be easily convinced that you don't have time to write specs - they might save your bottom once or twice, and with it, your project.
As a Junior QA Specialist, I thought that clicking through the app was enough to learn the project well, but I soon found that this was not the case. A feature test is an instruction given to the program; in order to write a good feature test, you need to be thorough and you can’t be thorough if you don’t know what should happen at each step. That’s why you need to ask questions (a lot of them) to learn how a particular feature should work in all possible cases - something that would be very hard to test manually. As a corollary, you get really detailed feature tickets in your project management tool - tickets which are easier to code, fix and check.
What’s more, the QA specialist gets to form even better relationships with the developer and the client, since they’ll frequently discuss possible solutions (especially when it comes to user experience issues that might not have been taken into consideration during scoping sessions).
Another advantage is the ability to write a spec before the developer codes the feature - although, from my experience, this usually only applies to bugs. When a QA team member finds that the app behaves differently than expected, they may write a failing test that documents the desired behaviour, saving the developer the time they would otherwise need to figure this out and write a spec on their own.
Asking the right questions also makes a QA specialist sensitive to possible edge cases or corner cases - situations where a combination of specific property values causes an unusual state resulting in app failure. For instance, let's assume we have a blog platform app where:
1) you can choose whether a blog post is visible to friends or to all platform users, and 2) platform blogs should not be visible to unregistered users.
The task assigned to the developer only mentions the first condition, while the second one was implemented before the posts were even coded - so our developer doesn't spare a thought for it, concentrating only on their ticket. The obvious test here would be to check the visibility of posts to friends and other registered users, as the task description says; but a test checking whether unregistered users can access the post may be unwittingly omitted. The role of a QA here is to investigate the specs written for that particular ticket and think ahead: how can the ticket affect other features? Are all the user roles covered?
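The full set of checks could be sketched as follows. This is a hypothetical model - `Post`, `visible_to?` and the role names are made up for illustration, not the real app's API:

```ruby
# Hypothetical visibility rule for the blog platform example.
# The :unregistered branch encodes the "older" rule the ticket omits.
Post = Struct.new(:audience) do # audience is :friends or :platform_users
  def visible_to?(viewer)
    case viewer
    when :unregistered then false                 # rule 2: never visible to guests
    when :friend       then true                  # friends always see the post
    when :registered   then audience == :platform_users
    end
  end
end

friends_only = Post.new(:friends)
public_post  = Post.new(:platform_users)

# Checks named in the ticket:
raise unless friends_only.visible_to?(:friend)
raise if     friends_only.visible_to?(:registered)
# The easy-to-forget check - unregistered users must never see posts:
raise if     public_post.visible_to?(:unregistered)
```

The point is not the toy model itself, but that the spec enumerates every role, including the one the ticket never mentions.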
The most common places where edge cases occur are features containing validation (forms!) and checks on the visibility of app elements according to the user role. The reason is that the developer concentrates on delivering the feature's happy path (every step the user takes to complete the action, with everything going as expected) but forgets that securing the exception paths (where the user cannot complete the action and the app throws errors) is equally important.
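As a minimal sketch of that imbalance, consider a hypothetical sign-up validation (the function and its error messages are made up): one spec for the happy path, and separate specs for each exception path.

```ruby
# Hypothetical form validation: returns { ok: true } on success,
# or { ok: false, errors: [...] } when validation fails.
def sign_up(email:, password:)
  errors = []
  errors << "Email can't be blank"  if email.to_s.strip.empty?
  errors << 'Password is too short' if password.to_s.length < 8
  errors.empty? ? { ok: true } : { ok: false, errors: errors }
end

# Happy path - the one developers rarely forget:
raise unless sign_up(email: 'ann@example.com', password: 'secret123')[:ok]

# Exception paths - the ones a QA should insist on covering:
raise if sign_up(email: '', password: 'secret123')[:ok]
raise unless sign_up(email: 'ann@example.com', password: 'short')[:errors]
  .include?('Password is too short')
```

Each exception path gets its own assertion, so a regression in any one error message fails loudly instead of hiding behind a green happy-path spec.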
There are situations when the committed test passes and everything seems OK, but when you take a closer look (by running the spec step by step, with a browser preview), it turns out that the spec can never fail - meaning that it doesn't check anything. The most common occurrence is a spec that checks whether some element does NOT appear on the page (expect(page).to_not have_content); this is tricky because the element will also be absent when you land on the wrong page.
For example: there's a group page that can be accessed by both registered and unregistered users (who receive group updates), but only registered users can see a 'Leave group' button. We write tests for leaving the group - the first covers registered users and the happy path for leaving a group. The next checks that non-registered users don't see the 'Leave group' button. The thing is, while the button might not be visible on the group page, it certainly won't be visible on the sign-in page either, where you might land by accident without noticing - unless you run the spec in a browser!
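The pitfall can be sketched in pure Ruby (the page hashes and paths below are made up - in a real Capybara spec you would assert the current path rather than inspect a hash):

```ruby
# Hypothetical snapshots of three rendered pages.
group_page       = { path: '/groups/42', content: ['Group updates', 'Leave group'] }
guest_group_page = { path: '/groups/42', content: ['Group updates'] }
sign_in_page     = { path: '/sign_in',   content: ['Email', 'Password'] }

def missing_button?(page)
  !page[:content].include?('Leave group')
end

# Naive absence check: it "passes" on the sign-in page too, so a spec
# built on it alone can never fail - it checks nothing.
raise unless missing_button?(sign_in_page)

# Robust check: first make sure we are on the page we meant to test.
def leave_button_hidden?(page)
  raise 'wrong page!' unless page[:path] == '/groups/42'
  missing_button?(page)
end

raise unless leave_button_hidden?(guest_group_page) # guests: button hidden
raise if     missing_button?(group_page)            # members: button shown
```

The robust version turns "I landed on the wrong page" from a silent false pass into a loud failure, which is exactly what stepping through the spec in a browser would have revealed.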
The main role of the QA specialist is to maintain the high quality of both processes and the project itself. One way to do so is to explain the flow to the juniors and to pass on knowledge about good practices in writing specs.
For this reason, and because we at Netguru believe in a self-reliant search for solutions, our QA specialists don't write specs for tickets delivered by juniors and interns - letting them learn it the hard way. Instead, a QA team member checks whether all scenarios are covered and the spec code is well refactored.
The positive side effect is that after a month (which is the period an intern or junior spends on one project) a more experienced QA specialist can tell a lot about the junior’s attitude to work, attention to detail, and ability to foresee potential conflicts when new features are added to the app. This feedback is essential to assess the progress of a junior developer or intern, because it’s the QA specialist that has the closest work rapport with them. Maintaining quality assurance by manual testing wouldn’t give you this much intel. And last, but not least - it often happens that a junior or intern asks the QA for help with debugging a failing spec. Now there’s something that can boost your self-confidence!
When you, as a QA specialist, look through the commits to figure out how a feature works and analyse the more complex specs written by developers, you broaden your knowledge through repeated exposure to real code. You also end up visualising what happens at each step, which helps put you in the right algorithmic mindset.
On the other hand, pushing your own specs exposes your code to public scrutiny (i.e. the whole company) through code review. The suggestions you will get through this process are invaluable - more experienced developers will give you advice on best practices for refactoring your code. This is worth so much more than asking your own project’s developer for their opinion or googling the solutions on your own.
Now I hope you understand that automation tests are a means of improving the quality of a project and have much more to offer the team than simply preventing bugs. They can also make you more attached to the project, even sentimental at times :) However, I'm sure I haven't covered all the advantages that come from writing specs - do you know more? Let us know in the comments!