How Netguru Tracks And Maintains The Quality Of Your Project
If I were to tell you about the differences between working with an agency and working with a group of freelancers, I would definitely mention “process” as one of them.
The same group of people does almost the same job whether they work under one organization or independently, except that agencies have developed a lot of different wrappers and checks along the way. I would like to walk you through a few of the checks we run on a regular basis for our clients.
Why do we want to keep the quality high?
That's a fairly easy question: the better the work we ship, the happier our clients are. And of course, we, as employees, want to take part in well-managed and well-designed projects where we don't have to worry about unexpected crises that might suddenly blow up in our faces.
We work in many different areas and have stumbled across a lot of issues - it's impossible to avoid them completely, but it's necessary to learn from the past and not repeat the same mistakes. This is why we periodically check our work for signs of potential issues and make sure we follow the best practices we've been refining for years.
Recurring Reviews from an outside perspective
We run a number of different reviews for every project in our company. Our goal is to make sure that a person from outside the project checks the most important things and reports back to the team. We believe this "outsider" point of view is crucial: because the reviewer is objective, small issues get spotted instead of being dismissed as unimportant or endlessly postponed.
How often do we do the various reviews? It depends, but usually every 1-3 months. We have daily standups, weekly iterations, code reviews and Continuous Integration (CI) in place. These keep our code quality very high, and most of the time the high-level review is just a routine matter.
Reviews we do
Backup review - we make sure your data is safe
We make sure that the applications we maintain can be recovered after a serious outage, e.g. when the server provider is completely down and there is no way to access the production hosts. We check whether the database backup is up to date and in good enough shape to restore. We also check external applications and the people/places responsible for them, in case we have to rotate security tokens or connect to a service that isn't accessible from the app directly.
From a high-level perspective it looks like this:
- assume that there is no way to connect to the current server
- create a new server instance
- restore everything
- check that the restored app can be accessed and used as if nothing had happened
- document and report all the steps and potential points of failure
- put the report in a repo with a timestamp
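The drill above can be sketched in a few lines of Python. Everything here is illustrative: the freshness threshold, the step names and the report shape are assumptions for the sake of the example, not our actual tooling.

```python
import datetime

# Hypothetical policy: a backup older than this is flagged during the review.
MAX_BACKUP_AGE = datetime.timedelta(hours=24)

def backup_is_fresh(backup_timestamp, now=None, max_age=MAX_BACKUP_AGE):
    """Return True if the latest backup is recent enough to trust."""
    now = now or datetime.datetime.utcnow()
    return now - backup_timestamp <= max_age

def restore_report(steps):
    """Collect (step name, passed?) pairs into a timestamped report
    that can be committed to the repo, as described above."""
    return {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "steps": steps,
        "passed": all(ok for _, ok in steps),
    }

# An example drill log for one review run:
steps = [
    ("create fresh server instance", True),
    ("restore database from latest dump", True),
    ("smoke-test restored app over HTTP", True),
]
report = restore_report(steps)
```

The important part is the report: even a fully successful drill is documented, so the next person running it knows what "good" looks like.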
Project review - we make sure your product will be on time and on budget
We check the basic application flow: how good the code coverage is, what integrations we have, how complex the code is, etc. We try to spot any "suspicious" parts of the application and check whether the proper tools and solutions are in place. This also helps transfer knowledge within the team. Other things we check are:
- Do you have up-to-date libraries?
- Is your project easy to set up?
- Are there any security issues present?
- How does the code coverage look?
- What kind of features are you going to implement soon? (We take the time to discuss them with the team.)
- What errors do we have in our error-reporting tool and how should they be addressed?
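Some of these checks can be reduced to simple pass/fail rules. A minimal sketch, assuming hypothetical thresholds (the numbers below are illustrative, not our actual policy):

```python
# Illustrative project-health thresholds for the review.
THRESHOLDS = {
    "min_coverage_pct": 80.0,
    "max_outdated_libs": 5,
    "max_security_issues": 0,
}

def review_findings(coverage_pct, outdated_libs, security_issues):
    """Compare raw review numbers against the thresholds and
    return a list of human-readable flags (empty list = all clear)."""
    flags = []
    if coverage_pct < THRESHOLDS["min_coverage_pct"]:
        flags.append(
            f"coverage {coverage_pct}% below {THRESHOLDS['min_coverage_pct']}%"
        )
    if outdated_libs > THRESHOLDS["max_outdated_libs"]:
        flags.append(f"{outdated_libs} outdated libraries")
    if security_issues > THRESHOLDS["max_security_issues"]:
        flags.append(f"{security_issues} open security issues")
    return flags
```

The open-ended questions (upcoming features, error-tracker triage) still need a human conversation, which is why the reviewer discusses them with the team rather than just filing a report.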
QA review - we make sure your product will not fail at a crucial moment
The main goal of this review is to check the QA flow from the perspective of a peer Quality Assurance Specialist. It's always good to have an objective, fresh pair of eyes contribute. We ensure that all QA best practices and processes are well maintained and followed in the project. A few example checks:
- Is the project’s documentation accessible, up to date and comprehensive?
- Are tickets in JIRA in a good state, with proper descriptions, labels and screenshots?
- What is the average page load time? (We have a special tool for measuring this.)
- Are there automatic tests written by QAs?
- Is there room for improvement in terms of performance and quality? (UX suggestions, for example)
- What is good? What can be better, when taking into consideration a holistic approach to the project?
- Are there any potential problems for a new QA joining the project?
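To make the page-load check concrete, here is a minimal sketch of how average load time could be sampled. This is not the tool mentioned above - it only times the raw HTTP fetch, while a browser-based tool would also account for rendering; the URL and sample count are assumptions.

```python
import time
from urllib.request import urlopen

def measure_load_time(url):
    """Time one full-page fetch in seconds (network and transfer only,
    no client-side rendering)."""
    start = time.monotonic()
    with urlopen(url) as response:
        response.read()
    return time.monotonic() - start

def average_load_time(samples):
    """Average a list of per-request timings, in seconds."""
    return sum(samples) / len(samples)

# During a review we might sample a few key pages, e.g.:
# timings = [measure_load_time("https://example.com/") for _ in range(5)]
# print(f"average: {average_load_time(timings):.2f}s")
```

Averaging several samples matters because a single request can be skewed by caches or a momentarily busy server.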
As you can see, there are many things involved in our reviews. We've described them from a very general perspective, but it's clear that we double-check a lot of things, just to make sure our projects are of high quality. Could we do everything well without all the checks? Sure, but we think it's much better to have a second opinion from someone outside the project team, even if it only confirms how good everything is. In any other scenario, we can only benefit from the issues raised in the reviews.