How to Ensure Reliability of Cypress Testing


Artur Duziak

Updated Jan 9, 2023 • 12 min read

Painfully long test execution times, jobs that always fail, a team that slowly starts ignoring the tests altogether. We’ve all experienced it or at least heard horror stories about it. The question that always pops up in our heads is, “How do I make my test automation reliable, fast, and…”. Here you’ll find some tips from my experience with making Cypress testing reliable.

I had the same problems when I joined a project that had already implemented an end-to-end (E2E) Cypress test automation framework, one that wasn’t successful and caused more problems than it solved. To share some numbers: a suite of 100 Cypress tests was taking 30 minutes to run, and it was failing around 70% of the time. The whole team was frustrated and kept ignoring the tests, as the failures were mostly false positives.

In this article, I want to share some tips that helped us completely convert web test automation from slow and brittle to fast, resilient, and highly successful. Thanks to this, we were able to speed up the test suite to just seven minutes. Even with 300 more Cypress tests added to it, the suite was run around 30 times a day with close to a 99% pass rate.

The system we were testing was made of two websites. One was an admin panel packed with functions to control the whole business/content, and the second was a website for end-users where they could consume all the content made in the admin panel. The project aimed for a continuous delivery approach, which meant the test automation framework had to be top-notch.

The tool we were using in the project was Cypress.io, so the code examples use Cypress, but most of the tips are quite universal and will work for any test automation framework or tool.

Question, verify and improve your tests

Starting with one that I cannot emphasize enough: I often see that people writing tests (be it developers, test automation engineers, or anyone else) do not really check whether the test actually verifies what it is supposed to. They assume that the test passing locally means everything is working fine, but when the same test starts running multiple times on CI, it can become flaky and produce false positives.

The result is time wasted on investigating, or, what I believe is worse, the test might miss a real issue in the software and become a false negative.

Here are a few ideas on how to validate your tests locally. These will save you headaches and time wasted on debugging flaky tests in the long run:

  • Run tests a few times in a row.

This one is obvious but still worth mentioning. After you write your test and it passes, rerun the test multiple times just to be sure. I can’t count how many times I’ve found issues by simply rerunning tests. Additionally, ensure that the configuration for locally run tests resembles how the tests will be run on CI. For example, if you execute tests locally in Chrome in headed mode, but on CI they run against Electron in headless mode, that mismatch creates the potential for issues to appear.
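If you want to automate this burn-in step, one option is to generate the same test several times within a spec. Below is a minimal sketch using Cypress’s bundled lodash (Cypress._.times); the page URL and selectors are hypothetical placeholders.

describe('popup', () => {
  // Generate the same test five times to surface flakiness early
  Cypress._.times(5, (run) => {
    it(`closes the popup (run ${run + 1})`, () => {
      cy.visit('/dashboard'); // hypothetical page under test
      cy.get('#close-popup').click(); // hypothetical selector
      cy.get('#popup').should('not.be.visible');
    });
  });
});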

  • Flip the assertions.

Most of the time, when writing tests, we work on perfectly working software, so we tend to assume that if the assertion passes, then it correctly tests what it was designed to test. Such thinking can be amplified when we watch the test file running – we see the app behaving correctly, so we have confirmation that it is working. Frequently, I have found tests that pass without any issues even though the assertions were inverted, which, when you think about it, shouldn’t happen at all.

// This one should pass
cy.get('#popup').should('not.be.visible');

// This one should not pass (but it passes)
cy.get('#popup').should('be.visible');

This issue is more common when dealing with negative assertions. There is an awesome article by Gleb Bahmutov explaining the risk of using negative assertions. I highly recommend checking it out.
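One way to reduce that risk is to pair every negative assertion with a positive one that proves the element was actually there in the first place. A small sketch (the selectors are hypothetical):

// Prove the popup actually appeared before asserting that it disappears,
// so the test cannot silently pass against a page that never rendered it.
cy.get('#open-popup').click();
cy.get('#popup').should('be.visible'); // positive assertion first
cy.get('#close-popup').click();
cy.get('#popup').should('not.be.visible'); // now the negative assertion is meaningful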

  • Run tests in less ideal conditions.

When we write and run tests on our local machines, we tend to forget how stable such an environment is, but conditions on continuous integration aren’t always that great. The machines the tests run on can randomly slow down due to limited computing power, or the internet connection can become choppy for a short while. Some of these harsh conditions can be simulated by throttling the network locally and rerunning the tests. We can then observe the behavior of the tests and adjust them if needed.
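One way to simulate a slow backend from within a test is to delay intercepted responses. Here is a rough sketch using cy.intercept; the '/api/**' pattern and the two-second delay are assumptions to adapt to your application and Cypress version.

// Delay every matching API response to mimic a choppy connection
cy.intercept('/api/**', (req) => {
  req.on('response', (res) => {
    res.setDelay(2000); // add two seconds of artificial latency
  });
});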

Avoid UI as much as possible

This advice might be counterintuitive, especially for inexperienced developers and engineers – why avoid the user interface if exercising it is one of the main goals of an E2E automation framework?

The main reason is that the UI is the most brittle layer of the software. It’s slow, prone to change, and often unstable, which is why we should keep UI interaction to a minimum – only a few tests need to go through the UI to provide confidence that everything works together.

For example, instead of signing in through the UI before every test, you should sign in through the API and set the resulting session data (cookies or tokens) directly in the browser. This saves time and makes your tests more reliable, as the whole suite won’t rely on one part of the interface to work.
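A minimal sketch of what API-based sign-in can look like as a custom command. The endpoint, payload, and token handling are assumptions about the backend; adjust them to match your application.

// Register a custom command that logs in via the API instead of the UI
Cypress.Commands.add('loginByApi', (email, password) => {
  cy.request('POST', '/api/login', { email, password }).then((response) => {
    // Persist the session token so the app treats the user as signed in
    window.localStorage.setItem('authToken', response.body.token);
  });
});

// Usage in a test:
// cy.loginByApi('user@example.com', 'password123');
// cy.visit('/dashboard');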

Understand the serial nature of Cypress testing commands

Cypress commands are kind of a gray area. When writing tests, the code looks synchronous, but it isn’t. It’s asynchronous in its own way (the creators call it serial): Cypress commands are enqueued and run one after another, while any plain JavaScript in the test body executes immediately, before the queued commands have finished.

All of this can confuse people writing the tests (especially the less experienced) and leads to some of the least obvious and hardest-to-debug issues during Cypress test execution. To avoid this class of problems, you need to understand and use the Cypress `.then` command, which makes sure that specific steps run in the order you’d expect. It’s important to understand how Cypress test execution works before writing the tests; in the official documentation, there is a great explanation of how and why things work this way.
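A small sketch of how `.then` keeps dependent steps in order: the callback runs only after the preceding command has yielded its subject, so the value is guaranteed to exist when it is used. The selector and endpoint below are hypothetical.

cy.get('#order-id')
  .invoke('text')
  .then((orderId) => {
    // This request is only sent after the text has actually been read
    cy.request(`/api/orders/${orderId.trim()}`)
      .its('status')
      .should('eq', 200);
  });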

Keep the tests atomic

Long and extensive test scenarios that go through multiple parts of the web application are hard to write, maintain, and debug. Instead of creating these, we wrote numerous smaller tests that were quick and checked a specific part of the application. Such an approach has a few advantages:

  • Tests become easier to understand
  • Debugging tests after a failure becomes less challenging
  • Maintaining tests requires less effort
  • Multiple quick tests gain more from parallelization

This test design approach is called “atomic” and is a common practice across the industry. Of course, we should approach it with common sense and not split the tests up too much (turning every single assertion into a separate test case); otherwise, it will create more trouble than benefit.
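Here is a sketch of what atomic tests can look like: each test prepares its own state (reusing the hypothetical loginByApi command from earlier and an assumed /api/articles endpoint) and verifies one specific behavior, instead of one long scenario doing everything in sequence.

describe('articles', () => {
  it('creates an article from the admin panel', () => {
    cy.loginByApi('editor@example.com', 'password123'); // hypothetical custom command
    cy.visit('/admin/articles/new');
    cy.get('[data-test=title]').type('My article');
    cy.get('[data-test=save]').click();
    cy.contains('Article saved').should('be.visible');
  });

  it('publishes an existing article', () => {
    cy.loginByApi('editor@example.com', 'password123');
    // Seed the article through the API rather than driving the UI again
    cy.request('POST', '/api/articles', { title: 'Draft' })
      .its('body.id')
      .then((id) => {
        cy.visit(`/admin/articles/${id}`);
        cy.get('[data-test=publish]').click();
        cy.contains('Published').should('be.visible');
      });
  });
});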

Observe your tests

In my experience, we tend to look at test results only after the CI job fails and flashes red, but I’ve found that some issues can be detected long before a job fails. What I did was periodically review the runs that passed on CI. Even if the tests passed, I could still learn from them.

For example, I noticed that some tests failed on the first attempt but passed on the retry. When this occurred, I often dug deeper into it, as such a test, even though it passed, is still flaky and might cause more problems in the long run.

Obviously, you are not forced to fix these issues right away, but it is a good idea to write them down and fix them at the next opportunity. The easiest and quickest way to spot strange behavior is to look at how long the tests took to run. If the time is longer than usual, it might be a signal that something went wrong.
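If you rely on retries to surface this kind of flakiness, they are enabled in the Cypress configuration. A minimal sketch for the Cypress 10+ config format (the exact numbers are just an illustration):

// cypress.config.js
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  retries: {
    runMode: 2,  // retry failed tests up to twice on CI
    openMode: 0, // no retries when running locally in the GUI
  },
});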

Get everyone involved

Automation should never be a one-person or purely QA job, as that leads to a situation where it’s hard to keep up with automation in a fast-moving project. This creates automation debt or a bottleneck in the software development flow.

Making test automation a team effort allowed us to reduce the possibility of automation debt as developers had no problem understanding, fixing, and contributing to test automation. Additionally, involving other people raised the level of confidence in the tests for the whole team, as everyone understood what was going on under the hood.

"Well-structured, written, and maintained tests should be a core component of every application. It saved us a lot of time and trouble during checking for regression, and ultimately, we were so confident about the stability of our apps that we weren’t afraid to deploy, even on Fridays!"


Dariusz Cybulski

Senior Frontend Developer at Netguru

Use parallelization (cautiously)

Parallelization is by far the most effective way to speed up your tests, but there is a reason I mention it last. That’s because it isn’t a technique we should blindly add to our existing tests, as we might run into some issues:

  • Tests intervening with each other.

When you run multiple tests at the same time against one environment, one test might cause another to fail because it changes shared state or data underneath it.

An example of that might be one test checking resource creation while a second one deletes it. Issues like these are very hard to catch and debug, as they might not occur 100% of the time and are hard to reproduce locally. To avoid such problems, you should design your tests with parallelization in mind right from the start, for example by giving every test its own data (see the sketch after this list).

  • Speeding up slow tests.

Parallelization can create the false impression that slow tests have become faster. The total run time does drop, but using parallelization to mask slow tests wastes resources and money that can easily be saved. What we need to do is first optimize the tests to run as fast as possible and then scale them up with parallelization.

Only once you have the best possible tests should you start thinking about parallelization. It should be the cherry on top of successful test automation.
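Coming back to the first point above, one simple safeguard against tests interfering with each other is to have every test create and clean up its own uniquely named resources instead of sharing fixtures between specs. A rough sketch (the endpoint, field names, and selectors are assumptions):

it('deletes a project', () => {
  // A unique name keeps parallel runs of this and other tests from colliding
  const name = `e2e-project-${Date.now()}-${Cypress._.random(1e6)}`;

  cy.request('POST', '/api/projects', { name })
    .its('body.id')
    .then((id) => {
      cy.visit(`/admin/projects/${id}`);
      cy.get('[data-test=delete]').click();
      cy.contains(`${name} deleted`).should('be.visible');
    });
});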

Paying attention to tests and optimizing pays off

These are the most impactful ideas that helped the team and me. After they were applied, our E2E automation framework became highly successful and allowed the team to deploy new changes daily with high confidence.

"Well-written and organized test suites in the project increased my confidence in frequent and safe deployments. The test architecture helped in test suite maintenance and its further development."


Sebastian Olko

Senior QA Engineer at Netguru

To put it in numbers:

  • We sped up the tests from 30 minutes to just five minutes (with four times more test cases, starting from 100 and building to 400 as the project grew)
  • Cypress tests were run over 30 times a day with close to a 99% pass rate
  • We were able to deploy daily with very few regression issues appearing.

I hope you learned something new that will allow you to improve your test automation efforts.
