The world has moved from undergoing digital transformation to urgently needing to accelerate it.
Every organization has software integrated into its internal and external processes. How does the speed of software delivery impact your entire business? We present the benefits of, and solutions for, building a robust and fast CI/CD pipeline.
Building software is a complex and time-consuming process. Development teams look for ways to manage it better and make it more predictable. This holds for any project type, regardless of the programming language or cloud platform used.
According to Gartner, worldwide software spending is expected to grow 9.6% to $806.8 billion in 2022 and 11.8% to $902.1 billion in 2023. The main driving factor behind the increase in spending is that the IT labor market continues to tighten, making it difficult to attract and retain talent.
In addition to that problem, CIOs now require more business value from already understaffed development teams. To overcome this challenge, we will focus on one aspect of the software development process that can greatly enhance it: the CI/CD pipeline.
Types of pipelines in DevOps
DevOps is a constant journey through the software development life cycle that includes agile planning, continuous integration, continuous delivery, continuous deployment, and monitoring of applications. To better understand the current topic, let us first expand on the definitions of CI and CD.
Continuous integration (CI) is the practice of building and testing code after each merge of code changes into the version control system. It helps us find any bugs we introduced in our code early in the development cycle, so we can correct them faster and with fewer resources.
Continuous delivery (CD) is the process by which code is built, tested, and ready to be deployed at any given time.
Continuous deployment (also CD) is a subprocess of continuous delivery by which we automatically release our software when all the tests pass.
Why is the speed of CI/CD important in software development?
The answer to that question is rather straightforward: The faster you can execute pipelines, the faster you can ship your software.
Let us check the benefits.
Shortened feedback loop
You should consider the fail-fast approach while running your CI pipelines. If you introduce any bug, or the code quality is poor and your tests catch that problem, the running pipeline should be aborted and you should receive feedback immediately. That way, you can start to work on a solution much faster.
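The fail-fast idea can be sketched in a few lines of plain Python: run the stages in order and abort at the first failure so feedback arrives immediately. The stage names and their pass/fail results below are hypothetical placeholders, not a real CI tool's API:

```python
# Minimal fail-fast pipeline sketch: run stages in order and stop at the
# first failure so feedback arrives as early as possible.
# Stage names and results are hypothetical placeholders.

def lint() -> bool:
    return True   # e.g. run a linter and report whether it passed

def unit_tests() -> bool:
    return False  # a failing stage aborts everything after it

def build() -> bool:
    return True   # never reached when unit_tests fails

def run_pipeline(stages) -> bool:
    for stage in stages:
        if not stage():
            print(f"Pipeline aborted at stage: {stage.__name__}")
            return False
    print("Pipeline succeeded")
    return True

run_pipeline([lint, unit_tests, build])
```

Real CI systems apply the same logic: a red job short-circuits the rest of the workflow instead of wasting minutes on steps whose result no longer matters.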
Increased team velocity
A shorter feedback loop for each engineer benefits the entire team: you can ship smaller pieces of software faster and more frequently, without being blocked by long-running processes. That is a key benefit in any organization.
Improved organizational agility
Ultimately, this will improve your business agility and your ability to deliver better software and make your customers happy, which is a distinct competitive advantage. That is why you have to take it seriously.
How to accelerate your CI/CD pipeline?
Before you can improve your pipeline, you need to consider three main factors that will affect it.
Which components' speed you plan to measure:
- Is it the execution time of a single job or a workflow?
- How many times are you able to run the pipeline per hour or per day?
- How long does it take to deploy your application?
How you want to measure it:
- Most often, you will use the built-in features of your CI/CD tool, like Azure DevOps or CircleCI.
- But you can also send your data to external tools like Datadog or Sumo Logic and view your metrics there.
With customer value in mind, what goals do you want to achieve?
- These goals should have specific, measurable targets, for example: reduce the time spent on fixing bugs by 50%, or reduce the time needed to deploy a new version to the production environment by 25%.
- You should also answer the question: What goals offer good enough value for the current situation, taking into consideration time and money?
After you identify the source of your bottleneck and set your metrics and goals, you can start to improve your pipeline. Here is a list of six good practices to make your CI/CD pipeline more efficient:
Execution resources
You need to ask yourself: What kind of resources does your workflow consume the most? Does your application need to be tested on specific operating systems like Linux, macOS, or Windows?
If you can answer those questions, you can then choose appropriate resources. CI/CD tools provide us with virtual machines with default settings, but maybe you need more CPU, RAM, or GPU power. In that case, you should use a custom VM for your computation-heavy workloads.
Docker image and layer caching
If you containerize your application with Docker, then choosing the right image for your project can have a huge impact on your build time. Depending on the language you use to build your application, there are already basic images created for it with dependencies and tools included.
But maybe you do not need all of them, or you have other requirements. Then you should consider creating your own image according to best practices directly from tool creators.
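One of the best-known layer-caching practices is ordering a Dockerfile so that rarely changing steps (dependency installation) come before frequently changing ones (your source code), letting Docker reuse cached layers on most builds. A minimal sketch for a hypothetical Python service; the file names are illustrative:

```dockerfile
# Hypothetical Python service. Copy only the dependency manifest first,
# so the pip install layer stays cached for as long as
# requirements.txt is unchanged.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Source code changes often; copying it last keeps the layers above cached.
COPY . .
CMD ["python", "main.py"]
```

With this ordering, an edit to application code invalidates only the final `COPY` layer, so rebuilds skip the slow dependency installation entirely.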
Parallelization

If you run your workflows, jobs, and tests in a linear fashion, run them simultaneously instead. You can use a matrix strategy to dispatch jobs multiple times with different settings. You can split tests over a range of dedicated environments and run them at the same time.
Depending on whether your tests are isolated or not, you will have to use a different approach. Some tests might depend on each other, some tests might need access to a database or other services. You need to take that into account. A basic rule of thumb is what can run in parallel, should run in parallel.
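The idea of splitting independent tests across workers can be sketched in plain Python: shards that do not depend on each other run concurrently instead of one after another. The shard contents below are hypothetical; real CI tools expose this as a "matrix" or "parallelism" setting:

```python
# Sketch: run independent test shards in parallel with a thread pool
# instead of sequentially. Shard contents are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def run_shard(shard: list) -> dict:
    # Placeholder: pretend every test in the shard passes.
    return {test: "passed" for test in shard}

shards = [
    ["test_login", "test_logout"],
    ["test_checkout", "test_refund"],
    ["test_search"],
]

# Each shard gets its own worker, so total wall time is roughly the
# slowest shard, not the sum of all shards.
with ThreadPoolExecutor(max_workers=len(shards)) as pool:
    results = list(pool.map(run_shard, shards))

all_passed = all(
    status == "passed" for shard in results for status in shard.values()
)
print("All tests passed:", all_passed)
```

Balancing the shards matters: the pipeline is only as fast as its slowest shard, so CI tools often split tests by historical timing data rather than by file count.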
Dependency caching

As a first step, you should examine your workflows and the jobs that are run inside them. If any of these jobs require additional dependencies or files, this is where you likely can use caching.
There is no need to download the same package version all over again during a job.
You can cache it and use it in your job instead of downloading it each time. You can also do this for your build or pre-built artifacts. When using caching, remember that it can create race conditions across jobs in a workflow, and it can also break isolation between jobs.
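A common caching scheme derives the cache key from a hash of the dependency manifest, so the cached packages are reused until the dependencies actually change. A minimal sketch, assuming a requirements-style lockfile; the key format is hypothetical:

```python
# Sketch: derive a cache key from the dependency manifest so cached
# packages are reused until the manifest changes.
import hashlib

def cache_key(manifest_text: str, prefix: str = "deps") -> str:
    # Same manifest -> same digest -> cache hit; any change -> new key.
    digest = hashlib.sha256(manifest_text.encode()).hexdigest()[:16]
    return f"{prefix}-{digest}"

old_key = cache_key("requests==2.31.0\n")
same_key = cache_key("requests==2.31.0\n")
new_key = cache_key("requests==2.32.0\n")

print(old_key == same_key)  # unchanged manifest: cache hit
print(old_key == new_key)   # changed manifest: fresh cache entry
```

This is essentially what the `hashFiles`-style cache keys in hosted CI systems do: the key changes exactly when the lockfile does, which avoids both stale caches and needless re-downloads.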
Test optimization

As your application grows, the number of manual and automated tests will (and should) increase. Follow these guidelines to keep them under control:
- Split your tests into unit tests, integration tests, e2e tests, smoke, acceptance, and security tests.
- Run them only when it is necessary and not on each run of the pipeline.
- Identify slow, duplicated, or flaky tests, and improve or delete them if necessary.
- Use a profiling tool to check every change before and after.
- Mock external services in your unit tests.
- Run the database in memory for unit tests if possible.
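To illustrate the mocking guideline: a unit test can replace a slow external service with an in-memory stub via `unittest.mock`, so the test makes no network call at all. The payment gateway and its response shape below are hypothetical:

```python
# Sketch: replace a (hypothetical) external payment gateway with a mock
# in a unit test, so the test is fast and needs no network access.
from unittest.mock import Mock

def charge_customer(gateway, customer_id: str, amount: int) -> bool:
    # Production code would call a real payment gateway here.
    response = gateway.charge(customer_id, amount)
    return response["status"] == "ok"

# In the unit test, the gateway is a mock instead of a real HTTP client.
fake_gateway = Mock()
fake_gateway.charge.return_value = {"status": "ok"}

assert charge_customer(fake_gateway, "cust-42", 1999) is True
# The mock also lets us verify exactly how the service was called.
fake_gateway.charge.assert_called_once_with("cust-42", 1999)
print("unit test passed without touching the real service")
```

Mocked tests like this run in milliseconds and never fail because a third-party service is down, which also helps with the flaky-test problem mentioned above.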
Conditional job execution

As the last step in your journey to improve your CI/CD pipeline, you should decide which jobs really need to run every time:
- Maybe some jobs are only necessary during release and not for each commit or branch.
- Maybe some tests and security checks can be scheduled during the night.
- Maybe you added too many “bells and whistles.”
Last but not least, make sure that your CI/CD tool dependencies are up to date.
What makes a good CI/CD pipeline?
The CI/CD pipeline is usually different for each project; there is no such thing as one-size-fits-all. You need to build and tweak it according to your business case and resources. I encourage you to follow the above process and keep in mind that:
- This process will never end.
- You have to revisit your pipeline configuration from time to time.
- Set your KPIs and goals and monitor them constantly.
- Improve your pipeline step by step and observe the results; you do not have to implement everything at once.
Always remember that the goal of accelerating the entire software delivery process is to make your customers and development teams happy!