
Kubernetes Deployment 101 - What, How, And Why


Michał Gozdowski

Jan 18, 2023 • 12 min read

Kubernetes is a leading container orchestration platform, changing the way software is developed. The open-source solution makes containers even more accessible and popular.

As a software developer, an architect, or a DevOps engineer, think about how many areas need to be covered when software is created, before and after it’s published to end users. Availability, scalability, reliability, security – it’s a never-ending list. But what if some of these things could be automated, or at least made easier to achieve?

Kubernetes is an open-source container orchestration platform that comes with lots of features and capabilities that help cover those areas using built-in and add-on components. In this article, we’ll explore core Kubernetes components, especially one called Deployment.

Basic Kubernetes components

Before diving into what Kubernetes Deployment is and how it works, it’s necessary to understand basic Kubernetes concepts: Pods and Controllers. So, let’s use this glossary chapter to better understand the big picture.

What is a Kubernetes Pod?

A Pod is the smallest unit of workload in Kubernetes. It’s often misunderstood that a Pod and a Container are the same thing, because of the similarities between them. For example, a Pod uses Linux namespaces, cgroups, and other kinds of isolation – the same mechanisms used to isolate Containers.

A Pod is defined as a group of one or more Containers with shared storage and network resources, and a specification for how to run them. In this way, Pods are an additional layer or wrapper over one or more Containers.

To run your containerized application using Kubernetes Pods, after publishing an image to the registry, create a YAML manifest. There, specify which Container has to run inside the Pod, along with mandatory and optional metadata, labels, etc., as in the sketch below.
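
As a rough sketch, a minimal Pod manifest could look like the one below (the Pod name and the echoed message are just placeholders, reusing the busybox image that appears later in this article):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod                # placeholder name
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo Pod is running ; sleep 3600']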

Usually, you don't need to create Pods directly, or even singleton Pods. Instead, you can leverage built-in Kubernetes Controllers and create Pods using workload resources such as Deployments, Jobs, or for stateful applications, StatefulSets.

What is a Kubernetes Controller?

As mentioned above, Pods aren’t usually created manually. Kubernetes creates and manages a Pod’s lifecycle for you, using components called Controllers.

The best way to explain a Kubernetes Controller is to use the common example provided in the Kubernetes official documentation. Imagine you have a thermostat and you’re feeling cold, so you set the room temperature higher.

The temperature in the room at that moment is the actual state, and the temperature you set on the thermostat is the desired state.

The thermostat works to bring the actual temperature as close to the desired temperature as possible. In a room with a fully functional thermostat, after some time, the actual temperature matches the desired temperature.

The behavior of a Kubernetes Controller is similar: You specify the desired state in the YAML manifest and Kubernetes does its magic to reach that desired state. Controllers run in a loop that checks the current state of the Kubernetes Cluster and makes changes to sync the actual state with the desired state.

A Controller tracks at least one Kubernetes resource type. Now, imagine that in your desired state you’d like to have three Pods running specific applications.

In a scenario where there’s only one Pod with that application running in the Kubernetes Cluster, the Controller will change the environment so that two extra Pods are added.

After some time, there will be three Pods running. In case of an application failure inside a Pod, the Controller’s loop detects that the desired state differs from the actual state and adds a new Pod in place of the failing one.

What are the different types of Kubernetes Controllers?

To make things even more confusing, you can use more than one type of Controller for your application, depending on your needs. There are different types of Controllers that enable you to configure behavior on the Kubernetes Cluster, listed below with a short explanation:

  • ReplicaSet – used to guarantee the availability of a specified number of identical Pods. In other words, ReplicaSet ensures a specified number of Pod replicas are running at any given time.
  • Deployment – higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with other useful features that we’ll dive into later in this article.
  • DaemonSet – used to ensure a copy of a Pod runs on all nodes (or on a subset of nodes) in the Kubernetes Cluster. A typical use case for DaemonSet is running a logs collector or a node-monitoring daemon on every node.
  • StatefulSet – used for managing the deployment and scaling of a set of Pods, providing guarantees about the ordering and uniqueness of the Pods. Unlike a deployment, a StatefulSet maintains a sticky identity for each Pod. These Pods are created from the same spec, but aren’t interchangeable: Each has a persistent identifier it maintains across rescheduling.
  • Job – used for creating Pods that run a task until completion. Jobs can also be used for batch tasks or ad-hoc operations. Jobs differ from other Kubernetes Controllers in that they run to completion, rather than continuously maintaining a desired state the way Deployments and StatefulSets do. A minimal example follows this list.
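
As a minimal sketch of the Job Controller mentioned above (the Job name and the command are just placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: busybox-job                # placeholder name
spec:
  backoffLimit: 4                  # retry up to 4 times before marking the Job as failed
  template:
    spec:
      restartPolicy: Never         # Jobs require Never or OnFailure
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo Running a one-off task ; sleep 10']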

Beyond the built-in Controllers, Kubernetes is designed to be extended by writing client programs, so it’s also possible to write your own Controller or use an existing custom Controller.

Deep dive into Kubernetes Deployment

Now that we know the basics and have an idea of the big picture, we can dive into the details of one of the main Kubernetes Controllers: Deployment.

What is a Kubernetes Deployment Controller?

Deployment handles a set of Pods without a unique identity. It’s used for stateless applications, e.g. frontend apps that don’t store any data or state. Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods. So, why do you need another Controller over ReplicaSet, which is already a Controller?

Basically, the key difference is that a Kubernetes Deployment Controller gives us a lot more configuration options than ReplicaSet. By using Kubernetes Deployments, you have more control over the life cycle of ReplicaSets and therefore more control over the life cycle of the underlying Pods.

A Kubernetes Deployment Controller also gives you the option to scale, update, and roll back Pods to the previous version.

How to use Kubernetes Deployment

Before diving into the details, let’s have a look at the Deployment manifest and how to create it. The Deployment manifest is a YAML file that contains information about the Deployment itself and a Pod template with instructions on how to run the Pods.

In the manifest, you specify how many replicas you want, which version of the container image to run, what labels and annotations should be attached to the Pods, and so on. Below is an example of a very simple Deployment YAML manifest:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
  labels:
    app: busybox
spec:
  replicas: 10
  strategy: 
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']

To create a Deployment out of YAML, you need to run the below command:
kubectl create -f myDeployment.yaml
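
Alternatively, kubectl apply -f myDeployment.yaml achieves the same result and is more convenient for later updates. Once created, you can verify the Deployment and its Pods with standard kubectl commands (the names below match the manifest above):

kubectl get deployments
kubectl get pods -l app=busybox
kubectl describe deployment busybox-deployment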

It’s worth mentioning that in the case of Pod failure, Deployment checks if the desired state is different from the actual state and tries to add a new Pod to replace the failing one. The failing Pod will be terminated when the new Pod runs.

What are the different Deployment strategies?

One of the benefits of using Deployment is that you can choose the update strategy and even roll out updates with zero downtime. It’s awesome that Kubernetes, based on your configuration, handles the whole update process for you.

The default behavior after making a change to a Deployment, e.g. by running the command below and updating labels or the image version in the YAML manifest, is that Kubernetes triggers a rolling update.

Kubernetes also takes care that only a certain number of Pods are down while they’re updated, and that only a certain number of Pods are created above the desired number of Pods (by default, 25% maximum unavailable and 25% maximum surge).

kubectl edit deployment/foo-deployment
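
These defaults can also be set explicitly in the Deployment spec. Below is a minimal sketch of the relevant fragment, using the default values mentioned above:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of the desired Pods may be unavailable during the update
      maxSurge: 25%         # at most 25% extra Pods may be created above the desired count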

But what if your new image version isn’t stable and your Pods end up in an infinite crash loop? Don’t worry, Kubernetes keeps a rollout history, so you can roll back to a previous revision in case of problems. You can even pause and resume a rollout if you want to apply many changes at the same time.
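
The rollout status, history, rollback, and pause/resume operations mentioned above are all available via kubectl, for example:

kubectl rollout status deployment/foo-deployment    # watch the progress of the current rollout
kubectl rollout history deployment/foo-deployment   # list previous revisions
kubectl rollout undo deployment/foo-deployment      # roll back to the previous revision
kubectl rollout pause deployment/foo-deployment     # pause to batch several changes
kubectl rollout resume deployment/foo-deployment    # resume and roll them out at once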

As mentioned above, the default update strategy is the rolling update (RollingUpdate), but there are others. Below is a non-exhaustive list of Deployment update strategies:

  • Rolling update Deployment (RollingUpdate) – the default strategy, where Pods are updated in a rolling fashion: before an old Pod is terminated, its replacement needs to be fully running. Additionally, maximum unavailable and maximum surge can be configured to control the update process.
  • Recreate Deployment – all existing Pods are terminated before new Pods are created.
  • Canary Deployment – described in the Kubernetes documentation; you create separate Deployments with distinguishing labels (e.g. stable and canary). Then, by adjusting the number of replicas, you can load-balance traffic between the stable/old and canary/new Deployments (see the sketch after this list).
  • Blue/Green Deployment – deploy a new application version as a separate Deployment called “Green”, keeping the “Blue” Deployment in place in the same Kubernetes Cluster in case of an issue with the new application. After a successful deployment, reroute traffic to “Green”.
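
As a rough sketch of the canary pattern described above (the names, port numbers, and replica counts are purely illustrative): two Deployments share the app label but differ in a track label, a Service selects only the shared label, and the replica counts determine the approximate traffic split.

# stable Deployment: replicas: 9, Pod labels app: foo, track: stable
# canary Deployment: replicas: 1, Pod labels app: foo, track: canary
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: foo            # matches both stable and canary Pods, so roughly 10% of traffic hits the canary
  ports:
  - port: 80
    targetPort: 8080    # assumed container port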

Choosing the right Deployment strategy is crucial for maintaining the availability and reliability of applications.

Ensuring scalability

Let’s assume you have one Pod running and you know there will soon be a spike in traffic, so one Pod won’t be enough for your application to maintain availability and reliability.

By using Deployment, you can simply scale the number of replicas to the desired value, either by using the Kubernetes CLI or by editing the YAML manifest. Below is an example of how to scale your Kubernetes Deployment using the CLI:

kubectl scale deployment/foo-deployment --replicas=5

You can leverage Deployment further by adding one more component to the equation: the Horizontal Pod Autoscaler (HPA). In the modern world, there’s no time or manpower to scale everything manually. By using HPA with a Deployment, you can let Kubernetes scale your application for you based on specific metrics.

When there’s a spike in traffic, Kubernetes automatically increases the number of replicas in the Deployment, e.g. when CPU utilization rises above the configured threshold, and then reduces the number of replicas when traffic returns to normal. Below is an example of how to autoscale a Kubernetes Deployment using the CLI:

kubectl autoscale deployment/foo-deployment --min=1 --max=10 --cpu-percent=80
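
The same behavior can be expressed declaratively. Below is a minimal sketch of an equivalent HorizontalPodAutoscaler manifest using the autoscaling/v2 API (the resource name is just a placeholder):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: foo-deployment-hpa        # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80    # scale out when average CPU utilization exceeds 80%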

Deployment vs StatefulSet

One of the most common questions is when to use a Deployment and when to use a StatefulSet, because these two are the most similar Kubernetes Controllers. The biggest difference is that Deployment is used for stateless applications; Deployment creates Pods without a unique identity and the Pods are interchangeable.

StatefulSet, on the other hand, is used for stateful applications (e.g. databases); it creates Pods in sequential order, and the Pods have a unique identity, so they aren’t interchangeable – they maintain their identities across restarts.
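
For comparison, below is a minimal StatefulSet sketch (the database image, Service name, and storage size are just placeholders). Note the serviceName field, which points to a headless Service that gives each Pod a stable DNS name, and the volumeClaimTemplates, which give each Pod its own persistent storage:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless          # headless Service providing stable DNS names (db-0, db-1, ...)
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:15          # placeholder stateful workload
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ['ReadWriteOnce']
      resources:
        requests:
          storage: 1Gi              # placeholder size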

Kubernetes Deployment: the lowdown

When it comes to deploying containerized applications into Kubernetes environments, it’s good to understand the core concepts and the advantages and disadvantages of the key components, so that when you deploy your cloud application, it’s available, scalable, reliable, and secure.

In this article, I’ve outlined the different types of built-in Kubernetes Controllers, with a deep dive into Kubernetes Deployment and how it works.

Hopefully, after reading this post, you understand there are many different Controllers and each has its own role. In some scenarios, it’s good to use a Job, in others, Deployment is enough.

And when you want to run your application on all nodes, DaemonSet does the job. Use StatefulSets when your Pods need a stable, unique identity, whereas the Pods of a Deployment are interchangeable.

Finally, you can leverage the Deployment Controller to set a suitable Deployment strategy for your application. Additionally, by combining it with CI/CD pipelines, you can create a powerful solution that updates your application with zero downtime – for example, using a Canary Deployment strategy.
