In an earlier project (Docker - Uptime Monitoring with Uptime Kuma), I covered how to set up the Docker Engine and Desktop utility. In this project, I wanted to go further and touch on Kubernetes, which sits on top of containerised solutions such as Docker. Deploying a hardened NGINX web server hits three marks: it is realistic, security-relevant, and achievable without a cloud bill or a PhD in YAML. It mirrors what you would expect in a production environment to provide access to an internal tool, documentation site, or microservice, and covers almost every Kubernetes primitive along the way.
By default, an NGINX container runs as root, has no resource limits, and can read or write anywhere on the node. This project adds a non-root user, a read-only filesystem, dropped Linux capabilities, and CPU/memory quotas; the kind of controls you would need in a production environment.
Kubernetes is simply the orchestration layer that runs containers at scale, handling automatic restarts, load balancing, rollbacks, self-healing for high availability, and security controls. For more details on Kubernetes, check out my notes - Docker & Kubernetes. Before touching the terminal, it is worth building a clear mental picture of the four foundational Kubernetes concepts.
The hierarchy flows inward: a Cluster contains one or more Nodes. Nodes run one or more Pods. A Service sits across the cluster and routes traffic to Pods by label; it is not bound to a specific node. This layering gives Kubernetes its resilience: any layer can fail, and the others compensate.
The cluster is the entire Kubernetes environment - the 'datacenter'; the sum of all machines (nodes), networking, and control logic that Kubernetes manages. It has two parts:
Control Plane (aka Master Node) - handles decisions related to scheduling, health checks, desired state, etc.
Worker Nodes - managed by the control plane and actually run the workloads.
In a production environment, a cluster might span dozens of physical servers across multiple availability zones.
For this project, minikube simulates an entire cluster inside a single Docker container.
Within the cluster, a node is a single machine (physical or virtual) that runs the containers. Nodes provide the CPU, memory, and networking resources for Pods to run. Each node runs essential components to communicate with the control plane and manage Pods locally:
Kubelet: an agent that talks to the Control Plane and ensures the containers within Pods are running and healthy.
Container Runtime: the software responsible for pulling container images and running the containers.
Kube-Proxy: a network proxy that maintains network rules on the node, enabling network communication to and from pods.
The Control Plane's scheduler decides which node gets each Pod, based on available CPU, memory, and any constraints declared. Developers rarely address nodes directly; they describe what they want, and Kubernetes figures out placement.
A Pod is the smallest deployable unit in Kubernetes; a wrapper around one or more containers that are tightly coupled and need to share resources - network namespace and storage.
Shared Resources: Every container within a Pod shares the same IP address, port space, and storage volumes, allowing them to communicate easily using localhost.
Deployment: Pods are rarely created directly; instead, a Deployment is used, which is a controller that manages a set of identical Pods.
Ephemeral Nature: Pods are designed to be temporary; when a pod is terminated, deleted, or a node fails, it is not rescheduled to another node. The Deployment controller creates a new replacement to match the desired replica count - self-healing.
A Service solves the fundamental problem: Pods are ephemeral. Every time a Pod is restarted or replaced, it gets a new IP address. If other parts of your system were talking directly to a Pod's IP, they'd lose the connection every time it restarted.
A Service provides a stable virtual IP and DNS name that always routes to the current healthy Pods, selected by label. It acts as an internal load balancer. There are three common types: ClusterIP (internal only), NodePort (accessible from outside the cluster on a specific port), and LoadBalancer (provisions a cloud load balancer).
Only three tools are needed for this project:
Docker Engine
Minikube - An open-source tool that sets up a lightweight, single-node Kubernetes cluster on your local machine (Windows, macOS, Linux) for easy development, testing, and learning. Think of minikube as a miniature, local version of a full Kubernetes cluster.
Kubectl - The official command-line interface (CLI) tool for interacting with Kubernetes clusters. It acts as an intermediary, sending commands to the Kubernetes API server to deploy, inspect, manage, and troubleshoot containerised applications, as well as manage cluster resources.
The official Minikube (minikube.sigs.k8s.io) and Kubectl (kubernetes.io/docs/tasks/tools) websites provide installation instructions. I did not need the Docker Desktop utility for this project. Run the following commands to confirm each has been installed successfully:
minikube - minikube version
kubectl - kubectl version
Launch minikube using Docker as the underlying driver. This spins up a single-node Kubernetes cluster inside a Docker container on your machine. Running the kubectl cluster-info command queries the Control Plane's API server. The cluster is the invisible foundation that every subsequent command operates against.
# Start minikube with Docker driver
minikube start --driver=docker
# Verify the cluster is up
kubectl cluster-info
The command below shows the single minikube node in Ready status.
kubectl get nodes
Instead of baking the HTML into the image (which would require a Dockerfile), the HTML will be injected at runtime via a ConfigMap (see right - save this file as nginx-content.yaml): an API object used to store non-confidential configuration data as key-value pairs, decoupling configuration artefacts from container images. This is Kubernetes's way of externalising configuration, and a core security and ops best practice: the image stays generic and untouched. Then execute:
kubectl apply -f nginx-content.yaml
This command tells Kubernetes to read the 'nginx-content.yaml' file and create a ConfigMap named 'nginx-html' in the default namespace, storing the index.html content as a key-value pair inside the cluster.
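For reference, a minimal sketch of what nginx-content.yaml could look like; the ConfigMap name (nginx-html) and key (index.html) match the text above, but the HTML body here is illustrative, as the real file is shown in the screenshot:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-html
data:
  # The key becomes a filename when the ConfigMap is mounted as a volume
  index.html: |
    <!DOCTYPE html>
    <html>
      <head><title>Hardened NGINX on Kubernetes</title></head>
      <body><h1>Served from a ConfigMap</h1></body>
    </html>
```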
This is the most important file - nginx-deployment.yaml (see right - nginx-deployment.yaml). It defines a Deployment that maintains exactly 2 Pods, each running the nginx:1.25-alpine container. The securityContext, volumeMounts, resource limits, and liveness probe are all Pod-level configurations. When a Pod is deleted (as you will do in Step 6), the Deployment immediately schedules a replacement Pod onto the node. Think of it like renting office space for an employee.
The Deployment is the rental agreement; it tells Kubernetes, "I need 2 identical offices set up, and if one ever gets destroyed, immediately set up a replacement." That's what replicas: 2 means: always maintain exactly two running copies.
The Container is the employee; in this case, it is an NGINX web server, whose only job is to serve your webpage to anyone who requests it. The employee 'hired' is image: nginx:1.25-alpine with the required skillset.
The security settings are the office rules. This is where the "secure" part comes in, written in plain terms:
runAsNonRoot: true - The employee is not allowed to have master keys to the building. They can only access their own office.
readOnlyRootFilesystem: true - The employee cannot rearrange the furniture or bring anything new into the office. They can only work with what's already there.
capabilities: drop: ALL - The employee has no special privileges whatsoever; no access to the server room, no ability to override locks, nothing beyond their basic job.
allowPrivilegeEscalation: false - The employee cannot promote themselves to a higher access level while they are on the job.
The resource limits are the utility caps. An office lease might cap electricity usage so one tenant cannot run up the whole building's bill. CPU: 200m and memory: 128Mi ensure this employee can never consume so many resources that they starve out everything else running on the same machine.
The volume mounts are the filing cabinets. The html-content mount is a locked filing cabinet; the employee can read files from it, but cannot change them.
The liveness probe is the manager doing check-ins. Every 10 seconds, Kubernetes knocks on the door and asks, "Are you still working?" If there is no response from NGINX, Kubernetes assumes something went wrong and replaces that employee with a fresh one automatically, no human intervention needed.
Step 3 is you telling Kubernetes exactly what kind of worker to hire, how many to keep on staff at all times, what rules they must follow, what resources they are allowed to use, and how to know if one has stopped doing their job.
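The office analogy above can be tied back to the actual manifest. Below is a sketch of how nginx-deployment.yaml could look, reconstructed from the behaviour described in this write-up; the mount paths, the runAsUser value, and the names of the writable emptyDir volumes are assumptions, not a copy of the real file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-secure
spec:
  replicas: 2                       # the "rental agreement": always two offices
  selector:
    matchLabels:
      app: nginx-secure
  template:
    metadata:
      labels:
        app: nginx-secure           # the label the Service will select on
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine  # the "employee"
          ports:
            - containerPort: 80
          securityContext:          # the "office rules"
            runAsNonRoot: true
            runAsUser: 101          # assumption: the 'nginx' user in the alpine image
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          resources:                # the "utility caps"
            limits:
              cpu: 200m
              memory: 128Mi
          livenessProbe:            # the manager's 10-second check-ins
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
          volumeMounts:
            - name: html-content    # the locked filing cabinet
              mountPath: /usr/share/nginx/html   # assumption: NGINX default web root
              readOnly: true
            - name: nginx-cache     # writable scratch space NGINX needs at startup
              mountPath: /var/cache/nginx
            - name: nginx-run
              mountPath: /var/run
      volumes:
        - name: html-content
          configMap:
            name: nginx-html
        - name: nginx-cache
          emptyDir: {}
        - name: nginx-run
          emptyDir: {}
```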
Execute: kubectl apply -f nginx-deployment.yaml
Watch the pods come up: kubectl get pods -w
Up to this point, the two NGINX Pods each have their own internal IP address inside the cluster. But these IPs are unstable; every time a Pod restarts, crashes, or gets replaced, it comes back with a completely different IP. Also, there is no built-in way to reach those Pods from outside the cluster, e.g. from a web browser.
The Service solves both problems at once. A Service gives the Pods a stable IP and DNS name inside the cluster, and optionally exposes them externally. The nginx-service.yaml file (see right) tells Kubernetes this is a Service resource, gives the Service a stable DNS name inside the cluster (other Pods could reach your NGINX by calling nginx-service rather than an IP address), and most importantly, to watch for any Pod in the cluster carrying the label app: nginx-secure. The Deployment above stamps this label onto every Pod it creates; so as Pods die and get replaced, the Service automatically discovers the new ones and starts routing to them. No manual update needed.
As covered under the Four Pillars section, a NodePort will be used to access the Service from outside the cluster, e.g., from a browser.
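A sketch of what nginx-service.yaml could contain, assuming the Service name and label from the text; the port numbers are the NGINX defaults described later in this write-up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort                # reachable from outside the cluster
  selector:
    app: nginx-secure           # routes to any Pod carrying this label
  ports:
    - port: 80                  # the Service's stable port inside the cluster
      targetPort: 80            # the containerPort NGINX listens on
      # nodePort is auto-assigned from 30000-32767 unless specified
```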
Execute: kubectl apply -f nginx-service.yaml
Normally, with NodePort, I would need to manually find the Node's IP address and the assigned port number, then construct the URL myself. The minikube command (below) does all of that automatically: it looks up the Node IP, finds the NodePort that was assigned, builds the full URL, and opens it in the default browser in one step.
minikube service nginx-service
The minikube service command opens the default browser with the correct node IP, port, and URL.
The End-to-End Traffic Flow
When the browser loads the page, the request travels as follows:
Browser hits the minikube Node IP on the NodePort (e.g. 192.168.49.2:32666)
The Node forwards it to the Service on port 80
The Service checks its selector, finds both healthy NGINX Pods, and routes the request to one of them, load-balancing across the replicas
The chosen Pod's NGINX process receives the request on port 80
NGINX reads index.html from the ConfigMap volume mount and sends it back
Every subsequent request may land on a different Pod; the Service is quietly load-balancing between the two replicas the entire time.
Next, I confirmed the security settings defined within the Deployment file are in effect. These are the kinds of commands a security or DevOps engineer would run during an audit.
Get the Pod name (user-friendly variable): POD=$(kubectl get pods -l app=nginx-secure -o jsonpath='{.items[0].metadata.name}')
Try to write to the root filesystem (should FAIL): kubectl exec $POD -- touch /test-write
Check what user the process runs as (nginx, not root): kubectl exec $POD -- whoami
Confirm resource limits are set: kubectl describe pod $POD | grep -A4 "Limits"
Describe the deployment for full spec: kubectl describe deployment nginx-secure
This is by far the most satisfying feature of Kubernetes: delete a pod and watch it come back automatically. This is the core value proposition of a Deployment over bare containers.
Assuming the same terminal session is still active from the previous step, you can delete a pod using the command kubectl delete pod $POD (see first screenshot below); otherwise, retrieve the pod name with kubectl get pods -A and run kubectl delete pod <pod-name>. The first screenshot below shows two terminal sessions: in the first, I kill one pod, and in the second, you see one pod being terminated and another being created to take its place. This is self-healing. The Deployment controller's desired state says "2 replicas." The moment one pod dies, the controller creates a new one. No manual intervention needed.
In the above screenshot, in the second terminal, you can see the messages indicating the pod nginx-secure-66f5867769-j8dj6 has been terminated and replaced with nginx-secure-66f5867769-lv9qb. The screenshot below shows snapshots of the process.
💡SIDE NOTE: You can actively monitor pods using the command kubectl get events -A --watch (second terminal above) or kubectl get pods -w.
Pod nginx-secure-66f5867769-j8dj6 terminated after running for 22 minutes and was replaced with nginx-secure-66f5867769-lv9qb.
Another valuable operational feature of Kubernetes is the ability to update a live application without downtime and instantly reverse a bad update. Both are built into the Deployment controller; there is no need for extra tools. For this step, I updated the ConfigMap and reapplied it, then instructed Kubernetes to perform a rolling update, bringing new Pods up before taking old ones down; the web server never goes offline.
To edit the ConfigMap (change the HTML content), execute:
kubectl edit configmap nginx-html
You will be presented with the file displayed in the right screenshot within the Vim editor. Update the file and save the changes; the ConfigMap is updated inside the cluster immediately (without affecting the original nginx-content.yaml file).
However, the pods do not automatically reload it. The ConfigMap was mounted into the container's filesystem at startup, and the running NGINX process has no awareness that the source data changed. The Pods are still serving the old HTML.
To force Pods to pick up the new content, you trigger a rolling restart:
kubectl rollout restart deployment nginx-secure
This doesn't delete all Pods at once. It tells the Deployment controller to cycle through the Pods gradually; this is how rolling updates work.
Edited the original ConfigMap HTML content to now include an image.
How a Rolling Update Works
The Deployment controller follows a controlled replacement strategy. With the 2 replicas, the default behaviour is:
Spin up one new Pod with the updated configuration
Wait for that new Pod to pass its liveness probe and reach Running status
Only then, terminate one old Pod
Repeat until all old Pods are replaced
At no point are both Pods down simultaneously. There is always at least one Pod serving traffic during the entire process. You can view the rollout process using the commands below; they report progress in real time and tell you when the rollout is fully complete.
kubectl rollout status deployment nginx-secure or kubectl get pods -w
If the rollout never completes, for example, because the new Pods are crash-looping, the command keeps waiting. That is your signal that something is wrong, at which point you should check kubectl get pods and kubectl logs to investigate.
Successful rollout of HTML content update to include the Kubernetes image.
ConfigMap update and rollout.
How Rollback Works
Kubernetes automatically saves your Deployment's previous configuration as a revision every time you apply a change. When you run:
kubectl rollout undo deployment nginx-secure
The Deployment controller performs the same rolling process in reverse; it brings the previous revision's Pods up before taking the current ones down. Your application returns to its prior state with no downtime, usually within seconds.
You can see how many revisions are saved with: kubectl rollout history deployment nginx-secure
And roll back to a specific revision rather than just the previous one: kubectl rollout undo deployment nginx-secure --to-revision=1
By the time two NGINX pods were running stably, passing their liveness probes, and serving a custom HTML page, this project had replicated the exact operational pattern used by engineering teams running production workloads on Kubernetes every day. Concepts covered include:
Cluster and Node gave the mental model that underpins everything else. Understanding that the Control Plane makes decisions while Nodes do the actual work explains why Kubernetes can recover from failures automatically; the two responsibilities are separated by design.
ConfigMap teaches the right way to inject content and configuration into containers without modifying the image. This separation between code and configuration is a foundational DevOps principle, and it is what makes the same Docker image deployable across development, staging, and production environments without rebuilding it.
Deployment is where Kubernetes earns its name as an orchestrator. Defining replicas: 2 and watching the cluster enforce that number shifts the infrastructure-management paradigm from imperative ("do this") to declarative ("maintain this").
SecurityContext and resource limits connected the project to real-world cybersecurity practice. Running as a non-root user, dropping all Linux capabilities, enforcing a read-only filesystem, and capping CPU and memory are not advanced topics; they are baseline hardening that every production workload should have.
Service solved the network problem that makes containers hard to use in practice: the instability of Pod IPs. Understanding that a Service selects Pods by label rather than by IP address is the insight that makes Kubernetes networking click. It also introduced load balancing as a feature you got for free, without any additional configuration.
Rolling updates and rollback demonstrated that Kubernetes is not just a place to run containers, but an environment for changing them safely. Zero-downtime deployments and one-command rollbacks are the operational capabilities that make continuous delivery practical at scale.
Below is a quick summary of the security concepts I tried to cover within this project.
This project did not work on the first try, and that is worth documenting. The two errors encountered during this project are common, instructive, and directly caused by the security controls I intentionally applied.
After applying the initial deployment, both pods entered CrashLoopBackOff (see the screenshot on the right). Running kubectl logs <pod-name> --previous revealed the following:
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
2025/9/21 06:38:37 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
Setting readOnlyRootFilesystem: true in the security context makes the entire container filesystem read-only, exactly as intended. However, NGINX does not just read files. At startup, it creates a set of temporary directories under /var/cache/nginx/ to buffer client request data, proxy responses, and FastCGI output. Because the root filesystem was read-only and we had not mounted a writable volume at that path, NGINX immediately crashed trying to create those directories.
The fix: Add a fourth emptyDir volume mounted at /var/cache/nginx. An emptyDir volume is an empty, writable, temporary directory that Kubernetes creates fresh for each Pod. It is wiped when the Pod is deleted, which is exactly what you want for cache data. This preserves the read-only root filesystem everywhere else while giving NGINX precisely the writable path it needs.
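In YAML terms, the fix amounts to the fragment below, added to nginx-deployment.yaml; the field paths are shown as comments, and the volume name is illustrative:

```yaml
# Under spec.template.spec.containers[].volumeMounts:
- name: nginx-cache
  mountPath: /var/cache/nginx   # the only extra writable path NGINX needs here

# Under spec.template.spec.volumes:
- name: nginx-cache
  emptyDir: {}                  # fresh, writable, wiped when the Pod is deleted
```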
Security principle reinforced: A read-only root filesystem is one of the most effective container hardening controls available. It prevents an attacker who gains code execution inside a container from writing malware, dropping a web shell, or modifying application binaries. The correct response to this error is never to disable the control; it is to identify exactly which paths the application legitimately needs to write to and expose only those as writable volumes.
After fixing the filesystem error, both pods reached Running status, but kept restarting every 30 seconds. The previous logs showed NGINX starting cleanly and then shutting down gracefully via SIGQUIT, with exit code 0. NGINX was not crashing. Kubernetes was killing it.
2025/9/21 06:59:37 [notice] 1#1: start worker processes
2025/9/21 06:59:37 [notice] 1#1: start worker process 21
2025/9/21 07:00:06 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2025/9/21 07:00:06 [notice] 1#1: worker process 21 exited with code 0
2025/9/21 07:00:06 [notice] 1#1: exit
The liveness probe was configured to send HTTP requests to port 8080. But the standard nginx:1.25-alpine image listens on port 80 by default. Every probe check was hitting a closed port, receiving no response, and being counted as a failure. After three consecutive failures (Kubernetes' default threshold), the control plane concluded the Pod was unhealthy and sent SIGQUIT to restart it, even though NGINX was serving traffic perfectly on port 80 the entire time.
The fix: Update the liveness probe port, the container port declaration (nginx-deployment.yaml), and the Service's targetPort (nginx-service.yaml) to all consistently use port 80.
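The corrected fragments would look roughly like this (sketched from the description above, not copied from the real files):

```yaml
# nginx-deployment.yaml - container spec
ports:
  - containerPort: 80     # was 8080
livenessProbe:
  httpGet:
    path: /
    port: 80              # was 8080; must match what NGINX actually listens on
  periodSeconds: 10

# nginx-service.yaml - port mapping
ports:
  - port: 80
    targetPort: 80        # was 8080
```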
Security principle reinforced: A misconfigured liveness probe can be a major problem in a production environment, as it causes unnecessary Pod churn, increases the attack surface during restarts, and can trigger cascading failures if many Pods restart simultaneously. Probes must be configured to match the application's actual behaviour. The diagnostic process here, ruling out application crashes by reading the logs, identifying the graceful SIGQUIT shutdown, and tracing the cause back to the probe, is a realistic example of the kind of reasoning a SOC analyst applies when triaging unexpected container restarts in a live environment.
Cluster Architecture - https://kubernetes.io/docs/concepts/architecture/
Understanding the Basic Concepts of a Kubernetes(k8s) Cluster - https://www.prakashbhandari.com.np/posts/understanding-the-basic-concepts-of-kubernetes-cluster/
What is Minikube? - https://www.sysdig.com/learn-cloud-native/what-is-minikube