The joy of Kubernetes

The article explores Kubernetes through practical, hands-on examples in a joyful manner.

The content is structured for clarity and depth while staying focused on real-world use cases. By following the hands-on approach used throughout, readers will gain a solid understanding of core Kubernetes concepts such as pod management, service discovery, and more.

The article was inspired by the book Kubernetes in Action by Marko Lukša, and the official Kubernetes documentation served as the primary reference while preparing it. I strongly recommend familiarizing yourself with both of these references in advance.

Enjoy!

Table Of Contents

  • Kubernetes in Docker
  • Pods
  • Namespaces
  • ReplicaSet
  • DaemonSet
  • Jobs
  • CronJob
  • Service
  • Ingress
  • Probes
  • Volumes
  • ConfigMaps
  • Secrets
  • StatefulSet (TODO)

Kubernetes in Docker

kind is a tool for running local Kubernetes clusters using Docker container nodes.

Create a cluster

# kind-cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerPort: 6443
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40000
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40001
- role: worker
  extraPortMappings:
  - containerPort: 30666
    hostPort: 40002
$ kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.33.1) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.33.1) 🖼
 • Preparing nodes 📦 📦 📦 📦   ...
 ✓ Preparing nodes 📦 📦 📦 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
 • Joining worker nodes 🚜  ...
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Cluster info

$ kind get clusters
kind
$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Cluster nodes

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   39m   v1.33.1
kind-worker          Ready    <none>          39m   v1.33.1
kind-worker2         Ready    <none>          39m   v1.33.1
kind-worker3         Ready    <none>          39m   v1.33.1

Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

Create a pod

Imperative way

$ kubectl run kubia --image=luksa/kubia --port=8080
pod/kubia created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          5m26s

Declarative way

# pod-basic.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-basic.yaml
pod/kubia created
$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   0          9s

Logs

$ kubectl logs kubia
Kubia server starting...

Logs from a specific container in the pod:

$ kubectl logs kubia -c kubia
Kubia server starting...

Port forwarding from host to pod

$ kubectl port-forward kubia 30000:8080
Forwarding from 127.0.0.1:30000 -> 8080
Forwarding from [::1]:30000 -> 8080
$ curl -s localhost:30000
You've hit kubia

Labels and Selectors

Labels are key/value pairs that are attached to objects such as Pods.

Labels

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created
$ kubectl get po --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia          1/1     Running   0          4d22h   <none>
kubia-labels   1/1     Running   0          30s     env=dev,tier=backend
$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          20m     backend   dev
$ kubectl label po kubia-labels env=test
error: 'env' already has a value (dev), and --overwrite is false

$ kubectl label po kubia-labels env=test --overwrite
pod/kubia-labels labeled

$ kubectl get po --label-columns tier,env
NAME           READY   STATUS    RESTARTS   AGE     TIER      ENV
kubia          1/1     Running   0          4d22h
kubia-labels   1/1     Running   0          24m     backend   test

Selectors

$ kubectl get po -l 'env' --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h25m   env=test,tier=backend

$ kubectl get po -l '!env' --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
kubia   1/1     Running   0          5d1h   <none>

$ kubectl get po -l tier=backend --show-labels
NAME           READY   STATUS    RESTARTS   AGE     LABELS
kubia-labels   1/1     Running   0          3h28m   env=test,tier=backend
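
Selectors also support set-based expressions (in, notin) and can combine several requirements with commas. A quick sketch against the pods above (output omitted):

$ kubectl get po -l 'env in (dev,test)' --show-labels

$ kubectl get po -l 'tier=backend,env!=dev' --show-labels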

Annotations

You can use annotations to attach arbitrary non-identifying metadata to objects.

# pod-annotations.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-annotations
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl create -f pod-annotations.yaml
pod/kubia-annotations created

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: https://hub.docker.com/
$ kubectl annotate pod/kubia-annotations imageregistry=nexus.org --overwrite
pod/kubia-annotations annotated

$ kubectl describe pod kubia-annotations | grep Annotations
Annotations:      imageregistry: nexus.org
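
You can also read a single annotation back with JSONPath (a small sketch, not part of the original transcript), which should print nexus.org at this point:

$ kubectl get pod kubia-annotations -o jsonpath='{.metadata.annotations.imageregistry}'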

Namespaces

Namespaces provide a mechanism for isolating groups of resources within a single cluster.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d2h
kube-node-lease      Active   5d2h
kube-public          Active   5d2h
kube-system          Active   5d2h
local-path-storage   Active   5d2h
$ kubectl get pods --namespace=default
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          19m
kubia-labels        1/1     Running   0          4h15m
$ kubectl create namespace custom-namespace
namespace/custom-namespace created

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.
$ kubectl run nginx --image=nginx --namespace=custom-namespace
pod/nginx created

$ kubectl get pods --namespace=custom-namespace
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          61s
$ kubectl config set-context --current --namespace=custom-namespace
Context "kind-kind" modified.

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m57s

$ kubectl config set-context --current --namespace=default
Context "kind-kind" modified.

$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
kubia               1/1     Running   0          5d2h
kubia-annotations   1/1     Running   0          30m
kubia-labels        1/1     Running   0          4h26m
$ kubectl delete ns custom-namespace
namespace "custom-namespace" deleted

$ kubectl get pods --namespace=custom-namespace
No resources found in custom-namespace namespace.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h
$ kubectl delete po --all
pod "kubia" deleted
pod "kubia-annotations" deleted
pod "kubia-labels" deleted

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   5d3h
kube-node-lease      Active   5d3h
kube-public          Active   5d3h
kube-system          Active   5d3h
local-path-storage   Active   5d3h

$ kubectl get pods --namespace=default
No resources found in default namespace.
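
For completeness, a namespace can also be created declaratively, mirroring the imperative command above (the file name is arbitrary):

# namespace-custom.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
$ kubectl create -f namespace-custom.yaml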

ReplicaSet

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

# replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia

$ kubectl create -f replicaset.yaml
replicaset.apps/kubia created

$ kubectl get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-5l82z   1/1     Running   0          5s
kubia-bkjwk   1/1     Running   0          5s
kubia-k78j5   1/1     Running   0          5s

$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       64s
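
To see the controller at work, you can scale the ReplicaSet or delete one of its pods and watch a replacement appear. A quick sketch, not part of the original transcript (output omitted):

$ kubectl scale rs kubia --replicas=5

$ kubectl delete po kubia-5l82z

$ kubectl get po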
$ kubectl delete rs kubia
replicaset.apps "kubia" deleted

$ kubectl get rs
No resources found in default namespace.

$ kubectl get po
NAME          READY   STATUS        RESTARTS   AGE
kubia-5l82z   1/1     Terminating   0          5m30s
kubia-bkjwk   1/1     Terminating   0          5m30s
kubia-k78j5   1/1     Terminating   0          5m30s

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.

# daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: fluentd
        image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1
$ kubectl create -f daemonset.yaml
daemonset.apps/fluentd created

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   0         0         0       0            0           disk=ssd        115s

$ kubectl get po
No resources found in default namespace.

$ kubectl get node
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   5d21h   v1.33.1
kind-worker          Ready    <none>          5d21h   v1.33.1
kind-worker2         Ready    <none>          5d21h   v1.33.1
kind-worker3         Ready    <none>          5d21h   v1.33.1

$ kubectl label node kind-worker3 disk=ssd
node/kind-worker3 labeled

$ kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   1         1         1       1            1           disk=ssd        3m49s

$ kubectl get po
NAME            READY   STATUS    RESTARTS   AGE
fluentd-cslcb   1/1     Running   0          39s
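
Conversely, removing the label again evicts the DaemonSet pod from that node. A quick sketch (output omitted):

$ kubectl label node kind-worker3 disk-

$ kubectl get ds

$ kubectl get po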
$ kubectl delete ds fluentd
daemonset.apps "fluentd" deleted

$ kubectl get ds
No resources found in default namespace.

$ kubectl get po
No resources found in default namespace.

Jobs

Jobs represent one-off tasks that run to completion and then stop.

# job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
$ kubectl create -f job.yaml
job.batch/pi created

$ kubectl get jobs
NAME   STATUS    COMPLETIONS   DURATION   AGE
pi     Running   0/1           34s        34s

$ kubectl get jobs
NAME   STATUS     COMPLETIONS   DURATION   AGE
pi     Complete   1/1           54s        62s

$ kubectl get po
NAME       READY   STATUS      RESTARTS   AGE
pi-8rdmn   0/1     Completed   0          2m1s

$ kubectl events pod/pi-8rdmn
LAST SEEN   TYPE     REASON             OBJECT         MESSAGE
3m44s       Normal   Scheduled          Pod/pi-8rdmn   Successfully assigned default/pi-8rdmn to kind-worker2
3m44s       Normal   Pulling            Pod/pi-8rdmn   Pulling image "perl:5.34.0"
3m44s       Normal   SuccessfulCreate   Job/pi         Created pod: pi-8rdmn
2m59s       Normal   Pulled             Pod/pi-8rdmn   Successfully pulled image "perl:5.34.0" in 44.842s (44.842s including waiting). Image size: 336374010 bytes.
2m59s       Normal   Created            Pod/pi-8rdmn   Created container: pi
2m59s       Normal   Started            Pod/pi-8rdmn   Started container pi
2m50s       Normal   Completed          Job/pi         Job completed
$ kubectl delete job/pi
job.batch "pi" deleted

$ kubectl get po
No resources found in default namespace.
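
Jobs can also run multiple pods, either sequentially or in parallel. A minimal sketch (not used in the transcript above) that runs three completions, at most two at a time:

# job-parallel.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 3      # run the pod template to completion 3 times in total
  parallelism: 2      # at most 2 pods run at the same time
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(1000)"]
      restartPolicy: Never
$ kubectl create -f job-parallel.yaml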

CronJob

CronJob starts one-time Jobs on a repeating schedule.

# cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl create -f cronjob.yaml
cronjob.batch/hello created

$ kubectl get cronjobs
NAME    SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   * * * * *   <none>     False     0        8s              55s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          30s

$ kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
hello-29223074-gsztp   0/1     Completed   0          106s
hello-29223075-9r7kx   0/1     Completed   0          46s
$ kubectl delete cronjobs/hello
cronjob.batch "hello" deleted

$ kubectl get cronjobs
No resources found in default namespace.

$ kubectl get pods
No resources found in default namespace.
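
Job history and concurrency behaviour can be tuned on the CronJob spec. A hedged fragment showing the most common knobs (the schedule value is just an example):

spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid        # do not start a new Job while the previous one is still running
  startingDeadlineSeconds: 60      # skip a run if it could not start within 60s of its scheduled time
  successfulJobsHistoryLimit: 3    # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1        # keep only the last failed Job
  suspend: false                   # set to true to pause scheduling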

Service

Service is a method for exposing a network application that is running as one or more Pods in your cluster.

There are several Service types supported in Kubernetes:

  • ClusterIP
  • NodePort
  • ExternalName
  • LoadBalancer

ClusterIP

Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.

# pod-labels.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-labels
  labels:
    tier: backend
    env: dev
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
# service-basic.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  selector:
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
$ kubectl create -f pod-labels.yaml
pod/kubia-labels created

$ kubectl create -f service-basic.yaml
service/kubia-svc created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21d
kubia-svc    ClusterIP   10.96.158.86   <none>        80/TCP    5s

$ kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
kubia-labels   1/1     Running   0          116s

$ kubectl exec kubia-labels -- curl -s http://10.96.158.86:80
You've hit kubia-labels
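
The Service forwards traffic to the pods selected by its label selector via Endpoints/EndpointSlice objects. You can check which pod IPs currently back the Service (the exact output depends on the pod IP, so it is omitted here):

$ kubectl get endpoints kubia-svc

$ kubectl get endpointslices -l kubernetes.io/service-name=kubia-svc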
$ kubectl delete -f service-basic.yaml
service "kubia-svc" deleted

$ kubectl delete -f pod-labels.yaml
pod "kubia-labels" deleted
# pod-nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: http-web-svc
# service-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-web-svc
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx.yaml
service/nginx-svc created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m51s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    21d
nginx-svc    ClusterIP   10.96.230.243   <none>        8080/TCP   32s

$ kubectl exec nginx -- curl -sI http://10.96.230.243:8080
HTTP/1.1 200 OK
Server: nginx/1.28.0
Date: Thu, 07 Aug 2025 12:09:24 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 23 Apr 2025 11:48:54 GMT
Connection: keep-alive
ETag: "6808d3a6-267"
Accept-Ranges: bytes

$ kubectl exec nginx -- curl -sI http://nginx-svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ kubectl exec nginx -- curl -sI http://nginx-svc.default.svc.cluster.local:8080 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

ExternalName

Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.

# service-ext.yaml

apiVersion: v1
kind: Service
metadata:
  name: httpbin-service
spec:
  type: ExternalName
  externalName: httpbin.org
$ kubectl create -f service-ext.yaml
service/httpbin-service created

$ kubectl create -f pod-basic.yaml
pod/kubia created

$ kubectl get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
httpbin-service   ExternalName   <none>       httpbin.org   <none>    4m17s
kubernetes        ClusterIP      10.96.0.1    <none>        443/TCP   22d

$ kubectl exec kubia -- curl -sk -X GET https://httpbin-service/uuid -H "accept: application/json"
{
  "uuid": "6a48fe51-a6b6-4e0a-9ef2-381ba7ea2c69"
}
$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

$ kubectl delete -f service-ext.yaml
service "httpbin-service" deleted

NodePort

Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.

# service-nginx-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - port: 8080
    targetPort: http-web-svc
    nodePort: 30666
$ kubectl create -f pod-nginx.yaml
pod/nginx created

$ kubectl create -f service-nginx-nodeport.yaml
service/nginx-svc created

$ kubectl get svc nginx-svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nginx-svc   NodePort   10.96.252.35   <none>        8080:30666/TCP   9s

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                      NAMES
da2c842ddfd6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40000->30666/tcp   kind-worker
16bf718b93b6   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   127.0.0.1:6443->6443/tcp   kind-control-plane
bb18cefdb180   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40002->30666/tcp   kind-worker3
42cea7794f0b   kindest/node:v1.33.1   "/usr/local/bin/entr…"   3 weeks ago   Up 3 weeks   0.0.0.0:40001->30666/tcp   kind-worker2

$ curl -sI http://localhost:40000 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40001 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0

$ curl -sI http://localhost:40002 | head -n 2
HTTP/1.1 200 OK
Server: nginx/1.28.0
$ kubectl delete -f service-nginx-nodeport.yaml
service "nginx-svc" deleted

$ kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

LoadBalancer

Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

Let's take a look at how to get a Service of type LoadBalancer working in a kind cluster using Cloud Provider KIND.

# service-lb-demo.yaml

kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: foo-app

---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: http-echo
spec:
  containers:
  - command:
    - /agnhost
    - serve-hostname
    - --http=true
    - --port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: bar-app

---
kind: Service
apiVersion: v1
metadata:
  name: http-echo-service
spec:
  type: LoadBalancer
  selector:
    app: http-echo
  ports:
  - port: 5678
    targetPort: 8080
$ kubectl create -f service-lb-demo.yaml
pod/foo-app created
pod/bar-app created
service/http-echo-service created

$ kubectl get svc http-echo-service
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
http-echo-service   LoadBalancer   10.96.97.99   172.18.0.6    5678:31196/TCP   58s

$ kubectl get svc http-echo-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ for _ in {1..4}; do curl -s 172.18.0.6:5678; echo; done
foo-app
bar-app
bar-app
foo-app
$ kubectl delete -f service-lb-demo.yaml
pod "foo-app" deleted
pod "bar-app" deleted
service "http-echo-service" deleted

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.

Ingress Controller

In order for an Ingress to work in your cluster, there must be an Ingress Controller running.

You need to run Cloud Provider KIND to provide the load balancer implementation that the NGINX Ingress controller consumes through the LoadBalancer API in a kind cluster.

$ kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
$ kubectl wait --namespace ingress-nginx \
>   --for=condition=ready pod \
>   --selector=app.kubernetes.io/component=controller \
>   --timeout=90s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w condition met
$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-ldc97        0/1     Completed   0          2m25s
pod/ingress-nginx-admission-patch-zzlh7         0/1     Completed   0          2m25s
pod/ingress-nginx-controller-86bb9f8d4b-4hg7w   1/1     Running     0          2m25s

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   2m25s
service/ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      2m25s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m25s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-86bb9f8d4b   1         1         1       2m25s

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           11s        2m25s
job.batch/ingress-nginx-admission-patch    Complete   1/1           12s        2m25s

Ingress resources

The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API. Traffic routing is controlled by rules defined on the Ingress resource.

Basic usage

# pod-foo-bar.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-foo
  labels:
    app: foo
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
      name: http-port

---
apiVersion: v1
kind: Pod
metadata:
  name: kubia-bar
  labels:
    app: bar
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      name: http-port
# service-foo-bar.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-foo-svc
spec:
  selector:
    app: foo
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port

---
apiVersion: v1
kind: Service
metadata:
  name: kubia-bar-svc
spec:
  selector:
    app: bar
  ports:
    - name: http-port
      protocol: TCP
      port: 8080
      targetPort: http-port
# ingress-basic.yaml 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f pod-foo-bar.yaml
pod/kubia-foo created
pod/kubia-bar created

$ kubectl create -f service-foo-bar.yaml
service/kubia-foo-svc created
service/kubia-bar-svc created

$ kubectl create -f ingress-basic.yaml
ingress.networking.k8s.io/kubia created
$ kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   22d
kubia-bar-svc   ClusterIP   10.96.230.115   <none>        80/TCP    4m12s
kubia-foo-svc   ClusterIP   10.96.49.21     <none>        80/TCP    4m13s

$ kubectl get ingress
NAME    CLASS    HOSTS   ADDRESS     PORTS   AGE
kubia   <none>   *       localhost   80      67s

$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.146.11   172.18.0.6    80:30367/TCP,443:31847/TCP   63m
ingress-nginx-controller-admission   ClusterIP      10.96.50.204   <none>        443/TCP                      63m
$ kubectl get services \
>    --namespace ingress-nginx \
>    ingress-nginx-controller \
>    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.18.0.6

$ curl -s http://172.18.0.6:80/foo
You've hit kubia-foo

$ curl -s http://172.18.0.6:80/bar
You've hit kubia-bar

$ curl -s http://172.18.0.6:80/baz
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

In order to reach the Ingress at the localhost address (curl http://localhost/foo), you would need to define extraPortMappings for ports 80 and 443 in the kind cluster configuration, as described in the kind documentation on Extra Port Mappings.

$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

Using a host

# ingress-hosts.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
  - host: bar.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-bar-svc
            port:
              number: 80
$ kubectl create -f ingress-hosts.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS                         ADDRESS     PORTS   AGE
kubia   <none>   foo.kubia.com,bar.kubia.com   localhost   80      103s
$ curl -s http://172.18.0.6
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

$ curl -s http://172.18.0.6 -H 'Host: foo.kubia.com'
You've hit kubia-foo

$ curl -s http://172.18.0.6 -H 'Host: bar.kubia.com'
You've hit kubia-bar
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

TLS

You can secure an Ingress by specifying a Secret that contains a TLS private key and certificate.

$ openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
............................................+++++
............+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -days 360 -subj //CN=foo.kubia.com

$ kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
secret/tls-secret created
# ingress-tls.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts:
      - foo.kubia.com
    secretName: tls-secret
  rules:
  - host: foo.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-foo-svc
            port:
              number: 80
$ kubectl create -f ingress-tls.yaml
ingress.networking.k8s.io/kubia created

$ kubectl get ingress/kubia
NAME    CLASS    HOSTS           ADDRESS     PORTS     AGE
kubia   <none>   foo.kubia.com   localhost   80, 443   2m13s

$ curl -sk https://172.18.0.6:443 -H 'Host: foo.kubia.com'
You've hit kubia-foo
$ kubectl delete ingress/kubia
ingress.networking.k8s.io "kubia" deleted

$ kubectl delete secret/tls-secret
secret "tls-secret" deleted

$ kubectl delete -f pod-foo-bar.yaml
pod "kubia-foo" deleted
pod "kubia-bar" deleted

$ kubectl delete -f service-foo-bar.yaml
service "kubia-foo-svc" deleted
service "kubia-bar-svc" deleted

Probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.

# pod-liveness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
$ kubectl create -f pod-liveness-probe.yaml
pod/kubia-liveness created

$ kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          42s

$ kubectl events pod/kubia-liveness
LAST SEEN          TYPE      REASON      OBJECT               MESSAGE
113s               Normal    Scheduled   Pod/kubia-liveness   Successfully assigned default/kubia-liveness to kind-worker3
112s               Normal    Pulling     Pod/kubia-liveness   Pulling image "luksa/kubia-unhealthy"
77s                Normal    Pulled      Pod/kubia-liveness   Successfully pulled image "luksa/kubia-unhealthy" in 34.865s (34.865s including waiting). Image size: 263841919 bytes.
77s                Normal    Created     Pod/kubia-liveness   Created container: kubia
77s                Normal    Started     Pod/kubia-liveness   Started container kubia
2s (x3 over 22s)   Warning   Unhealthy   Pod/kubia-liveness   Liveness probe failed: HTTP probe failed with statuscode: 500
2s                 Normal    Killing     Pod/kubia-liveness   Container kubia failed liveness probe, will be restarted

$ kubectl get po
NAME             READY   STATUS    RESTARTS      AGE
kubia-liveness   1/1     Running   1 (20s ago)   2m41s

readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the EndpointSlice controller removes the Pod's IP address from the EndpointSlices of all Services that match the Pod.

# pod-readiness-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kubia-readiness
  labels:
    app: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 10
      periodSeconds: 5
    ports:
    - containerPort: 8080
      name: http-web
# service-readiness-probe.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  type: LoadBalancer
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: http-web
$ kubectl create -f pod-readiness-probe.yaml
pod/kubia-readiness created

$ kubectl create -f service-readiness-probe.yaml
service/kubia-svc created

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   0/1     Running   0          23s

$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        23d
kubia-svc    LoadBalancer   10.96.150.51   172.18.0.7    80:31868/TCP   33s
$ kubectl exec kubia-readiness -- curl -s http://localhost:8080
You've hit kubia-readiness

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
command terminated with exit code 7

$ curl -sv http://172.18.0.7:80
*   Trying 172.18.0.7:80...
* Connected to 172.18.0.7 (172.18.0.7) port 80 (#0)
> GET / HTTP/1.1
> Host: 172.18.0.7
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
$ kubectl exec kubia-readiness -- touch tmp/ready

$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
kubia-readiness   1/1     Running   0          2m38s

$ kubectl exec kubia-readiness -- curl -s http://kubia-svc:80
You've hit kubia-readiness

$ curl -s http://172.18.0.7:80
You've hit kubia-readiness
$ kubectl delete -f pod-readiness-probe.yaml
pod "kubia-readiness" deleted

$ kubectl delete -f service-readiness-probe.yaml
service "kubia-svc" deleted

startupProbe

Indicates whether the application within the container is started. All other probes are disabled until the startup probe succeeds. If the startup probe fails, the kubelet kills the container. In the fragment below, the application gets up to failureThreshold × periodSeconds = 30 × 10 s = 300 s to start; once the startup probe succeeds, the much stricter liveness probe (failureThreshold: 1) takes over.

ports:
- name: liveness-port
  containerPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10

For more information about configuring probes, see Configure Liveness, Readiness and Startup Probes.

Volumes

Kubernetes volumes provide a way for containers in a pod to access and share data via the filesystem. Data sharing can be between different local processes within a container, or between different containers, or between Pods.

Kubernetes supports several types of volumes.

Ephemeral Volumes

Ephemeral volumes are temporary storage intrinsically tied to the lifecycle of a Pod. They are designed for scenarios where data persistence is not required beyond the life of a single Pod.

Kubernetes supports several different kinds of ephemeral volumes for different purposes: emptyDir, configmap, downwardAPI, secret, image, CSI

emptyDir

For a Pod that defines an emptyDir volume, the volume is created when the Pod is assigned to a node. The emptyDir volume is initially empty.

# pod-volume-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /tmp-cache
      name: tmp
  volumes:
  - name: tmp
    emptyDir: {}
$ kubectl create -f pod-volume-emptydir.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxrwxrwx   2 root root 4096 Aug 11 08:13 tmp-cache
$ kubectl delete -f pod-volume-emptydir.yaml
pod "nginx" deleted

You can also create an in-memory volume backed by the tmpfs filesystem:

  - name: tmp
    emptyDir:
      sizeLimit: 500Mi
      medium: Memory
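
Since an emptyDir lives at the pod level, it is also the simplest way to share files between containers in the same pod. A small sketch (pod name, container names and the busybox image are illustrative):

# pod-emptydir-shared.yaml

apiVersion: v1
kind: Pod
metadata:
  name: shared-dir
spec:
  containers:
  - name: writer
    image: busybox:1.28
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox:1.28
    command: ["sh", "-c", "sleep 5; cat /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}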

Projected Volumes

A projected volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected: secret, downwardAPI, configMap, serviceAccountToken, clusterTrustBundle
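
A hedged sketch of a projected volume that merges a ConfigMap key and downward-API metadata into one mount (the ConfigMap name is an assumption; ConfigMaps themselves are covered later in this article):

  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config
          items:
          - key: app.props
            path: app.props
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels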

Persistent Volumes

Persistent volumes offer durable storage, meaning the data stored within them persists even after the associated Pods are deleted, restarted, or rescheduled.

PersistentVolume

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins: csi, fc, iscsi, local, nfs, hostPath

hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.

# pod-volume-hostpath.yaml
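The manifest body is missing from the original post; below is a minimal sketch consistent with the transcript (an nginx pod with a hostPath volume mounted at /cache). The host path /tmp/nginx-cache and its type are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    hostPath:
      path: /tmp/nginx-cache    # assumed path on the node
      type: DirectoryOrCreate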

$ kubectl create -f pod-volume-hostpath.yaml
pod/nginx created

$ kubectl exec nginx -- ls -l | grep cache
drwxr-xr-x   2 root root 4096 Aug 11 12:27 cache
$ kubectl delete -f pod-volume-hostpath.yaml
pod "nginx" deleted
# pv-hostpath.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis
spec:
  capacity: 
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/redis
$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Available                          <unset>                          44s

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request for storage by a user. A PersistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.

# pvc-basic.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-redis
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 6s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Bound    default/pvc-redis                  <unset>                          28s
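
The pod-pvc.yaml manifest is referenced below but not shown in the original. A minimal sketch consistent with the transcript (redis:6.2, a volume named redis-rdb bound to the pvc-redis claim); the mountPath /data is an assumption based on the Redis image's default data directory:

# pod-pvc.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:6.2
    volumeMounts:
    - name: redis-rdb
      mountPath: /data    # assumed; Redis writes dump.rdb to its data directory
  volumes:
  - name: redis-rdb
    persistentVolumeClaim:
      claimName: pvc-redis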
$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po redis -o jsonpath='{.spec.volumes[?(@.name == "redis-rdb")]}'
{"name":"redis-rdb","persistentVolumeClaim":{"claimName":"pvc-redis"}}
$ kubectl exec redis -- redis-cli save
OK

$ kubectl get po redis -o jsonpath='{.spec.nodeName}'
kind-worker2

$ docker exec kind-worker2 ls -l tmp/redis
total 4
-rw------- 1 999 systemd-journal 102 Aug 11 14:47 dump.rdb
$ kubectl delete po/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          37m

$ kubectl create -f pvc-basic.yaml
persistentvolumeclaim/pvc-redis created

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 9s

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-redis   1Gi        RWO,ROX        Retain           Released   default/pvc-redis                  <unset>                          40m

$ kubectl create -f pod-pvc.yaml
pod/redis created

$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
redis   0/1     Pending   0          92s

$ kubectl events pod/redis
LAST SEEN             TYPE      REASON             OBJECT                            MESSAGE
37m                   Normal    Scheduled          Pod/redis                         Successfully assigned default/redis to kind-worker2
37m                   Normal    Pulling            Pod/redis                         Pulling image "redis:6.2"
37m                   Normal    Pulled             Pod/redis                         Successfully pulled image "redis:6.2" in 5.993s (5.993s including waiting). Image size: 40179474 bytes.
37m                   Normal    Created            Pod/redis                         Created container: redis
37m                   Normal    Started            Pod/redis                         Started container redis
6m57s                 Normal    Killing            Pod/redis                         Stopping container redis
2m4s                  Warning   FailedScheduling   Pod/redis                         0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
8s (x16 over 3m51s)   Normal    FailedBinding      PersistentVolumeClaim/pvc-redis   no persistent volumes available for this claim and no storage class is set
$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Pending                                                     <unset>                 61s

$ kubectl create -f pv-hostpath.yaml
persistentvolume/pv-redis created

$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-redis   Bound    pv-redis   1Gi        RWO,ROX                       <unset>                 2m2s
$ kubectl delete pod/redis
pod "redis" deleted

$ kubectl delete pvc/pvc-redis
persistentvolumeclaim "pvc-redis" deleted

$ kubectl delete pv/pv-redis
persistentvolume "pv-redis" deleted

Dynamic Volume Provisioning

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes.

StorageClass

A StorageClass provides a way for administrators to describe the classes of storage they offer.

$ kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  25d
# storageclass-local-path.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-redis
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: rancher.io/local-path
volumeBindingMode: Immediate
$ kubectl create -f storageclass-local-path.yaml
storageclass.storage.k8s.io/storageclass-redis created

$ kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  26d
storageclass-redis   rancher.io/local-path   Delete          Immediate              false                  5m43s
# pvc-sc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-redis
  annotations:
    volume.kubernetes.io/selected-node: kind-worker
spec:
  resources:
    requests:
      storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: storageclass-redis
$ kubectl create -f pvc-sc.yaml
persistentvolumeclaim/pvc-dynamic-redis created

$ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Pending                                      storageclass-redis   <unset>                 8s

$ kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-dynamic-redis   Bound    pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            storageclass-redis   <unset>                 26s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS         VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-0d78a617-e1ee-4d1e-8e59-37502fc711a9   512Mi      RWO            Delete           Bound    default/pvc-dynamic-redis   storageclass-redis   <unset>                          47s

$ kubectl delete sc/storageclass-redis
storageclass.storage.k8s.io "storageclass-redis" deleted

$ kubectl get pv
No resources found

ConfigMaps

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

Creating ConfigMaps

Imperative way

# application.properties
server.port=8080
spring.profiles.active=development
$ kubectl create configmap my-config \
    --from-literal=foo=bar \
    --from-file=app.props=application.properties
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      61s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |-
    # application.properties
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2025-09-15T20:20:44Z"
  name: my-config
  namespace: default
  resourceVersion: "3636455"
  uid: 9c68ecb1-55ca-469a-b09e-3e1b625cd69b
$ kubectl delete cm my-config
configmap "my-config" deleted

Declarative way

# cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
$ kubectl apply -f cm.yaml
configmap/my-config created
$ kubectl get cm/my-config
NAME        DATA   AGE
my-config   2      19s

$ kubectl get cm/my-config -o yaml
apiVersion: v1
data:
  app.props: |
    server.port=8080
    spring.profiles.active=development
  foo: bar
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"app.props":"server.port=8080\nspring.profiles.active=development\n","foo":"bar"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"my-config","namespace":"default"}}
  creationTimestamp: "2025-09-15T20:27:51Z"
  name: my-config
  namespace: default
  resourceVersion: "3637203"
  uid: a8d9fce1-f2bd-470c-93a2-3a7fcc560bbc

Using ConfigMaps

Consuming an environment variable by a reference key

# pod-cm-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "MY_VAR"]
      image: busybox:latest
      env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: foo
$ kubectl apply -f pod-cm-env.yaml
pod/env-configmap created

$ kubectl logs pod/env-configmap
bar
$ kubectl delete -f pod-cm-env.yaml
pod "env-configmap" deleted

Consuming all environment variables from the ConfigMap

# pod-cm-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-from-configmap
spec:
  containers:
    - name: app
      command: ["printenv", "config_foo"]
      image: busybox:latest
      envFrom:
        - prefix: config_
          configMapRef:
            name: my-config
$ kubectl apply -f pod-cm-envfrom.yaml
pod/env-from-configmap created

$ kubectl logs pod/env-from-configmap
bar
$ kubectl delete -f pod-cm-envfrom.yaml
pod "env-from-configmap" deleted

Using configMap volume

# pod-cm-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volumemount
spec:
  containers:
    - name: app
      command: ["cat", "/etc/props/app.props"]
      image: busybox:latest
      volumeMounts:
        - name: app-props
          mountPath: "/etc/props"
          readOnly: true
  volumes:
  - name: app-props
    configMap:
      name: my-config
$ kubectl apply -f pod-cm-volumemount.yaml
pod/configmap-volumemount created

$ kubectl logs pod/configmap-volumemount
server.port=8080
spring.profiles.active=development
$ kubectl delete -f pod-cm-volumemount.yaml
pod "configmap-volumemount" deleted

Using configMap volume with items

# pod-cm-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-items
spec:
  containers:
    - name: app
      command: ["cat", "/etc/configs/app.conf"]
      image: busybox:latest
      volumeMounts:
        - name: config
          mountPath: "/etc/configs"
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: my-config
        items:
          - key: foo
            path: app.conf
$ kubectl apply -f pod-cm-volume-items.yaml
pod/configmap-volume-items created

$ kubectl logs pod/configmap-volume-items
bar
$ kubectl delete -f pod-cm-volume-items.yaml
pod "configmap-volume-items" deleted

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Default Secrets in a Pod

# pod-basic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - image: luksa/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
$ kubectl apply -f pod-basic.yaml
pod/kubia created

$ kubectl get po/kubia -o=jsonpath='{.spec.containers[0].volumeMounts}'
[{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-jd9vq","readOnly":true}]

$ kubectl get po/kubia -o=jsonpath='{.spec.volumes[?(@.name == "kube-api-access-jd9vq")].projected.sources}'
[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"items":[{"key":"ca.crt","path":"ca.crt"}],"name":"kube-root-ca.crt"}},{"downwardAPI":{"items":[{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"},"path":"namespace"}]}}]

$ kubectl exec po/kubia -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

$ kubectl delete -f pod-basic.yaml
pod "kubia" deleted

Creating Secrets

Imperative way

Opaque Secrets
$ kubectl create secret generic empty-secret
secret/empty-secret created

$ kubectl get secret empty-secret
NAME           TYPE     DATA   AGE
empty-secret   Opaque   0      9s

$ kubectl get secret/empty-secret -o yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:19:07Z"
  name: empty-secret
  namespace: default
  resourceVersion: "6290557"
  uid: 031d7f8d-e96d-4e03-a90f-2cb96308354b
type: Opaque

$ kubectl delete secret/empty-secret
secret "empty-secret" deleted
$ openssl genrsa -out tls.key
Generating RSA private key, 2048 bit long modulus (2 primes)
...............................................................+++++
.................................+++++
e is 65537 (0x010001)

$ openssl req -new -x509 -key tls.key -out tls.crt -subj /CN=kubia.com

$ kubectl create secret generic kubia-secret --from-file=tls.key --from-file=tls.crt
secret/kubia-secret created

$ kubectl get secret/kubia-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:26:21Z"
  name: kubia-secret
  namespace: default
  resourceVersion: "6291327"
  uid: a06d4be4-3e21-47ea-8009-d300c1c449f9
type: Opaque

$ kubectl delete secret/kubia-secret
secret "kubia-secret" deleted
$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl get secret/test-secret -o yaml
apiVersion: v1
data:
  password: Mzk1MjgkdmRnN0pi
  username: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T18:21:28Z"
  name: test-secret
  namespace: default
  resourceVersion: "6297117"
  uid: 215daac1-7305-43f4-91c6-c7dbdeca2802
type: Opaque

$ kubectl delete secret/test-secret
secret "test-secret" deleted
TLS Secrets
$ kubectl create secret tls my-tls-secret --key=tls.key --cert=tls.crt
secret/my-tls-secret created

$ kubectl get secret/my-tls-secret -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlEQ1RDQ0FmR2dBd0lCQWdJVUxxWEJaRn...LS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQ0KTUlJRXBBSUJBQUtDQVFFQXR4UlRYMD...U2VQK3N3PT0NCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tDQo=
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:37:45Z"
  name: my-tls-secret
  namespace: default
  resourceVersion: "6292515"
  uid: f15b375e-2404-4ca0-a08f-014a0efeec70
type: kubernetes.io/tls

$ kubectl delete secret/my-tls-secret
secret "my-tls-secret" deleted
Docker config Secrets
$ kubectl create secret docker-registry my-docker-registry-secret --docker-username=robert --docker-password=passw123 --docker-server=nexus.registry.com:5000
secret/my-docker-registry-secret created

$ kubectl get secret/my-docker-registry-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJuZXh1cy5yZWdpc3RyeS5jb206NTAwMCI6eyJ1c2VybmFtZSI6InJvYmVydCIsInBhc3N3b3JkIjoicGFzc3cxMjMiLCJhdXRoIjoiY205aVpYSjBPbkJoYzNOM01USXoifX19
kind: Secret
metadata:
  creationTimestamp: "2025-11-05T17:44:10Z"
  name: my-docker-registry-secret
  namespace: default
  resourceVersion: "6293203"
  uid: c9d05ef7-8c8c-4e2b-bf6f-27f80a45d545
type: kubernetes.io/dockerconfigjson

$ kubectl delete secret/my-docker-registry-secret
secret "my-docker-registry-secret" deleted

Declarative way

Opaque Secrets
$ echo -n 'my-app' | base64
bXktYXBw

$ echo -n '39528$vdg7Jb' | base64
Mzk1MjgkdmRnN0pi
# opaque-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
$ kubectl apply -f opaque-secret.yaml
secret/opaque-secret created

$ kubectl get secrets
NAME          TYPE     DATA   AGE
opaque-secret   Opaque   2      4s

$ kubectl delete -f opaque-secret.yaml
secret "opaque-secret" deleted
Docker config Secrets
# dockercfg-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-dockercfg
type: kubernetes.io/dockercfg
data:
  .dockercfg: |
    eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=
$ kubectl apply -f dockercfg-secret.yaml
secret/secret-dockercfg created

$ kubectl get secrets
NAME               TYPE                      DATA   AGE
secret-dockercfg   kubernetes.io/dockercfg   1      3s

$ kubectl describe secret/secret-dockercfg
Name:         secret-dockercfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockercfg

Data
====
.dockercfg:  56 bytes

$ kubectl delete -f dockercfg-secret.yaml
secret "secret-dockercfg" deleted
Basic authentication Secret
# basicauth-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: pass1234
$ kubectl apply -f basicauth-secret.yaml
secret/secret-basic-auth created

$ kubectl get secrets
NAME                TYPE                       DATA   AGE
secret-basic-auth   kubernetes.io/basic-auth   2      3s

$ kubectl describe secret/secret-basic-auth
Name:         secret-basic-auth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/basic-auth

Data
====
password:  8 bytes
username:  5 bytes

$ kubectl delete -f basicauth-secret.yaml
secret "secret-basic-auth" deleted

Using Secrets

Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod.

$ kubectl create secret generic test-secret --from-literal='username=admin' --from-literal='password=39528$vdg7Jb'
secret/test-secret created

$ kubectl describe secret test-secret
Name:         test-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes

Using Secrets as files from a Pod

# pod-secret-volumemount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
$ kubectl apply -f pod-secret-volumemount.yaml
pod/secret-test-pod created

$ kubectl get pod secret-test-pod
NAME              READY   STATUS    RESTARTS   AGE
secret-test-pod   1/1     Running   0          30s

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
password
username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/{username,password}
==> /etc/secret-volume/username <==
admin
==> /etc/secret-volume/password <==
39528$vdg7Jb

$ kubectl delete -f pod-secret-volumemount.yaml
pod "secret-test-pod" deleted
Project Secret keys to specific file paths
# pod-secret-volume-items.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        items:
          - key: username
            path: my-group/my-username
$ kubectl apply -f pod-secret-volume-items.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- ls /etc/secret-volume
my-group

$ kubectl exec secret-test-pod -- ls /etc/secret-volume/my-group
my-username

$ kubectl exec secret-test-pod -- head /etc/secret-volume/my-group/my-username
admin

$ kubectl delete -f pod-secret-volume-items.yaml
pod "secret-test-pod" deleted

Using Secrets as environment variables

Define a container environment variable with data from a single Secret
# pod-secret-env-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      env:
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: test-secret
            key: password
$ kubectl apply -f pod-secret-env-var.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo $SECRET_PASSWORD'
39528$vdg7Jb

$ kubectl delete -f pod-secret-env-var.yaml
pod "secret-test-pod" deleted
Define all of the Secret's data as container environment variables
# pod-secret-envfrom.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
      - secretRef:
          name: test-secret
$ kubectl apply -f pod-secret-envfrom.yaml
pod/secret-test-pod created

$ kubectl exec secret-test-pod -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
username: admin
password: 39528$vdg7Jb

$ kubectl delete -f pod-secret-envfrom.yaml
pod "secret-test-pod" deleted
$ kubectl delete secrets test-secret
secret "test-secret" deleted

StatefulSet

TODO

