
How To Create A Persistent Volume In A Kubernetes Pod

This is another one of my Kubernetes efforts, and I wanted to publish the steps I followed to provide a persistent storage volume to a Kubernetes Pod. Let's see how to create a persistent volume for a Kubernetes Pod in this article. If you are just starting with Kubernetes, my previous Kubernetes deployment article would be a great resource, so check it out.

The persistent volume subsystem provides the API to administer the provided storage and its usage. There are two API resources involved, and we have to work with both of them in order to provide a working storage volume to running Kubernetes Pods.

Kubernetes PersistentVolume

A "PersistentVolume" is a piece of storage provisioned by an administrator or dynamically allocated through a Storage Class. PV is the abbreviation for PersistentVolume, and PVs are much like regular volumes but with a lifecycle independent of any Pod.

Kubernetes PersistentVolumeClaims

A "PersistentVolumeClaim" is a request for storage, much as a Pod is a request for compute resources. PVC is the common abbreviation for PersistentVolumeClaim, and a claim can request a specific size and access modes such as read/write by a single node or read-only by many nodes.
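For reference, these are the standard access modes and the short forms kubectl prints in its ACCESS MODES column. The tiny Python helper below is just my own illustration of that mapping, not part of any Kubernetes tooling:

```python
# Standard Kubernetes access modes and the abbreviations kubectl displays.
# (These names come from the Kubernetes documentation; the helper is illustrative.)
ACCESS_MODES = {
    "ReadWriteOnce": "RWO",  # volume mounted read-write by a single node
    "ReadOnlyMany": "ROX",   # volume mounted read-only by many nodes
    "ReadWriteMany": "RWX",  # volume mounted read-write by many nodes
}

def abbreviate(mode: str) -> str:
    """Return the short form kubectl prints for a given access mode."""
    return ACCESS_MODES[mode]

print(abbreviate("ReadWriteOnce"))  # RWO
```

The PV and PVC in this article both use ReadWriteOnce, which fits a single-node hostPath volume.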

Let’s see the action in my lab

If you have been following along, I previously created a Kubernetes cluster with four nodes: one master and three worker nodes. I'm using the same four-node cluster to create my PV and PVC.

I created the below YAML file to create a PV. It claims the local /mnt/data path as storage, 1 GiB in size, with the ReadWriteOnce access mode.

--- 
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: mongodb-pv
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 1Gi
  hostPath: 
    path: /mnt/data
  storageClassName: local-storage

I saved the file and created the PV with the below command:

kubectl apply -f PATH_TO_FILE

To see the status of the PV, I used the below command:

kubectl get pv

Here is the output of the PV creation:

[Screenshot: kubectl get pv output]
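A side note on the storageClassName field above: for static provisioning like this, the class name mainly acts as a matching label between the PV and the PVC, so the StorageClass object itself isn't strictly required. If you do want the class to exist in the cluster, a minimal sketch could look like the following (the provisioner and binding mode here are my assumptions for a local-storage setup, not taken from the original walkthrough):

``` 
--- 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata: 
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning for local volumes
volumeBindingMode: WaitForFirstConsumer     # delay binding until a Pod uses the claim
```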

Next I created the PVC, which I used to claim the allocated storage for my Pod. I used the same kubectl apply command to create the PVC from its file. This is the YAML file for my PVC:

--- 
apiVersion: v1
kind: PersistentVolumeClaim
metadata: 
  name: mongodb-pvc
spec: 
  accessModes: 
    - ReadWriteOnce
  resources: 
    requests: 
      storage: 1Gi
  storageClassName: local-storage

I used the below command to get the status of the PVC; you can see the status, volume name, capacity, access mode and storage class in the output:

kubectl get pvc
[Screenshot: kubectl get pvc output]

A Pod needs to be created in order to consume the claimed storage. I created a Pod named "mongodb" with the mount path /data/db inside the container; the host's /mnt/data directory is mapped to the /data/db path in the container. Here is my YAML file for the Pod:

--- 
apiVersion: v1
kind: Pod
metadata: 
  name: mongodb
spec: 
  containers: 
    - 
      image: mongo
      name: mongodb
      ports: 
        - 
          containerPort: 27017
          protocol: TCP
      volumeMounts: 
        - 
          mountPath: /data/db
          name: mongodb-data
  volumes: 
    - 
      name: mongodb-data
      persistentVolumeClaim: 
        claimName: mongodb-pvc

The container volume mapping was specified as below:

[Screenshot: container volume mounts]
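As an aside, kubectl apply also accepts JSON, so manifests like the three above can be generated and sanity-checked in code before applying them. The Python sketch below is my own illustration of that idea (the cross-reference checks are mine, not part of the original workflow); it rebuilds the same PV, PVC and Pod and verifies that the names line up:

```python
import json

# The three manifests from this article, as Python dicts.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "mongodb-pv"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "capacity": {"storage": "1Gi"},
        "hostPath": {"path": "/mnt/data"},
        "storageClassName": "local-storage",
    },
}
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "mongodb-pvc"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
        "storageClassName": "local-storage",
    },
}
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "mongodb"},
    "spec": {
        "containers": [{
            "image": "mongo",
            "name": "mongodb",
            "ports": [{"containerPort": 27017, "protocol": "TCP"}],
            "volumeMounts": [{"mountPath": "/data/db", "name": "mongodb-data"}],
        }],
        "volumes": [{
            "name": "mongodb-data",
            "persistentVolumeClaim": {"claimName": "mongodb-pvc"},
        }],
    },
}

# Sanity checks: the PV and PVC must agree on the storage class,
# and the Pod must reference the PVC by its exact name.
assert pv["spec"]["storageClassName"] == pvc["spec"]["storageClassName"]
assert pod["spec"]["volumes"][0]["persistentVolumeClaim"]["claimName"] == pvc["metadata"]["name"]

# JSON output can be piped straight into `kubectl apply -f -`.
print(json.dumps(pv, indent=2))
```

This kind of check catches the classic typo where a Pod's claimName doesn't match any PVC and the Pod stays stuck in Pending.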

I checked the Pod status; it took a few seconds for the Pod to come up and run.

[Screenshot: running Pods]

I noted the node the Pod was running on with the below command; since I used local (hostPath) storage, I wanted to make sure the same content would still be there after deleting the Pod and creating a new one.

kubectl get pods -o wide

I accessed the container's shell and verified the content of the mount location.

To access the shell, I used the below command

kubectl exec POD_NAME -it -- sh

I browsed the location and checked the content:

ls /data/db

I created a file in the same location; it exists only to track the data across Pod re-creation:

touch /data/db/TC_TEST_FILE.txt
[Screenshot: files in /data/db]

Then I checked the /mnt/data location on node 4; that is why I noted the Pod's running node in the previous step.

[Screenshot: TC_TEST_FILE.txt visible under /mnt/data on the node]

Deleting the POD, PVC and PV

I deleted the Pod, PVC and PV to check the availability of the data. Note that a statically created PV defaults to the "Retain" reclaim policy, so the files under /mnt/data on the node are left intact even after these objects are deleted. To delete the Pod, I ran the below command:

kubectl delete pod POD_NAME 

To delete the PVC, I used the below command:

kubectl delete pvc PVC_NAME

Finally, I ran the below command to delete the PV:

kubectl delete pv PV_NAME

Here are the complete steps I followed in my cluster; I used the kubectl get commands to find the exact object names:

[Screenshot: deleting the Pod, PVC and PV]

In my cluster I had three worker nodes, and I wanted to run the Pod again on the same node. So I disabled Pod scheduling on the other workers by draining them.

I used the below command, ignoring the DaemonSets:

kubectl drain NODE_NAME --ignore-daemonsets
[Screenshot: node drain output]

I followed the same procedure for all the worker nodes except node 4, where I needed the Pod to be scheduled again. Checking the status of the nodes, you can see the scheduling status shown as "SchedulingDisabled".

[Screenshot: node status showing SchedulingDisabled]

I created the PV, PVC and the Pod again using the same YAML files, and checked the files using the container shell. All the files were still in the same location.

[Screenshot: re-created PV, PVC and Pod]
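As a side note, instead of cordoning every other node, the Pod could be pinned to a specific node directly with a nodeSelector. This is a hypothetical variation of the Pod spec above, using the well-known kubernetes.io/hostname label (NODE_NAME stands for the target node's hostname; this was not part of my original steps):

``` 
spec: 
  nodeSelector: 
    kubernetes.io/hostname: NODE_NAME   # schedule only on this node
```

This fragment goes into the Pod's spec alongside the containers and volumes sections, and it avoids having to drain and uncordon the rest of the cluster.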

If you need to re-enable scheduling, just uncordon the nodes using the below command, and they will be available to run Pods again.

kubectl uncordon NODE_NAME
[Screenshot: uncordon output]

I hope this article helps you understand Persistent Volumes in Kubernetes Pods.


Aruna Fernando

"Sharing knowledge doesn't put your job at risk - iron sharpen iron" I heard this and it's true.

