Kubernetes Session 7 - Storage (emptyDir, hostPath, PV, PVC)

Ref: https://kubernetes.io/docs/concepts/storage/volumes/

For EKS storage: https://repost.aws/knowledge-center/eks-persistent-storage

=================================emptyDir=================================

In Kubernetes, an emptyDir volume is a temporary volume that is created empty when a Pod is assigned to a node. It exists as long as that Pod runs on that node, and its data is deleted when the Pod is removed from the node.

emptyDir volumes are useful when you need temporary storage within a single Pod, such as sharing data between containers in the same Pod, caching, or holding scratch files during the execution of a job.

Remember that emptyDir data is ephemeral and tied to the Pod's lifecycle: if the Pod is deleted or rescheduled, the data is lost. If you need storage that survives Pod restarts or rescheduling, use PersistentVolumes and PersistentVolumeClaims instead.
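
Example: an nginx container and a busybox sidecar sharing an emptyDir volume; busybox writes a page that nginx then serves: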

# nginx-busybox-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-busybox-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-busybox
  template:
    metadata:
      labels:
        app: nginx-busybox
    spec:
      containers:
      - name: nginx-container
        image: nginx
        volumeMounts:
        - name: shared-volume
          mountPath: /usr/share/nginx/html
      - name: busybox-container
        image: busybox
        command: ["/bin/sh", "-c", "while true; do echo 'Hello from BusyBox' > /usr/share/nginx/html/busybox-index.html; sleep 10; done"]
        volumeMounts:
        - name: shared-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: shared-volume
        emptyDir: {}
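
To verify that the two containers share the volume (a quick check, assuming the Deployment above is saved as nginx-busybox-deployment.yaml):

kubectl apply -f nginx-busybox-deployment.yaml
kubectl exec deploy/nginx-busybox-deployment -c nginx-container -- cat /usr/share/nginx/html/busybox-index.html
# Expected output: Hello from BusyBox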

=========================================================================

==========================hostPath====================================

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

Warning (Ref: https://kubernetes.io/docs/concepts/storage/volumes/):

Using the hostPath volume type presents many security risks. If you can avoid using a hostPath volume, you should. For example, define a local PersistentVolume, and use that instead.

If you are restricting access to specific directories on the node using admission-time validation, that restriction is only effective when you additionally require that any mounts of that hostPath volume are read only. If you allow a read-write mount of any host path by an untrusted Pod, the containers in that Pod may be able to subvert the read-write host mount.

Take care when using hostPath volumes, whether these are mounted as read-only or as read-write, because:

Access to the host filesystem can expose privileged system credentials (such as for the kubelet) or privileged APIs (such as the container runtime socket), that can be used for container escape or to attack other parts of the cluster.

Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes.

Some uses for a hostPath are:

running a container that needs access to node-level system components (such as a container that transfers system logs to a central location, accessing those logs using a read-only mount of /var/log)
making a configuration file stored on the host system available read-only to a static Pod; unlike normal Pods, static Pods cannot access ConfigMaps
deploying node-specific files into a Pod
running a container that needs access to Docker internals, using a hostPath of /var/lib/docker
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as

Example:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: DirectoryOrCreate

The same nginx + busybox Deployment, extended with a hostPath volume:

# nginx-busybox-deployment-with-hostpath.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-busybox-deployment-with-hostpath
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-busybox
  template:
    metadata:
      labels:
        app: nginx-busybox
    spec:
      containers:
      - name: nginx-container
        image: nginx
        volumeMounts:
        - name: shared-volume
          mountPath: /usr/share/nginx/html
      - name: busybox-container
        image: busybox
        command: ["/bin/sh", "-c", "while true; do echo 'Hello from BusyBox' > /usr/share/nginx/html/busybox-index.html; sleep 10; done"]
        volumeMounts:
        - name: shared-volume
          mountPath: /usr/share/nginx/html
        - name: hostpath-volume
          mountPath: /data
      volumes:
      - name: shared-volume
        emptyDir: {}
      - name: hostpath-volume
        hostPath:
          path: /hostpath-data
          type: DirectoryOrCreate
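
After applying, anything the busybox container writes under /data should appear under /hostpath-data on the node running the Pod (a quick check, assuming shell access to that node):

kubectl apply -f nginx-busybox-deployment-with-hostpath.yaml
kubectl exec deploy/nginx-busybox-deployment-with-hostpath -c busybox-container -- sh -c 'echo test > /data/test.txt'
# then, on the node itself:
cat /hostpath-data/test.txt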

A Docker/containerd example for hostPath (a Pod-spec snippet that mounts an NFS-backed PVC alongside the host's Docker socket):

volumeMounts:
- name: nfs
  mountPath: "/mountpath"
- name: dockersock
  mountPath: "/var/run/docker.sock"
volumes:
- name: nfs
  persistentVolumeClaim:
    claimName: nfs
- name: dockersock
  hostPath:
    path: /var/run/docker.sock

=========================StorageClass================================

Ref: https://repost.aws/knowledge-center/eks-persistent-storage

To use an AWS EBS volume without a StorageClass, create the EBS volume manually and tag it for your cluster, for example:

KubernetesCluster = <your-cluster-name>

The volume can then be attached only to an EC2 instance (node) in the same Availability Zone in which it was created.

This approach ties you to a specific volume ID, and the node must be in the same AZ and region as the volume, since EBS volumes are AZ-specific.
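
A minimal sketch of consuming such a manually created volume through a static PersistentVolume. This uses the legacy in-tree awsElasticBlockStore plugin and a placeholder volume ID; on current clusters with the EBS CSI driver you would instead reference the volume through a csi: block with driver ebs.csi.aws.com:

# static-ebs-pv-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0abcd1234ef567890   # placeholder: your EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-ebs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""   # empty string: skip dynamic provisioning, bind to the PV above
  resources:
    requests:
      storage: 10Gi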

For EKS EBS attachment, see:

https://www.stacksimplify.com/aws-eks/kubernetes-storage/create-kubernetes-storageclass-persistentvolumeclain-configmap-for-mysql-database/

-----------------------------eks ebs volume--------------------------------------

---  
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: ebs-pod
spec:
  containers:
    - name: ebs-container
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: ebs-volume
  volumes:
    - name: ebs-volume
      persistentVolumeClaim:
        claimName: ebs-pvc
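
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until ebs-pod is scheduled; the EBS volume is then provisioned in that node's Availability Zone. A quick check:

kubectl get pvc ebs-pvc   # Pending until the Pod is scheduled, then Bound
kubectl get pv            # a dynamically provisioned volume appears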

AWS EBS volumes do not support ReadWriteMany access mode in Kubernetes, which means you cannot directly share an EBS volume across multiple pods simultaneously. EBS volumes are designed for ReadWriteOnce access mode, meaning they can be attached to a single node at a time.

In other words, an EBS volume serves a single node at a time; if multiple Pods on different nodes must share the same data, use EFS or NFS instead.

If you need to share data between multiple pods, you may consider alternatives like using Amazon EFS (Elastic File System) or configuring shared storage at the application layer.

If you still want to use EBS and share data between pods, you could consider using a network file system (NFS) server. You can set up an EC2 instance with an attached EBS volume, configure it as an NFS server, and then mount the NFS share in your pods. This, however, adds complexity and potential performance overhead.
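
If you do set up such an NFS server, Pods can mount the export directly with the built-in nfs volume type (a sketch; the server address 10.0.0.10 and export path /exports are placeholders):

# nfs-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: nfs-share
  volumes:
    - name: nfs-share
      nfs:
        server: 10.0.0.10   # placeholder: your NFS server address
        path: /exports      # placeholder: the exported directory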

------------------------------------------------EFS---------------------------------------------------------------

For multi-Availability-Zone deployments, EFS is the best option.

To share an EFS volume across multiple pods, you need to ensure that the PersistentVolumeClaim (PVC) is configured with the appropriate access mode, such as ReadWriteMany. Here's an example YAML file that creates an EFS StorageClass, a PersistentVolumeClaim with ReadWriteMany access mode, and a Pod that uses the PVC:

# dynamic-efs-pvc-pod.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # required by the EFS CSI driver for dynamic provisioning
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

---

apiVersion: v1
kind: Pod
metadata:
  name: my-efs-pod-1
spec:
  volumes:
    - name: my-efs-volume
      persistentVolumeClaim:
        claimName: my-efs-pvc
  containers:
    - name: my-efs-container
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: my-efs-volume

---

apiVersion: v1
kind: Pod
metadata:
  name: my-efs-pod-2
spec:
  volumes:
    - name: my-efs-volume
      persistentVolumeClaim:
        claimName: my-efs-pvc
  containers:
    - name: my-efs-container
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: my-efs-volume
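
To confirm both Pods see the same EFS-backed data (a quick check):

kubectl exec my-efs-pod-1 -- sh -c 'echo hello-from-pod-1 > /data/shared.txt'
kubectl exec my-efs-pod-2 -- cat /data/shared.txt
# Expected output: hello-from-pod-1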


