Ref: https://kubernetes.io/docs/concepts/storage/volumes/
For EKS storage: https://repost.aws/knowledge-center/eks-persistent-storage
=================================Empty Dir=================================
In Kubernetes, an emptyDir volume is a temporary storage volume that is created empty when a Pod is assigned to a node. It exists as long as that Pod is running on that node, and its data is deleted when the Pod is removed from the node.
emptyDir volumes are useful for temporary storage within a single Pod, such as caching, sharing data between containers in the same Pod, or holding scratch files during a job's execution.
Remember that the data in an emptyDir volume is ephemeral and tied to the Pod's lifecycle: if the Pod is deleted or rescheduled, the data is lost. If you need storage that survives Pod restarts or rescheduling, use PersistentVolumes and PersistentVolumeClaims instead.
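Here's a minimal sketch of two containers sharing an emptyDir volume; the Pod name, images, and commands are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # illustrative name
spec:
  containers:
  - name: writer
    image: busybox               # assumed image for the sketch
    command: ["sh", "-c", "echo hello > /cache/msg; sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5; cat /cache/msg; sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                 # deleted when the Pod is removed from the node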
=========================================================================
==========================Host Path====================================
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
Warning (Ref: https://kubernetes.io/docs/concepts/storage/volumes/):
Using the hostPath volume type presents many security risks. If you can avoid using a hostPath volume, you should. For example, define a local PersistentVolume, and use that instead.
If you are restricting access to specific directories on the node using admission-time validation, that restriction is only effective when you additionally require that any mounts of that hostPath volume are read only. If you allow a read-write mount of any host path by an untrusted Pod, the containers in that Pod may be able to subvert the read-write host mount.
Take care when using hostPath volumes, whether these are mounted as read-only or as read-write, because:
Access to the host filesystem can expose privileged system credentials (such as for the kubelet) or privileged APIs (such as the container runtime socket), that can be used for container escape or to attack other parts of the cluster.
Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes.
Some uses for a hostPath are:
running a container that needs access to node-level system components (such as a container that transfers system logs to a central location, accessing those logs using a read-only mount of /var/log)
making a configuration file stored on the host system available read-only to a static pod; unlike normal Pods, static Pods cannot access ConfigMaps
deploying node-specific files from the host into a Pod
Running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
Allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
Ex: mounting the Docker daemon socket via hostPath (alongside an NFS-backed PVC):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox               # assumed image for the sketch
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs
      mountPath: "/mountpath"
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs             # the PVC must already exist
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
=========================Storage class================================
https://repost.aws/knowledge-center/eks-persistent-storage
Without a StorageClass, if you want to use AWS EBS:
Create an EBS volume manually and tag it with KubernetesCluster = <your-cluster-name> so the cluster can identify it.
The volume can only be attached to an EC2 instance (node) in the same AZ where it was created.
This approach ties you to a specific volume ID, and the node must be in the same AZ and region as the volume, since EBS volumes are AZ- and region-specific.
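Under that manual approach, a PersistentVolume can point at the pre-created volume directly. A minimal sketch assuming the AWS EBS CSI driver is installed; the volume ID, size, and zone below are placeholder assumptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-ebs-pv
spec:
  capacity:
    storage: 10Gi                        # should match the EBS volume's size
  accessModes:
  - ReadWriteOnce                        # EBS attaches to a single node
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0  # placeholder volume ID
    fsType: ext4
  nodeAffinity:                          # pin the PV to the volume's AZ
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone
          operator: In
          values:
          - us-east-1a                   # placeholder AZ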
For EKS EBS attachment, see:
https://www.stacksimplify.com/aws-eks/kubernetes-storage/create-kubernetes-storageclass-persistentvolumeclain-configmap-for-mysql-database/
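With a StorageClass, dynamic provisioning avoids the manual volume creation entirely. A minimal sketch using the AWS EBS CSI driver (the names and size are illustrative assumptions):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer  # provision in the AZ where the Pod is scheduled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi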
-----------------------------eks ebs volume--------------------------------------
AWS EBS volumes do not support the ReadWriteMany access mode in Kubernetes, which means you cannot directly share an EBS volume across multiple Pods on different nodes simultaneously. EBS volumes are designed for the ReadWriteOnce access mode, meaning they can be attached to a single node at a time.
In other words, an EBS volume serves only one node at a time; if you want to share storage across multiple Pods, use EFS/NFS instead.
If you need to share data between multiple pods, you may consider alternatives like using Amazon EFS (Elastic File System) or configuring shared storage at the application layer.
If you still want to use EBS and share data between pods, you could consider using a network file system (NFS) server. You can set up an EC2 instance with an attached EBS volume, configure it as an NFS server, and then mount the NFS share in your pods. This, however, adds complexity and potential performance overhead.
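For illustration, a Pod would mount such an NFS share like this (the server IP and export path are placeholder assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod
spec:
  containers:
  - name: app
    image: busybox               # assumed image for the sketch
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    nfs:
      server: 10.0.0.10          # placeholder: the NFS server's private IP
      path: /exports             # placeholder: the exported directory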
------------------------------------------------EFS---------------------------------------------------------------
For multi-Availability-Zone deployments, EFS is the best option.
To share an EFS volume across multiple Pods, you need to ensure that the PersistentVolumeClaim (PVC) is configured with the appropriate access mode, such as ReadWriteMany. Here's an example YAML file that creates an EFS StorageClass, a PersistentVolumeClaim with ReadWriteMany access mode, and a Pod that uses the PVC:
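A minimal sketch, assuming the AWS EFS CSI driver is installed and using dynamic provisioning via EFS access points; the file system ID is a placeholder:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
  - ReadWriteMany                      # multiple nodes can mount simultaneously
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                     # required by the API; EFS is elastic
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: busybox                     # assumed image for the sketch
    command: ["sh", "-c", "echo hello > /data/out; sleep 3600"]
    volumeMounts:
    - name: efs-storage
      mountPath: /data
  volumes:
  - name: efs-storage
    persistentVolumeClaim:
      claimName: efs-claim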