1. Aradhya Pitlawar (ThunderSmoker)
2. Tushar Rathod (calto16)
3. Jay Shirgupe (Jay7221)
This project implements a Container Storage Interface (CSI) driver for mounting S3-compatible object storage into Kubernetes pods as a filesystem via FUSE.
- Kubernetes 1.17+
- Kubernetes must allow privileged containers
- The Docker daemon must allow shared mounts (systemd flag `MountFlags=shared`)
- A running Minio server (refer to this gist)
To allow shared mounts in the Docker daemon, follow these steps:

- Locate the Docker systemd unit file: it is usually located at `/etc/systemd/system/docker.service` or `/lib/systemd/system/docker.service`. Alternatively, run `systemctl cat docker.service` to find the file path.
- Edit the Docker systemd unit file: open it in a text editor with sudo permissions.
- Add the `MountFlags=shared` option to the `[Service]` section of the unit file. If the `MountFlags` option already exists, append `shared` to the list of flags:

  ```
  [Service]
  ...
  MountFlags=shared
  ...
  ```
After making changes, reload the systemd configuration:

```
sudo systemctl daemon-reload
```

Then restart the Docker service to apply the new configuration:

```
sudo systemctl restart docker
```
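To confirm that mount propagation is actually shared after the restart, you can inspect `/proc/self/mountinfo`. This is a generic Linux check, not something from the driver itself:

```shell
# Print the optional-fields column for the root mount.
# A value like "shared:1" means mount events propagate, which csi-s3 needs;
# a bare "-" means no propagation flags are set on that mount.
awk '$5 == "/" { print $7 }' /proc/self/mountinfo
```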
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  # For AWS set it to "https://s3.<region>.amazonaws.com", for example https://s3.eu-central-1.amazonaws.com
  endpoint: <https://example.net>
  # For AWS set it to the AWS region
  #region: ""
```
The region can be empty if you are using some other S3 compatible storage.
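As a concrete illustration, a secret pointing at a local Minio deployment might look like the sketch below. The endpoint, namespace, and credentials are placeholders for this example, not values the driver requires:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
stringData:
  accessKeyID: minioadmin            # placeholder Minio credential
  secretAccessKey: minioadmin        # placeholder Minio credential
  # placeholder in-cluster Minio endpoint; substitute your own service address
  endpoint: http://minio.default.svc.cluster.local:9000
  # region can stay unset for Minio and most S3-compatible backends
```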
Create the secret defined above, then deploy the driver:

```
cd deploy/
kubectl create -f csi-secret.yaml
kubectl create -f provisioner.yaml
kubectl create -f driver.yaml
kubectl create -f csi-s3.yaml
```
- Create a storage class:

  ```
  kubectl create -f pod-configuration/storageclass.yaml
  ```

- Create a PVC using the new storage class:

  ```
  kubectl create -f pod-configuration/pvc.yaml
  ```

- Check if the PVC has been bound:

  ```
  $ kubectl get pvc csi-s3-pvc
  NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  csi-s3-pvc   Bound    pvc-c5d4634f-8507-11e8-9f33-0e243832354b   2Gi        RWO            csi-s3         9s
  ```

- Create a test pod which mounts your volume:

  ```
  kubectl create -f pod-configuration/pod.yaml
  ```

  If the pod can start, everything should be working.

- Test the mount:

  ```
  $ kubectl exec -ti csi-s3-test-nginx -- bash
  $ mount | grep fuse
  pvc-035763df-0488-4941-9a34-f637292eb95c: on /usr/share/nginx/html/s3 type fuse.geesefs (rw,nosuid,nodev,relatime,user_id=65534,group_id=0,default_permissions,allow_other)
  $ touch /mnt/s3/hello_world
  ```
If something does not work as expected, check the troubleshooting section below.
By default, csi-s3 will create a new bucket per volume. The bucket name will match that of the volume ID. If you want your volumes to live in a precreated bucket, you can simply specify the bucket in the storage class parameters:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-existing-bucket
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  bucket: some-existing-bucket-name
```
If the bucket is specified, it will still be created if it does not exist on the backend. Every volume gets its own prefix within the bucket, matching the volume ID. When a volume is deleted, only its prefix is deleted; the bucket itself is left intact.
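A PVC that consumes such a storage class could look like the following sketch. The claim name and size are arbitrary examples; `storageClassName` must match the class you defined:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-existing-bucket-pvc   # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                   # arbitrary example size
  storageClassName: csi-s3-existing-bucket
```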
We are using GeeseFS.
- Almost full POSIX compatibility
- Good performance for both small and big files
- Does not store file permissions and custom modification times
- By default runs outside of the csi-s3 container using systemd, so that mountpoints do not crash with "Transport endpoint is not connected" when csi-s3 is upgraded or restarted. Add `--no-systemd` to `parameters.options` of the `StorageClass` to disable this behaviour.
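For example, a storage class that disables the systemd behaviour could add the flag to the options string shown earlier (the class name here is an arbitrary example):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-no-systemd            # example name
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # --no-systemd makes GeeseFS run inside the csi-s3 container instead of via systemd
  options: "--no-systemd --memory-limit 1000 --dir-mode 0777 --file-mode 0666"
```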
Check the logs of the provisioner:

```
kubectl logs -l app=csi-provisioner-s3 -c csi-s3
```

- Ensure the feature gate `MountPropagation` is not set to `false`
- Check the logs of the s3-driver:

  ```
  kubectl logs -l app=csi-s3 -c csi-s3
  ```