- Create an EC2 Linux instance (this demo uses Ubuntu). Keep everything at its defaults and add the user data script below, under the Advanced details section of the launch wizard, to install awscli and the s3fs utility
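As a sketch, a user data script along these lines installs both tools on Ubuntu (package names assumed from the standard Ubuntu repositories):

```
#!/bin/bash
# Runs as root on first boot: install the AWS CLI and s3fs
apt-get update -y
apt-get install -y awscli s3fs
```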
- Create an IAM user for s3fs
- Give the user a unique name and enable programmatic access
Under Set permissions, create a new policy.
Select S3 as the service and include the access levels below.
Give the policy a unique name and click Create policy.
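For reference, a minimal policy document granting s3fs the access it needs might look like the following. The bucket name `s3fs-demo-bucket` is a placeholder, and the exact actions you tick in the visual editor may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::s3fs-demo-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::s3fs-demo-bucket/*"
    }
  ]
}
```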
Once the policy is created, go back to the IAM tab and hit refresh so that the newly created policy appears in the list. Filter by the policy name and tick its checkbox to attach the policy to our IAM user.
Hit create user
Once the user is created, download the credentials. We are going to use them later.
- Log in to your EC2 instance
Go to your home directory and run the commands below to create a new directory and generate some sample files. The next step is to create an S3 bucket.
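For example (the directory name `s3-drive` is just a placeholder):

```shell
# create a working directory to mount the bucket into later
mkdir -p ~/s3-drive
cd ~/s3-drive
# generate a few sample files
for i in 1 2 3; do echo "sample file $i" > "file$i.txt"; done
ls
```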
- Go to the S3 service and create a new bucket. Give it a unique name and leave the rest of the settings at their defaults. Block public access to this bucket should be enabled by default.
Hit create bucket.
- Once the bucket is created, go back to the SSH session and configure our AWS credentials for authentication using the IAM account that we created.
Use the command below and provide the credential details that we downloaded earlier.
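The command in question is presumably `aws configure`; a typical session looks like this (the key values shown are placeholders, not real credentials):

```
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json
```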
- Create the credential file for s3fs.
s3fs supports the standard AWS credentials file stored in
The file should have the following content:
You can run the command below as well:
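The s3fs-fuse README documents an s3fs-specific credential file, `${HOME}/.passwd-s3fs`, in `ACCESS_KEY_ID:SECRET_ACCESS_KEY` format, which must not be readable by other users. A sketch (the key values are placeholders):

```shell
# write placeholder credentials in ACCESS_KEY_ID:SECRET_ACCESS_KEY format
echo "AKIAXXXXXXXXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" > "$HOME/.passwd-s3fs"
# s3fs refuses credential files with open permissions
chmod 600 "$HOME/.passwd-s3fs"
```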
- Now you can run the command below to mount the S3 bucket as a filesystem.
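Assuming the bucket and mount-point names used earlier (both placeholders), the mount command would look roughly like:

```
s3fs s3fs-demo-bucket ~/s3-drive -o passwd_file=${HOME}/.passwd-s3fs
```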
- Once it is mounted successfully, you can verify it by running the command below.
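For example, `df` or `mount` will list a `fuse.s3fs` filesystem at the mount point if the mount succeeded (paths are the placeholders from earlier):

```
df -h ~/s3-drive
mount | grep s3fs
```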
- Add the entry in fstab using the line below so that the mount persists across server reboots as well:
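The s3fs-fuse README documents an fstab entry of roughly this shape (bucket name and paths are placeholders; fstab requires absolute paths, and `_netdev` delays mounting until the network is up):

```
s3fs-demo-bucket /home/ubuntu/s3-drive fuse.s3fs _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs 0 0
```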
- Now the moment of truth: go to your S3 bucket and hit refresh. You should see the files that were present in your filesystem.
- Let's now verify whether it syncs properly after an object is deleted or added.
Go to your S3 bucket and upload a new file.
Go to your SSH session and run `ls` in the same directory.
You can test the delete operation the same way. And it works both ways: if you perform any file operation on your filesystem, it syncs to your S3 bucket as well.