CASMPET-7209 add instructions to mount admin-tools bucket if desired (#5314)

* CASMPET-7209 add instructions to mount admin-tools bucket if desired

* CASMPET-7209 add script to mount admin-tools bucket

* CASMPET-7209 Apply script improvement suggestions from code review

Co-authored-by: Russell Bunch <doomslayer@hpe.com>
Signed-off-by: Lindsay Eliasen <87664908+leliasen-hpe@users.noreply.github.com>

* Apply suggestions from code review

Signed-off-by: Russell Bunch <doomslayer@hpe.com>

* CASMPET-7209 fix syntax in mount-admin-tools-bucket.sh

---------

Signed-off-by: Lindsay Eliasen <87664908+leliasen-hpe@users.noreply.github.com>
Signed-off-by: Russell Bunch <doomslayer@hpe.com>
Co-authored-by: Russell Bunch <doomslayer@hpe.com>
leliasen-hpe and rustydb authored Sep 6, 2024
1 parent 0aed5a3 commit 18b44e9
Showing 2 changed files with 124 additions and 3 deletions.
21 changes: 18 additions & 3 deletions operations/utility_storage/Troubleshoot_S3FS_Mounts.md
@@ -8,10 +8,12 @@
Master nodes should host the following three mount points:

```bash
/var/opt/cray/config-data (config-data S3 bucket)
/var/lib/admin-tools (admin-tools S3 bucket)
/var/opt/cray/sdu/collection-mount (sds S3 bucket)
```

> NOTE: The mount `/var/lib/admin-tools` (`admin-tools` S3 bucket) is no longer created in CSM 1.4 and later.
> To mount this bucket anyway, see [Mount 'admin-tools' S3 bucket](#mount-admin-tools-s3-bucket).

## Worker Node Mount Points

Worker nodes should host the following mount point:
@@ -31,7 +33,6 @@
Run the following command on master nodes to ensure the mounts are present:
```bash
ncn-m: # mount | grep 's3fs on'
s3fs on /var/opt/cray/config-data type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
s3fs on /var/lib/admin-tools type fuse.s3fs (rw,relatime,user_id=0,group_id=0,allow_other)
s3fs on /var/opt/cray/sdu/collection-mount type fuse.s3fs (rw,relatime,user_id=0,group_id=0,allow_other)
```
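The check above can also be scripted. The following is a minimal sketch; the expected mount points are taken from the list earlier in this document (`admin-tools` is omitted because it is unmounted by default in CSM 1.4 and later), and should be adjusted to match the system.

```shell
# Sketch: verify each expected s3fs mount point on a master node.
# Mount points are taken from the documentation above; adjust as needed.
expected_mounts="/var/opt/cray/config-data /var/opt/cray/sdu/collection-mount"
for dir in $expected_mounts; do
    if mount | grep -q "s3fs on $dir "; then
        echo "OK: $dir"
    else
        echo "MISSING: $dir"
    fi
done
```

Any `MISSING` line indicates a mount that must be restored before continuing.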

@@ -57,7 +58,6 @@
Ensure the `/etc/fstab` contains the following content:
```bash
ncn-m: # grep fuse.s3fs /etc/fstab
sds /var/opt/cray/sdu/collection-mount fuse.s3fs _netdev,allow_other,passwd_file=/root/.sds.s3fs,url=http://rgw-vip.nmn,use_path_request_style,use_cache=/var/lib/s3fs_cache,check_cache_dir_exist,use_xattr,uid=2370,gid=2370,umask=0007,allow_other 0 0
admin-tools /var/lib/admin-tools fuse.s3fs _netdev,allow_other,passwd_file=/root/.admin-tools.s3fs,url=http://rgw-vip.nmn,use_path_request_style,use_cache=/var/lib/s3fs_cache,check_cache_dir_exist,use_xattr 0 0
config-data /var/opt/cray/config-data fuse.s3fs _netdev,allow_other,passwd_file=/root/.config-data.s3fs,url=http://rgw-vip.nmn,use_path_request_style,use_xattr 0 0
```

@@ -84,3 +84,18 @@
```bash
ncn-mw: # mount -a
```

If the above command fails, the error likely indicates a problem communicating with Ceph's Rados Gateway (`Radosgw`) endpoint (`rgw-vip`).
In that case, follow the [Troubleshoot an Unresponsive S3 Endpoint](Troubleshoot_an_Unresponsive_S3_Endpoint.md) procedure to verify that the endpoint is healthy.
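Before following that procedure, a quick connectivity probe can confirm whether the endpoint is reachable at all. This is a sketch: the default URL matches the `url=` option used in the `/etc/fstab` entries above, and the `check_s3_endpoint` helper name is illustrative.

```shell
# Sketch: probe the Rados Gateway endpoint used by the s3fs mounts.
# The default URL matches the url= option in /etc/fstab; pass a different
# URL as the first argument if the system uses another endpoint.
check_s3_endpoint() {
    local url="${1:-http://rgw-vip.nmn}"
    if curl -s -o /dev/null --connect-timeout 5 --max-time 5 "$url"; then
        echo "reachable: $url"
    else
        echo "unreachable: $url"
    fi
}

check_s3_endpoint "$@"
```

An `unreachable` result points at the endpoint itself rather than the s3fs configuration.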

## Mount 'admin-tools' S3 bucket

In CSM 1.2 and CSM 1.3, `/var/lib/admin-tools` (`admin-tools` S3 bucket) was a mounted S3 bucket. Starting in CSM 1.4, the `admin-tools` S3 bucket is no longer mounted.
Mounting this bucket is not required for system operations.
However, to mount the `admin-tools` S3 bucket anyway, run the following script on each master node where the bucket should be mounted.

(`ncn-m#`) Mount the `admin-tools` S3 bucket.

```bash
/usr/share/doc/csm/scripts/mount-admin-tools-bucket.sh
```

> NOTE: This mount is not recreated after a node upgrade or rebuild; repeat this procedure on the node afterward.
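After the script completes (and again after any upgrade or rebuild), the mount can be re-checked with the same kind of command used earlier in this document. A minimal sketch, assuming the script's default mount directory:

```shell
# Sketch: confirm the admin-tools mount after running the script.
# /var/lib/admin-tools is the script's default mount directory.
if mount | grep -q 's3fs on /var/lib/admin-tools'; then
    echo "admin-tools is mounted"
else
    echo "admin-tools is not mounted"
fi
```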
106 changes: 106 additions & 0 deletions scripts/mount-admin-tools-bucket.sh
@@ -0,0 +1,106 @@
#!/bin/bash
# MIT License
#
# (C) Copyright [2024] Hewlett Packard Enterprise Development LP
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
set -euo pipefail
# Mount admin-tools S3 bucket
s3_bucket="${S3_BUCKET:-admin-tools}"
s3fs_mount_dir="${S3FS_MOUNT_DIR:-/var/lib/admin-tools}"
s3_user="${S3_USER:-admin-tools}"
s3fs_cache_dir="${S3FS_CACHE_DIR:-/var/lib/s3fs_cache}"
if [ -d "${s3fs_cache_dir}" ]; then
    s3fs_opts="use_path_request_style,use_cache=${s3fs_cache_dir},check_cache_dir_exist,use_xattr"
else
    s3fs_opts="use_path_request_style,use_xattr"
fi

echo "Configuring for ${s3_bucket} S3 bucket at ${s3fs_mount_dir} for ${s3_user} S3 user"

if [ ! -d "${s3fs_mount_dir}" ]; then
    mkdir -pv "${s3fs_mount_dir}"
fi

pwd_file="/root/.${s3_user}.s3fs"
secret_name="${s3_user}-s3-credentials"
s3_user_credentials="$(
    if ! kubectl get secret "$secret_name" -o json 2> /dev/null | jq -r '.data' 2> /dev/null; then
        echo >&2 "Failed to obtain credential data for user: [$s3_user]"
    fi
)"
if [ -z "$s3_user_credentials" ]; then
    echo "Exiting."
    exit 1
fi
access_key="$(jq -n -r --argjson s3_user_credentials "$s3_user_credentials" '$s3_user_credentials.access_key' | base64 -d)"
secret_key="$(jq -n -r --argjson s3_user_credentials "$s3_user_credentials" '$s3_user_credentials.secret_key' | base64 -d)"
s3_endpoint="$(jq -n -r --argjson s3_user_credentials "$s3_user_credentials" '$s3_user_credentials.http_s3_endpoint' | base64 -d)"
if [ "$access_key" = 'null' ] || [ "$secret_key" = 'null' ] || [ "$s3_endpoint" = 'null' ]; then
    echo >&2 "Failed to find access_key, secret_key, or http_s3_endpoint for [$s3_user]"
    exit 1
fi

echo "${access_key}:${secret_key}" > "${pwd_file}"
chmod 600 "${pwd_file}"

echo "Mounting bucket: ${s3_bucket} at ${s3fs_mount_dir}"
if ! s3fs "${s3_bucket}" "${s3fs_mount_dir}" -o "passwd_file=${pwd_file},url=${s3_endpoint},${s3fs_opts}"; then
    echo "Error: Check that ${s3_bucket} is not already mounted and ${s3fs_mount_dir} is empty."
    exit 1
fi

echo "Adding fstab entry for ${s3_bucket} S3 bucket at ${s3fs_mount_dir} for ${s3_user} S3 user"
if ! grep "${s3_bucket}" /etc/fstab | grep -q "fuse.s3fs"; then
    echo "${s3_bucket} ${s3fs_mount_dir} fuse.s3fs _netdev,allow_other,passwd_file=${pwd_file},url=${s3_endpoint},${s3fs_opts} 0 0" >> /etc/fstab
else
    echo "An entry in /etc/fstab already exists for ${s3_bucket} ${s3fs_mount_dir} fuse.s3fs"
fi

echo "Set cache pruning for ${s3_bucket} to 5G of the 200G volume (every 2nd hour)"
echo "0 */2 * * * root /usr/bin/prune-s3fs-cache.sh ${s3_bucket} ${s3fs_cache_dir} 5368709120 -silent" > /etc/cron.d/prune-s3fs-${s3_bucket}-cache

echo -e "Done mounting ${s3_bucket} S3 bucket\n"

# Validate S3 bucket has been mounted

echo "/etc/fstab has the following content:"
grep fuse.s3fs /etc/fstab
exit_code=0
if grep "$s3_bucket" /etc/fstab | grep -q "fuse.s3fs"; then
    echo -e "${s3_bucket} was successfully added to /etc/fstab\n"
else
    echo -e "Error: ${s3_bucket} fuse.s3fs mount was not added to the /etc/fstab file\n"
    exit_code=1
fi
echo "The following s3fs mounts exist:"
mount | grep 's3fs on'
if mount | grep -q 's3fs on '"$s3fs_mount_dir"; then
    echo -e "$s3fs_mount_dir is an s3fs mount\n"
else
    echo -e "Error: ${s3fs_mount_dir} is not an s3fs mount.\n"
    exit_code=1
fi

if [ "$exit_code" -eq 0 ]; then
    echo "Successfully mounted ${s3_bucket} bucket"
else
    echo "Errors encountered. Please review script output."
fi
exit $exit_code
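The two-stage `grep` that the script uses for its `/etc/fstab` idempotency check can be exercised in isolation. This is a sketch; the `has_s3fs_fstab_entry` helper name and the shortened example line are illustrative, not part of the script.

```shell
# Sketch of the fstab check performed by mount-admin-tools-bucket.sh:
# does the given fstab text already contain a fuse.s3fs entry for the bucket?
has_s3fs_fstab_entry() {
    local bucket="$1" fstab_text="$2"
    printf '%s\n' "$fstab_text" | grep "$bucket" | grep -q "fuse.s3fs"
}

# Example line in the format the script appends (options shortened):
line='admin-tools /var/lib/admin-tools fuse.s3fs _netdev,allow_other 0 0'
has_s3fs_fstab_entry admin-tools "$line" && echo "entry present"
# prints "entry present"
```

Because the check matches the bucket name and `fuse.s3fs` separately, re-running the script leaves an existing entry untouched instead of duplicating it.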
