This page describes how to deploy Alfresco Content Services (ACS) Enterprise or Community using Helm onto EKS.
Amazon Elastic Kubernetes Service (EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
The Enterprise and Community configurations each deploy the system shown in the corresponding architecture diagram.
Before you begin, make sure that:

- You've read the prerequisites section of the project's main README
- You've read the main Helm README page
- You're proficient in AWS and Kubernetes
Follow the AWS EKS Getting Started Guide to create a cluster and prepare your local machine to connect to it. Use the "Managed nodes - Linux" option and specify a `--node-type`. The most common choices are `m5.xlarge` and `t3.xlarge`.
As we'll be using Helm to deploy the ACS chart, follow the Using Helm with EKS instructions to set up Helm on your local machine.
Optionally, to help troubleshoot issues with your cluster, either follow the tutorial to deploy the Kubernetes Dashboard to your cluster or download and use the Lens application on your local machine.
Now that we have an EKS cluster up and running, there are a few one-time steps required to prepare the cluster before ACS can be installed.
1. Create a hosted zone in Route53 using these steps if you don't already have one available.
2. Create a public certificate for the hosted zone created in step 1 in Certificate Manager using these steps if you don't already have one available, and make a note of the certificate ARN for later use.
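   If you prefer the CLI to the console, the certificate ARN can also be retrieved and sanity-checked from a shell. A sketch, assuming a placeholder domain and ARN:

   ```shell
   # Hypothetical lookup of the certificate ARN (requires AWS credentials;
   # the domain name is a placeholder):
   #   aws acm list-certificates \
   #     --query "CertificateSummaryList[?DomainName=='acs.example.com'].CertificateArn" \
   #     --output text
   # An ACM ARN embeds the region and account id, which is a quick sanity
   # check that you grabbed the right certificate:
   ARN="arn:aws:acm:us-east-1:123456789012:certificate/11112222-3333-4444-5555-666677778888"
   region=$(echo "$ARN" | cut -d: -f4)
   account=$(echo "$ARN" | cut -d: -f5)
   echo "$region $account"
   ```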
3. Create a file called `external-dns.yaml` with the text below (replacing `YOUR-DOMAIN-NAME` with the domain name you created in step 1). This manifest defines a service account and a cluster role for managing DNS.

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: external-dns
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: external-dns
   rules:
   - apiGroups: [""]
     resources: ["services","endpoints","pods"]
     verbs: ["get","watch","list"]
   - apiGroups: ["extensions"]
     resources: ["ingresses"]
     verbs: ["get","watch","list"]
   - apiGroups: [""]
     resources: ["nodes"]
     verbs: ["list","watch"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: external-dns-viewer
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: external-dns
   subjects:
   - kind: ServiceAccount
     name: external-dns
     namespace: kube-system
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: external-dns
   spec:
     strategy:
       type: Recreate
     selector:
       matchLabels:
         app: external-dns
     template:
       metadata:
         labels:
           app: external-dns
       spec:
         serviceAccountName: external-dns
         containers:
         - name: external-dns
           image: registry.opensource.zalan.do/teapot/external-dns:latest
           args:
           - --source=service
           - --domain-filter=YOUR-DOMAIN-NAME
           - --provider=aws
           - --policy=sync
           - --aws-zone-type=public
           - --registry=txt
           - --txt-owner-id=acs-deployment
           - --log-level=debug
   ```
4. Use the kubectl command to deploy the external-dns service:

   ```shell
   kubectl apply -f external-dns.yaml -n kube-system
   ```
5. Find the name of the nodegroup that was created by running the following command (replacing `YOUR-CLUSTER-NAME` with the name you gave your cluster):

   ```shell
   eksctl get nodegroup --cluster=YOUR-CLUSTER-NAME
   ```
6. Find the name of the role used by the nodes by running the following command (replacing `YOUR-CLUSTER-NAME` with the name you gave your cluster, and `YOUR-NODE-GROUP` with the nodegroup from the previous step):

   ```shell
   aws eks describe-nodegroup --cluster-name YOUR-CLUSTER-NAME --nodegroup-name YOUR-NODE-GROUP --query "nodegroup.nodeRole" --output text
   ```
7. In the IAM console, find the role discovered in the previous step and attach the "AmazonRoute53FullAccess" managed policy to it.
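   This console step can also be scripted. The `describe-nodegroup` command returns the role ARN, while `aws iam attach-role-policy` expects the role *name*; a sketch, assuming a placeholder ARN:

   ```shell
   # Placeholder ARN of the kind returned by the describe-nodegroup command above
   NODE_ROLE_ARN="arn:aws:iam::123456789012:role/eksctl-my-cluster-NodeInstanceRole-1ABC2DEF"
   # The IAM CLI wants the role name, i.e. everything after the last "/"
   NODE_ROLE_NAME="${NODE_ROLE_ARN##*/}"
   echo "$NODE_ROLE_NAME"
   # With AWS credentials configured you could then attach the policy:
   #   aws iam attach-role-policy --role-name "$NODE_ROLE_NAME" \
   #     --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
   ```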
1. Create an Elastic File System in the VPC created by EKS using these steps, ensuring a mount target is created in each subnet. Make a note of the File System ID.
2. Find the ID of the VPC created when your cluster was built using the command below (replacing `YOUR-CLUSTER-NAME` with the name you gave your cluster):

   ```shell
   aws eks describe-cluster --name YOUR-CLUSTER-NAME --query "cluster.resourcesVpcConfig.vpcId" --output text
   ```
3. Find the CIDR range of the VPC using the command below (replacing `VPC-ID` with the ID retrieved in the previous step):

   ```shell
   aws ec2 describe-vpcs --vpc-ids VPC-ID --query "Vpcs[].CidrBlock" --output text
   ```
4. Go to the Security Groups section of the VPC Console and search for the VPC using the ID retrieved in step 2.
5. Click on the default security group for the VPC and add an inbound rule allowing NFS traffic from the VPC CIDR range.
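   The security group steps above can alternatively be done with the AWS CLI; a sketch with placeholder IDs (NFS uses TCP port 2049):

   ```shell
   NFS_PORT=2049                 # the well-known NFS port the rule must allow
   VPC_CIDR="192.168.0.0/16"     # placeholder: the CIDR retrieved in step 3
   SG_ID="sg-0123456789abcdef0"  # placeholder: the VPC's default security group
   echo "allow tcp/${NFS_PORT} from ${VPC_CIDR} on ${SG_ID}"
   # With AWS credentials configured:
   #   aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
   #     --protocol tcp --port "$NFS_PORT" --cidr "$VPC_CIDR"
   ```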
6. Deploy the AWS EFS CSI storage driver using the following commands, replacing `fs-SOMEUUID` with the File System ID retrieved in step 1, e.g. `fs-72f5e4f1` (this step replaces the deployment of the now obsolete nfs-client-provisioner):

   ```shell
   cat > aws-efs-values.yml <<EOT
   storageClasses:
   - mountOptions:
     - tls
     name: nfs-client
     parameters:
       directoryPerms: "700"
       uid: 33000
       gid: 1000
       fileSystemId: fs-SOMEUUID
       provisioningMode: efs-ap
     reclaimPolicy: Retain
     volumeBindingMode: Immediate
   EOT
   helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver
   helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver -f aws-efs-values.yml
   ```
Note: the storage class's `reclaimPolicy` is set to `Retain` for safety reasons. However, that means the Kubernetes administrator needs to take care of volume cleanup.
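To confirm that dynamic provisioning works before installing ACS, you can create a throwaway claim against the new `nfs-client` storage class; a minimal sketch (the claim name is arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteMany          # EFS access points support multi-writer mounts
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi         # EFS is elastic, so the requested size is nominal
```

Apply it with `kubectl apply -f`, check that `kubectl get pvc efs-test-claim` reports `Bound`, then delete the claim and, because of the `Retain` policy, the released `PersistentVolume` as well.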
Now that the EKS cluster is set up, we can deploy ACS.
Namespaces in Kubernetes isolate workloads from each other. Create a namespace to host ACS inside the cluster using the following command (we'll use the `alfresco` namespace throughout the rest of the tutorial):

```shell
kubectl create namespace alfresco
```
1. Create a file called `ingress-rbac.yaml` with the text below:

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: acs:psp
     namespace: alfresco
   rules:
   - apiGroups:
     - policy
     resourceNames:
     - kube-system
     resources:
     - podsecuritypolicies
     verbs:
     - use
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: acs:psp:default
     namespace: alfresco
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: acs:psp
   subjects:
   - kind: ServiceAccount
     name: default
     namespace: alfresco
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: acs:psp:acs-ingress
     namespace: alfresco
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: acs:psp
   subjects:
   - kind: ServiceAccount
     name: acs-ingress
     namespace: alfresco
   ```
2. Use the kubectl command to create the roles and role bindings required by the ingress service:

   ```shell
   kubectl apply -f ingress-rbac.yaml -n alfresco
   ```
3. Deploy the ingress using the following commands (replacing `ACM_CERTIFICATE_ARN` and `YOUR-DOMAIN-NAME` with the ARN of the certificate and the hosted zone created earlier in the DNS section):

   ```shell
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   helm repo update
   helm install acs-ingress ingress-nginx/ingress-nginx --version=4.0.18 \
     --set controller.scope.enabled=true \
     --set controller.scope.namespace=alfresco \
     --set rbac.create=true \
     --set controller.config."proxy-body-size"="100m" \
     --set controller.service.targetPorts.https=80 \
     --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol"="http" \
     --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports"="https" \
     --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert"="ACM_CERTIFICATE_ARN" \
     --set controller.service.annotations."external-dns\.alpha\.kubernetes\.io/hostname"="acs.YOUR-DOMAIN-NAME" \
     --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-negotiation-policy"="ELBSecurityPolicy-TLS-1-2-2017-01" \
     --set controller.publishService.enabled=true \
     --set controller.admissionWebhooks.enabled=false \
     --atomic \
     --namespace alfresco
   ```
NOTE: The command will wait until the deployment is ready so please be patient.
This repository allows you to either deploy a system using released stable artefacts or the latest in-progress development artefacts.
To use a released version of the Helm chart, add the stable repository using the following commands:

```shell
helm repo add alfresco https://kubernetes-charts.alfresco.com/stable
helm repo update
```
Alternatively, to use the latest in-progress development version of the Helm chart, add the incubator repository using the following commands:

```shell
helm repo add alfresco https://kubernetes-charts.alfresco.com/incubator
helm repo update
```
Now decide whether you want to install the latest version of ACS (Enterprise or Community) or a previous version and follow the steps in the relevant section below.
See the registry authentication page to configure credentials to access the Alfresco Enterprise registry.
Deploy the latest version of ACS by running the following command (replacing `YOUR-DOMAIN-NAME` with the hosted zone you created earlier):

```shell
helm install acs alfresco/alfresco-content-services \
  --set externalPort="443" \
  --set externalProtocol="https" \
  --set externalHost="acs.YOUR-DOMAIN-NAME" \
  --set repository.persistence.enabled=true \
  --set repository.persistence.storageClass="nfs-client" \
  --set filestore.persistence.enabled=true \
  --set filestore.persistence.storageClass="nfs-client" \
  --set global.alfrescoRegistryPullSecrets=quay-registry-secret \
  --set global.tracking.sharedsecret=$(openssl rand -hex 24) \
  --atomic \
  --timeout 10m0s \
  --namespace=alfresco
```
NOTE: The command will wait until the deployment is ready so please be patient.
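The `global.tracking.sharedsecret` value above is simply a random string shared between the repository and the search subsystem; the inline `openssl` call generates one. You can generate and inspect such a secret separately if you want to reuse it across upgrades:

```shell
# 24 random bytes, hex-encoded, giving a 48-character secret
SECRET=$(openssl rand -hex 24)
echo "${#SECRET}"
# Reuse it on later helm upgrade runs instead of generating a new one:
#   --set global.tracking.sharedsecret="$SECRET"
```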
1. Download the Community values file from here.
2. Deploy ACS Community by running the following command (replacing `YOUR-DOMAIN-NAME` with the hosted zone you created earlier):

   ```shell
   helm install acs alfresco/alfresco-content-services \
     --values=community_values.yaml \
     --set externalPort="443" \
     --set externalProtocol="https" \
     --set externalHost="acs.YOUR-DOMAIN-NAME" \
     --set repository.persistence.enabled=true \
     --set repository.persistence.storageClass="nfs-client" \
     --atomic \
     --timeout 10m0s \
     --namespace=alfresco
   ```
NOTE: The command will wait until the deployment is ready so please be patient.
1. Download the version-specific values file you require from this folder.
2. Deploy the specific version of ACS by running the following command (replacing `YOUR-DOMAIN-NAME` with the hosted zone you created earlier, and `MAJOR` & `MINOR` with the appropriate values):

   ```shell
   helm install acs alfresco/alfresco-content-services \
     --values=MAJOR.MINOR.N_values.yaml \
     --set externalPort="443" \
     --set externalProtocol="https" \
     --set externalHost="acs.YOUR-DOMAIN-NAME" \
     --set repository.persistence.enabled=true \
     --set repository.persistence.storageClass="nfs-client" \
     --set filestore.persistence.enabled=true \
     --set filestore.persistence.storageClass="nfs-client" \
     --set global.alfrescoRegistryPullSecrets=quay-registry-secret \
     --set global.tracking.sharedsecret=$(openssl rand -hex 24) \
     --atomic \
     --timeout 10m0s \
     --namespace=alfresco
   ```
NOTE: The command will wait until the deployment is ready so please be patient.
When the deployment has completed, the following URLs will be available (replacing `YOUR-DOMAIN-NAME` with the hosted zone you created earlier):

- Repository: https://acs.YOUR-DOMAIN-NAME/alfresco
- Share: https://acs.YOUR-DOMAIN-NAME/share
- API Explorer: https://acs.YOUR-DOMAIN-NAME/api-explorer
If you deployed Enterprise, you'll also have access to:

- ADW: https://acs.YOUR-DOMAIN-NAME/workspace/
- Sync Service: https://acs.YOUR-DOMAIN-NAME/syncservice/healthcheck
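Once DNS has propagated, you can probe each endpoint from a shell; a small sketch (the domain is a placeholder, and the `curl` line is commented out so nothing is contacted by default):

```shell
DOMAIN="example.com"   # placeholder for YOUR-DOMAIN-NAME
for path in alfresco share api-explorer; do
  url="https://acs.${DOMAIN}/${path}"
  echo "$url"
  # Uncomment to check that the endpoint returns an HTTP success code:
  #   curl -sSf -o /dev/null -w '%{http_code}\n' "$url"
done
```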
If you requested an extended trial license, navigate to the Admin Console and apply your license:

- https://acs.YOUR-DOMAIN-NAME/alfresco/service/enterprise/admin/admin-license
- The default username and password is `admin`
- See Uploading a new license for more details
By default, this tutorial installs an out-of-the-box setup; however, there are many configuration options described here. There are also several examples covering various use cases.
This deployment is also not fully secured by default. To learn about and apply further restrictions, including pod security and network policies, refer to the EKS Best Practices for Security.
1. Remove the `acs` and `acs-ingress` deployments by running the following command:

   ```shell
   helm uninstall -n alfresco acs acs-ingress
   ```
2. Delete the Kubernetes namespace using the command below:

   ```shell
   kubectl delete namespace alfresco
   ```
3. Go to the EFS Console, select the file system created earlier and press the "Delete" button to remove the mount targets and file system.
4. Go to the IAM console and remove the "AmazonRoute53FullAccess" managed policy we added to the NodeInstanceRole in the File System section, otherwise the cluster will fail to delete in the next step.
5. Finally, delete the EKS cluster using the command below (replacing `YOUR-CLUSTER-NAME` with the name you gave your cluster):

   ```shell
   eksctl delete cluster --name YOUR-CLUSTER-NAME
   ```