Cassandra Helm Kubernetes Demo

This demo is designed to show VxFlex OS integration with Kubernetes to support a single-instance stateful application, in this case the Cassandra database.

Requirements

This demo assumes the existence of a Kubernetes cluster with the following requirements:

  • A Kubernetes cluster with three or more worker nodes (this is required to see the data move between hosts)
  • A VxFlex OS based default storage class (this can be either the "in-tree" or CSI-based driver; see the tip below for marking a class as the default)
  • The Helm Kubernetes package manager
  • A user with credentials to deploy a Helm release and access the cluster with kubectl

Tip: With Helm deployed, you can easily install VxFlex CSI integration with the VxFlex OS CSI chart
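
Tip: If your VxFlex OS storage class exists but is not marked as the default, you can mark it as such. This sketch assumes the class is named vxflex, as in the examples below:

$ kubectl patch storageclass vxflex -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'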

Instructions

1. Download csstress_script.sh for easy access to the several kubectl commands used later in the demo.

$ wget https://raw.githubusercontent.com/VxFlex-OS/kubernetes-demo-scripts/master/cassandra/csstress_script.sh
$ chmod +x csstress_script.sh

2. Check that the storage class exists and is the default

$ kubectl get storageclass
NAME               PROVISIONER   AGE
vxflex (default)   csi-scaleio   9h

This is an example. Your own storage class name and provisioner may differ.

3. Add Helm repository

This repository needs to be added before running helm install incubator/cassandra.

$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
$ helm repo update
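
You can verify that the repository was registered before moving on:

$ helm repo list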

4. Install Cassandra using Helm:

The "release name" below (muddled-molly) is autogenerated by helm. Yours will differ.

$ helm install incubator/cassandra
NAME:   muddled-molly
LAST DEPLOYED: Tue May 22 09:50:13 2018
NAMESPACE: default
STATUS: DEPLOYED


RESOURCES:
==> v1/Pod(related)
NAME                       READY STATUS   RESTARTS  AGE
muddled-molly-cassandra-0  0/1    Pending  0         0s

==> v1/Service
NAME                     TYPE       CLUSTER-IP EXTERNAL-IP  PORT(S)                                       AGE
muddled-molly-cassandra ClusterIP  None        <none>       7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP  0s

==> v1beta1/StatefulSet
NAME                     DESIRED CURRENT  AGE
muddled-molly-cassandra  3        1        0s


NOTES:
Cassandra CQL can be accessed via port 9042 on the following DNS name from within your cluster:
Cassandra Thrift can be accessed via port 9160 on the following DNS name from within your cluster:

If you want to connect to the remote instance with your local Cassandra CQL CLI, forward the API port to localhost:9042 by running the following:
- kubectl port-forward --namespace default $(kubectl get pods --namespace default -l app=cassandra,release=muddled-molly -o jsonpath='{ .items[0].metadata.name }') 9042:9042

If you want to connect to Cassandra CQL, run the following:
- kubectl port-forward --namespace default $(kubectl get pods --namespace default -l "app=cassandra,release=muddled-molly" -o jsonpath="{.items[0].metadata.name}") 9042:9042
  echo cqlsh 127.0.0.1 9042

You can also see the cluster status by running the following:
- kubectl exec -it --namespace default $(kubectl get pods --namespace default -l app=cassandra,release=muddled-molly -o jsonpath='{.items[0].metadata.name}') nodetool status

To tail the logs for the Cassandra pod run the following:
- kubectl logs -f --namespace default $(kubectl get pods --namespace default -l app=cassandra,release=muddled-molly -o jsonpath='{ .items[0].metadata.name }')
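
If you would rather choose the release name yourself, or override chart defaults, Helm accepts both at install time. The value names below (config.cluster_size, persistence.size) come from the incubator/cassandra chart and may differ between chart versions, so treat this as a sketch:

$ helm install incubator/cassandra --name my-cassandra --set config.cluster_size=3 --set persistence.size=16Gi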

5. Ensure that the pods are running

Tip: This typically happens within a minute. If it's taking longer, troubleshoot the storage by looking at either the kube-controller-manager (for the in-tree driver) or the csi-controller (for CSI).

$ kubectl get all -l release="muddled-molly"
NAME                            READY     STATUS    RESTARTS   AGE
pod/muddled-molly-cassandra-0   1/1       Running   0          16m
pod/muddled-molly-cassandra-1   1/1       Running   0          13m
pod/muddled-molly-cassandra-2   1/1       Running   0          11m

NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                        AGE
service/muddled-molly-cassandra   ClusterIP   None         <none>        7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP   16m

NAME                                       DESIRED   CURRENT   AGE
statefulset.apps/muddled-molly-cassandra   3         3         16m
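
Rather than polling, you can watch the pods come up as the StatefulSet creates them one at a time (press Ctrl+C to stop):

$ kubectl get pods -w -l release=muddled-molly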

6. Verify the persistent volumes are created

$ kubectl get pvc
NAME                             STATUS    VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-muddled-molly-cassandra-0   Bound     vxvol-97082578cb   16Gi       RWO            vxflex         11m
data-muddled-molly-cassandra-1   Bound     vxvol-ea81dab6cb   16Gi       RWO            vxflex         9m
data-muddled-molly-cassandra-2   Bound     vxvol-3160ea70cb   16Gi       RWO            vxflex         7m
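
Each claim is backed by a dynamically provisioned PersistentVolume on VxFlex OS. To confirm which provisioner created a volume, describe it (substitute a volume name from your own output above):

$ kubectl describe pv vxvol-97082578cb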

7. Initialize the Cassandra database using the script

$ ./csstress_script.sh muddled-molly init
/usr/bin/kubectl run --namespace default muddled-molly-cassandra-stress-init --restart=Never --rm --tty -i --image cassandra --command -- cassandra-stress write duration=60s -rate threads=100 -node muddled-molly-cassandra
If you don't see a command prompt, try pressing enter.
******************** Stress Settings ********************
Command:
  Type: write
  Count: -1
  Duration: 60 SECONDS
...
total,       1188893,   18730,   18730,   18730,     5.3,     3.4,    10.4,    64.3,   127.2,   139.7,   59.0,  0.03889,      0,      0,       0,       0,       0,       0
total,       1205945,   17052,   17052,   17052,     5.8,     3.4,    27.5,    49.2,    64.3,    67.5,   60.0,  0.03847,      0,      0,       0,       0,       0,       0
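
To confirm that the stress data actually landed on the VxFlex OS volume, check the usage of the data mount inside one of the pods. The /var/lib/cassandra path is the default data directory of the cassandra image; adjust it if your chart overrides it:

$ kubectl exec -it muddled-molly-cassandra-0 -- df -h /var/lib/cassandra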

8. Benchmark the storage using cassandra-stress:

./csstress_script.sh muddled-molly bench
/usr/bin/kubectl run --namespace default muddled-molly-cassandra-stress-bench --restart=Never --rm --tty -i --image cassandra --command -- cassandra-stress read duration=60s -rate threads=100 -node muddled-molly-cassandra
If you don't see a command prompt, try pressing enter.
******************** Stress Settings ********************
Command:
  Type: read
  Count: -1
  Duration: 60 SECONDS
  No Warmup: false
  Consistency Level: LOCAL_ONE
  Target Uncertainty: not applicable
  Key Size (bytes): 10
  Counter Increment Distibution: add=fixed(1)
Rate:
  Auto: false
  Thread Count: 100
  OpsPer Sec: 0
Population:
  Distribution: Gaussian:  min=1,max=1000000,mean=500000.500000,stdev=166666.500000
  Order: ARBITRARY
  Wrap: false
...

9. You can open a shell to the database, if you'd like to poke around.

$ ./csstress_script.sh muddled-molly shell
/usr/bin/kubectl run --namespace default muddled-molly-cassandra-stress-shell --restart=Never --rm --tty -i --image cassandra --command -- cqlsh muddled-molly-cassandra
If you don't see a command prompt, try pressing enter.
cqlsh> select * from system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True |                             {'class': 'org.apache.cassandra.locator.LocalStrategy'}

10. In another shell, watch the state of the containers, so that we can see what happens when we kill one.

$ kubectl get pods -o wide | grep 'muddled-molly'
muddled-molly-cassandra-0                1/1       Running   0          38m       10.244.3.22     k8s-slave3   <none>
muddled-molly-cassandra-1                1/1       Running   0          35m       10.244.1.32     k8s-slave1   <none>
muddled-molly-cassandra-2                1/1       Running   0          33m       10.244.2.40     k8s-slave2   <none>
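
Adding -w keeps the listing open and streams state changes, which is how the transitions in step 12 were captured:

$ kubectl get pods -o wide -w | grep 'muddled-molly'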

11. Now, execute a couple of commands to kill the pod, and ensure that it moves to another host.

$ ./csstress_script.sh muddled-molly kill-and-move
/usr/bin/kubectl taint node k8s-slave3 key=value:NoSchedule && /usr/bin/kubectl delete pod muddled-molly-cassandra-0
node/k8s-slave3 tainted
pod "muddled-molly-cassandra-0" deleted

12. In our other shell, we'll see the container get terminated and recreated on another host.

$ kubectl get pods -o wide | grep 'muddled-molly'
muddled-molly-cassandra-0                1/1       Running   0          1h        10.244.3.22     k8s-slave3   <none>
muddled-molly-cassandra-0                0/1       Terminating   0          1h        10.244.3.22     k8s-slave3   <none>
muddled-molly-cassandra-1                1/1       Running       0          1h        10.244.1.32     k8s-slave1   <none>
muddled-molly-cassandra-2                1/1       Running       0          58m       10.244.2.40     k8s-slave2   <none>
muddled-molly-cassandra-0                0/1       ContainerCreating   0          7s        <none>          k8s-slave4   <none>
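
Because the PersistentVolumeClaim outlives the pod, the same VxFlex OS volume is simply re-attached on the new host. You can confirm that the claim is still bound to the original volume:

$ kubectl get pvc data-muddled-molly-cassandra-0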

13. Feel free to validate that your data is still available.

$ ./csstress_script.sh muddled-molly shell
/usr/bin/kubectl run --namespace default muddled-molly-cassandra-stress-shell --restart=Never --rm --tty -i --image cassandra --command -- cqlsh muddled-molly-cassandra
If you don't see a command prompt, try pressing enter.
cqlsh> select * from system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True |                             {'class': 'org.apache.cassandra.locator.LocalStrategy'}
          keyspace1 |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
 system_distributed |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
             system |           True |                             {'class': 'org.apache.cassandra.locator.LocalStrategy'}
      system_traces |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}

(6 rows)
cqlsh>

14. Stateful containers with persistent storage, FTW.
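
When you are finished, you can tear the demo down. Helm does not delete the StatefulSet's PVCs for you, so remove them explicitly if you also want the VxFlex OS volumes reclaimed:

$ helm delete --purge muddled-molly
$ kubectl delete pvc -l release=muddled-molly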

Troubleshooting

1. Getting an error with helm install incubator/cassandra

Check that the repo "https://kubernetes-charts-incubator.storage.googleapis.com/" exists in Helm (helm repo list), and try updating the Helm repos.

$ helm repo update

2. Cassandra pods stuck in the "Pending" state

When Cassandra pods are stuck in the Pending state for a long time, one possible reason is that the MDM certificates have not yet been approved in the VxFlex OS Gateway (the CSI driver connects to the Gateway to create each PersistentVolumeClaim). You can also identify this by checking the status of the PVCs.

$ kubectl get pvc
NAME                             STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-muddled-molly-cassandra-0    Pending                                                  vxflexos       12m

If there is a certificate issue, the PVC events will contain messages such as "unsecure connection not allowed" and "Error getting storage pool Error Authenticating".

$ kubectl describe pvc data-muddled-molly-cassandra-0
Name:          data-muddled-molly-cassandra-0
Namespace:     default
StorageClass:  vxflex
Status:        Pending
Volume:
Labels:        app=cassandra
               release=muddled-molly
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"46a429d9-9f59-11e8-a2e4-62df12916020","leaseDurationSeconds":15,"acquireTime":"2018-12-06T19:41:34Z","renewTime":"2018-12-06T19:58:39Z","lea...
               volume.beta.kubernetes.io/storage-provisioner=csi-scaleio
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason                Age                 From                                                                                                                                    Message
  ----     ------                ----                ----                                                                                                                                    -------
  Warning  ProvisioningFailed    19m                 csi-scaleio vxflexos-csi-controller-0 46a429d9-9f59-11e8-a2e4-62df12916020  Failed to provision volume with StorageClass "vxflexos": rpc error: code = Internal desc = error when creating volume: Error getting storage pool Error Authenticating: Commands sent on an unsecure connection are not allowed.
  Warning  ProvisioningFailed    7m (x12 over 19m)   csi-scaleio vxflexos-csi-controller-0 46a429d9-9f59-11e8-a2e4-62df12916020  Failed to provision volume with StorageClass "vxflexos": rpc error: code = Internal desc = error when creating volume: Error getting storage pool Error Authenticating: A timeout occurred waiting for a volume to be created, either by external provisioner "csi-scaleio" or manually created by system administrator
  Normal   Provisioning          2m (x15 over 19m)   csi-scaleio vxflexos-csi-controller-0 46a429d9-9f59-11e8-a2e4-62df12916020  External provisioner is provisioning volume for claim "default/data-muddled-molly-cassandra-0"
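
The same authentication errors usually show up in the CSI controller logs, which are a faster place to watch while the certificate is being approved. The pod name below is taken from the events above; adjust the name and namespace to match your deployment (and add -c <container> if the pod runs more than one container):

$ kubectl logs vxflexos-csi-controller-0 | grep -i error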

After the MDM certificate has been approved on the Gateway server, delete the Cassandra Helm release and install it again:

$ helm delete muddled-molly
$ helm install incubator/cassandra