An Akka cluster running in Docker swarm mode. The swarm can use ARM devices, such as a Raspberry Pi, to deploy the app. This project uses Java.
Cluster : Group of nodes.
Node: An actor system running in a JVM. It is possible to have multiple actor systems in one JVM.
Frontend: Node/System creator of tasks.
Backend: Node/System worker of tasks coming from Frontend.
Swarm: A swarm is a group of machines that are running Docker and joined into a cluster.
Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).
For more definitions and how to set up the environment, follow the official documentation at:
- Have a Docker setup running in swarm mode. This means that all nodes must have Docker installed. Please refer to: Getting started with swarm mode
- One or more Linux machines (will function as manager of the swarm)
- Alternatively, ARM devices, such as a Raspberry Pi model 3B+ (arm32v7l)
IMPORTANT: All commands must be executed on the manager node of the swarm
#### Setup docker swarm
In general, follow these steps: Create a swarm. However, it could be as simple as:
- Go to the manager-to-be node, which is the Linux machine, and execute:
docker swarm init
This will create the swarm with the node as manager. It will also display the command to use to join a worker node in the form of:
docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.99.100:2377
Now go to the worker nodes, in this case the Raspberry Pi, and use that command to join the swarm.
Back on the manager node, clone the project and go to the project's location.
Deploy the stack to the swarm:
docker stack deploy --compose-file docker-compose.yml akkaclusterswarm
(The "akkaclusterswarm" is just the name of the stack and can be changed)
This will start three services:
- seed1 (on the ARM device, which is a worker node; see the image name and constraints in the docker-compose file)
- seed2 (on manager node)
- frontend (on manager node)
IMPORTANT: The deployed service "seed1" uses a different base image:
seed1:
image: marcelodock/akkaclusterarm32v7
This is a special image based on "arm32v7/gradle", which is made to work on a Raspberry Pi model 3B+, which has an arm32v7l architecture.
The other services, namely seed2 and frontend, use the base image:
image: marcelodock/akkacluster
This uses the standard gradle image with JDK 8. Please refer to:
Both of these images are located on the public Docker Hub, so they can be downloaded directly from any node.
Of course, this configuration can be changed as required by modifying the docker-compose file.
The hostnames and ports of the nodes are set as environment variables in the docker-compose.yml file. The application.conf reads these variables using the HOCON syntax.
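As an illustration, the relevant parts of the compose file might look like the sketch below. Only the image names and the BIND_HOST/BIND_PORT variables come from this README; the ports, variable values, and placement constraints are assumptions.

```yaml
version: "3"
services:
  seed1:
    image: marcelodock/akkaclusterarm32v7   # ARM image for the Raspberry Pi
    environment:
      BIND_HOST: "0.0.0.0"    # read by application.conf via HOCON substitution
      BIND_PORT: "2551"       # assumed seed port; must match the seed-node list
    deploy:
      placement:
        constraints:
          - node.role == worker    # pin seed1 to the ARM worker node
  seed2:
    image: marcelodock/akkacluster          # standard gradle image with jdk8
    environment:
      BIND_HOST: "0.0.0.0"
      BIND_PORT: "2552"
    deploy:
      placement:
        constraints:
          - node.role == manager   # seed2 runs on the manager node
```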
On the manager node, to see the deployed stack:
docker stack ls
This should show all the stacks; look for the one named "akkaclusterswarm", or whatever the name of the stack is.
To see the services deployed by the stack:
docker stack ps akkaclusterswarm
This should list the tasks for all 3 services defined in the docker-compose.yml file, along with their state.
### See logs/details of the state of a service
Once the deploy command is used, only a single line per service indicating that it is deployed is shown, which is not very informative.
To see the state of each deployed service, use the previous command to list the services of the stack, look for the ID column, and then:
docker inspect serviceid
This will show full information about the service. Look at the property "Status", which shows useful info in case there is a problem starting the service, such as:
"No such image" or "No suitable nodes"
### See the logs of each container
To see the logs of each container do:
docker ps -a
This will list the containers. Look for the ones that correspond to the services; the last column holds the name of the container, which should be something like "akkaclusterswarm_frontend".
Then run the following to see the logs of the container in real time:
docker logs --follow containerid
### Use docker swarm visualizer
There is a very nice visualizer for the containers/services deployed on a docker swarm. Look at the repo of the project:
The project starts with the class "main.Main" at the root. There are two other classes as well:
- main.BackendMain
- main.FrontendMain
Each one initializes the respective actor system.
main.Main: creates two instances of the backend, with specified ports. The ports have to match the seed nodes specified in the "application.conf" file, otherwise no cluster is created.
The constant CLUSTER_SYSTEM_NAME has to be the same across all the systems created in the cluster. This allows all the created actor systems to belong to the same cluster; otherwise, they will form separate clusters.
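As a rough sketch (not the project's exact code), creating a backend actor system on a given port with classic Akka might look like this; the system name "ClusterSystem" and the ports are assumptions:

```java
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class Main {
    // Must match the system name used in the seed-node addresses in application.conf
    static final String CLUSTER_SYSTEM_NAME = "ClusterSystem";

    public static void main(String[] args) {
        // The ports must match the seed nodes listed in application.conf
        startBackend(2551);
        startBackend(2552);
    }

    static void startBackend(int port) {
        // Override the remoting port; fall back to application.conf for the rest
        Config config = ConfigFactory
                .parseString("akka.remote.netty.tcp.port=" + port)
                .withFallback(ConfigFactory.load());
        ActorSystem.create(CLUSTER_SYSTEM_NAME, config);
    }
}
```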
main.BackendMain: Node/System in charge of receiving tasks from the frontend and producing a result. It listens for "MemberUp" events, so when a new "frontend" member joins, it tells it that it is available to process tasks (a sketch of this registration pattern follows the class descriptions below).
main.FrontendMain: Node/System in charge of assigning tasks to the backends. It keeps a list of backends that can process work.
Backend: Actor class for the backend, which listens for and carries out tasks from the frontend.
Frontend: Actor class that registers workers (backends) and forwards tasks to an available backend.
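A hedged sketch of the registration pattern described above, modeled on Akka's classic cluster samples rather than this project's exact code (the role and actor names are assumptions):

```java
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.Member;

public class Backend extends AbstractActor {
    private final Cluster cluster = Cluster.get(getContext().getSystem());

    @Override
    public void preStart() {
        // Subscribe to MemberUp so the backend can register with frontends as they join
        cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(), MemberUp.class);
    }

    @Override
    public void postStop() {
        cluster.unsubscribe(getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(MemberUp.class, up -> register(up.member()))
                .build();
    }

    private void register(Member member) {
        // If the new member is a frontend, tell it this backend is available
        if (member.hasRole("frontend")) {
            getContext()
                    .actorSelection(member.address() + "/user/frontend")
                    .tell("BackendRegistration", getSelf());
        }
    }
}
```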
AppMessages: All the messages for the application. In general there are 3 types of messages:
- JobMessage, which carries the job to be done.
- ResultMessage, message that carries the result of the computation
- FailedMessage, message that carries information about the failing computation
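A hedged sketch of what these three message types might look like; the field names are assumptions, and in a real cluster the messages must be serializable:

```java
import java.io.Serializable;

public final class AppMessages {
    // Carries the job to be done
    public static final class JobMessage implements Serializable {
        public final String job;
        public JobMessage(String job) { this.job = job; }
    }

    // Carries the result of the computation
    public static final class ResultMessage implements Serializable {
        public final String result;
        public ResultMessage(String result) { this.result = result; }
    }

    // Carries information about the failing computation
    public static final class FailedMessage implements Serializable {
        public final String reason;
        public final String job;
        public FailedMessage(String reason, String job) {
            this.reason = reason;
            this.job = job;
        }
    }
}
```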
Basic configuration file. The provider is set to cluster.
There are 2 seed nodes: one is deployed on an ARM device and the other on a Linux machine.
The hostnames and ports of the nodes are set as environment variables in the docker-compose.yml file. These variables are read by the application.conf file using the HOCON syntax.
IMPORTANT: When using docker, it is necessary to define the bindings for the hostname, otherwise it won't work (see the configuration under netty.tcp):
bind-hostname = "0.0.0.0"
bind-hostname = ${?BIND_HOST} # internal (bind) hostname
bind-port = ${?BIND_PORT} # internal (bind) port
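Putting the pieces together, the relevant application.conf sections might look roughly like this; apart from BIND_HOST/BIND_PORT and the clustering.* paths mentioned in this README, the variable names, system name, and ports are assumptions:

```hocon
clustering {
  seed1.ip = "seed1"            # default to the service name
  seed1.ip = ${?SEED1_IP}       # overridable from docker-compose.yml
  seed2.ip = "seed2"
  seed2.ip = ${?SEED2_IP}
}

akka {
  actor.provider = "cluster"
  remote.netty.tcp {
    hostname = ${?CLUSTER_IP}      # public hostname of this node
    port = 2551                    # default; overridable
    port = ${?CLUSTER_PORT}
    bind-hostname = "0.0.0.0"
    bind-hostname = ${?BIND_HOST}  # internal (bind) hostname
    bind-port = ${?BIND_PORT}      # internal (bind) port
  }
  cluster.seed-nodes = [
    "akka.tcp://ClusterSystem@"${clustering.seed1.ip}":2551",
    "akka.tcp://ClusterSystem@"${clustering.seed2.ip}":2552"
  ]
}
```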
Cluster management is set to use the HTTP API, which can be found at: Cluster Http Management
To use it for queries and actions, it is better to use Postman: Postman
Now open Postman, or any browser, on any of the cluster nodes and go to:
http://nodeip:8558/cluster/members/
or using curl:
curl http://nodeip:8558/cluster/members/
Where nodeip is the IP address of the node. The response is in JSON format and shows the members of the cluster. For more API endpoints, see the link provided above.
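The response has roughly the following shape (values here are illustrative; exact fields depend on the Akka Management version):

```json
{
  "selfNode": "akka.tcp://ClusterSystem@seed2:2552",
  "leader": "akka.tcp://ClusterSystem@seed1:2551",
  "unreachable": [],
  "members": [
    { "node": "akka.tcp://ClusterSystem@seed1:2551", "status": "Up", "roles": [] },
    { "node": "akka.tcp://ClusterSystem@seed2:2552", "status": "Up", "roles": [] }
  ]
}
```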
The service is available at port 8558 on any node of the cluster thanks to Docker's routing mesh, so it is accessible from every node even though the service only runs on seed2. See the official documentation: Publish ports
The address and port bindings are set in the "application.conf" file under the "management" section. Note that the hostname is set to the "clustering.seed2.ip" value, which is the IP of the node running the cluster management, as stated in the file "BackendMain.java". IMPORTANT: When using docker it is necessary to define the bindings, otherwise it won't work:
bind-hostname = 0.0.0.0 # internal (bind) hostname
bind-port = 8558 # internal (bind) port
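For reference, a hedged sketch of how the management section might be laid out, using Akka Management's akka.management.http settings (the project's file may differ):

```hocon
akka.management.http {
  hostname = ${clustering.seed2.ip}  # public hostname: the node serving the HTTP API
  port = 8558
  bind-hostname = "0.0.0.0"          # internal (bind) hostname
  bind-port = 8558                   # internal (bind) port
}
```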