Setting up a canoed backend
These are notes on how to set up a Canoe backend server, based on Ubuntu 16.04 LTS.
This setup is for Canoe version 1.0.0, which relies on Nginx as a proxy in front of VerneMQ so that all traffic goes over port 443.
NOTE: If you want to support Canoe 0.3.7, you also need VerneMQ to serve WSS publicly on port 1884. And if you want to serve Ben's wallet (it uses regular TCP right now) you also need the SSL listener added publicly on port 1885.
According to cryptocode, the rai_node seems to consume a lot of file descriptors on some OSes. His advice is to perform the following sysctl changes to decrease the timeout for lingering TIME_WAIT connections and enable recycling of sockets in that state.
Edit /etc/sysctl.conf and add the following, then apply with sudo /sbin/sysctl -p:
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
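One caveat: tcp_tw_recycle is known to break connections from clients behind NAT and was removed entirely in Linux 4.12, so consider leaving it at 0 if that concerns you. To read back the applied values:
sudo /sbin/sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_tw_recycle net.ipv4.tcp_tw_reuse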
We also want higher limits for open files, so we edit /etc/security/limits.conf and add:
# Raise for both the canoed and vernemq users. If another user is running the rai_node, add a line for it as well.
canoed - nofile 65536
vernemq - nofile 65536
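Note that limits.conf only applies to new login sessions via PAM; for the systemd services below it is the LimitNOFILE= directive in each unit that actually raises the limit. Once the users exist (the vernemq user is created by the package install, the canoed user by us further down), you can verify a user's limit like this:
sudo -u vernemq bash -c 'ulimit -n'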
First we install a bunch of standard packages: Nginx and Redis (certbot for Let's Encrypt comes a bit further down).
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install nginx redis-server
Then add a default site config for nginx:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name test.getcanoe.io;
return 301 https://$host$request_uri;
}
server {
# SSL configuration
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
root /var/www/html;
index index.html;
server_name test.getcanoe.io;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
# VerneMQ proxy
location /mqtt {
proxy_pass http://localhost:1884;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location ~ /\. {
deny all;
}
# Canoed RPC
location /rpc {
proxy_pass_header Authorization;
proxy_pass http://localhost:8180/rpc;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header 'Access-Control-Allow-Origin' '*';
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;
proxy_ssl_session_reuse off;
}
}
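You can syntax-check the config and reload Nginx with the commands below; note that nginx -t will complain about a missing certificate until certbot has run in the next step:
sudo nginx -t
sudo systemctl reload nginx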
Change the server_name value to your domain.
Then do the dance with certbot to get a Let's Encrypt certificate installed in Nginx, typically:
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx
sudo certbot --nginx
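Let's Encrypt certificates expire after 90 days. The certbot package installs an automatic renewal job; you can verify that renewal will work with a dry run:
sudo certbot renew --dry-run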
Next, install VerneMQ 1.3.1:
wget https://bintray.com/artifact/download/erlio/vernemq/deb/xenial/vernemq_1.3.1-1_amd64.deb
sudo dpkg -i vernemq_1.3.1-1_amd64.deb
Add this section at the end of /etc/vernemq/vernemq.conf:
listener.tcp.default = 127.0.0.1:1883
listener.ws.default = 127.0.0.1:1884
plugins.vmq_passwd = off
plugins.vmq_acl = off
plugins.vmq_diversity = on
vmq_diversity.auth_postgres.enabled = on
vmq_diversity.postgres.host = 127.0.0.1
vmq_diversity.postgres.port = 5432
vmq_diversity.postgres.user = canoe
vmq_diversity.postgres.password = secretpassword1
vmq_diversity.postgres.database = canoe
As you can see, we use MQTT over WebSocket (which Nginx proxies) as well as regular TCP for local use from canoed, and we authenticate via the Postgres diversity plugin rather than vmq_passwd.
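Once an MQTT user exists (canoed --initialize creates the canoed user further down), you can sanity-check the local TCP listener with the mosquitto clients (sudo apt-get install mosquitto-clients); a quick sketch, using the credentials from canoed.conf below:
mosquitto_sub -h 127.0.0.1 -p 1883 -u canoed -P secretpassword2 -t '#' -C 1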
Now let's run VerneMQ via systemd, so create /etc/systemd/system/vernemq.service:
[Unit]
Description=VerneMQ
After=network.target
[Service]
WorkingDirectory=/usr/lib/vernemq
ExecStart=/usr/sbin/vernemq start
ExecStop=/usr/sbin/vernemq stop
User=vernemq
Type=forking
Restart=always
RestartSec=45
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Now enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable vernemq.service
sudo systemctl start vernemq.service
Check status with sudo systemctl status vernemq. If you have problems enabling the service, you may need to run sudo update-rc.d -f vernemq remove and then sudo systemctl disable vernemq.service && sudo systemctl enable vernemq.service to get things in order.
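You can also ask VerneMQ itself whether the listeners came up; this should list the TCP listener on 127.0.0.1:1883 and the WebSocket listener on 127.0.0.1:1884:
sudo vmq-admin listener show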
Install Node.js 8.x via NodeSource:
curl -sL https://deb.nodesource.com/setup_8.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh
sudo apt-get install nodejs
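Verify the install (node -v should print a v8.x version):
node -v
npm -v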
Install PostgreSQL 9.6 from the official PostgreSQL apt repository:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt xenial-pgdg main" >> /etc/apt/sources.list'
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.6 postgresql-contrib-9.6
Then you need to do some manual steps: set a password for the "postgres" db user, create and configure the "canoe" db user and database, and enable pgcrypto:
sudo -u postgres createuser canoe
sudo -u postgres createdb canoe
sudo -u postgres psql
\password postgres
alter user canoe with encrypted password 'secretpassword1';
grant all privileges on database canoe to canoe;
\list
\connect canoe
CREATE EXTENSION pgcrypto;
\q
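As a quick check that password authentication works for the canoe role, try connecting over TCP (this uses the password set above):
PGPASSWORD=secretpassword1 psql -h localhost -U canoe -d canoe -c 'SELECT 1;'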
Make a user canoed and change to that user. We run both the canoed daemon and the rai_node under this user.
There are many guides on how to set up a node (for example this). The node has to do a callback to canoed. Change the following lines in /home/canoed/RaiBlocks/config.json:
"callback_address": "[::1]",
"callback_port": "8180",
"callback_target": "/callback",
We like to run the node via systemd, so add /etc/systemd/system/rai_node.service:
[Unit]
Description=Nano node service
After=network.target
[Service]
User=canoed
WorkingDirectory=/home/canoed
LimitNOFILE=65536
ExecStart=/home/canoed/rai_node --daemon
Restart=on-failure
[Install]
WantedBy=multi-user.target
The above presumes you run it under the canoed user and that the binary is at /home/canoed/rai_node.
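Once the node is enabled and started via systemctl, you can verify that its RPC interface answers (RPC must be enabled in the node's config.json); block_count is a standard action:
curl -s -d '{"action":"block_count"}' '[::1]:7076'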
Change to the canoed user and go to /home/canoed.
Get the canoed daemon from git: git clone git@github.com:getcanoe/canoed.git
Add the conf file /home/canoed/canoed/canoed.conf:
{
"logging": {
"level": "debug"
},
"debug": true,
"server": {
"port": 8180,
"host": ""
},
"rainode": {
"host": "[::1]",
"port": 7076
},
"postgres": {
"user": "canoe",
"host": "localhost",
"database": "canoe",
"password": "secretpassword1",
"port": 5432
},
"redis": {
"host": "localhost",
"port": 6379
},
"mqtt": {
"url": "tcp://localhost",
"options": {
"clientId": "canoed",
"username": "canoed",
"password": "secretpassword2"
},
"subacl": "[{\"pattern\": \"#\"}]",
"pubacl": "[{\"pattern\": \"#\"}]",
"block": {
"opts": {
"qos": 2,
"retain": false
}
}
}
}
Run npm install to get the needed dependencies. Now run canoed --initialize to create the proper tables in Postgres and add the MQTT user for canoed.
Make a systemd service for it, /etc/systemd/system/canoed.service:
[Unit]
Description=Canoed
Documentation=https://github.com/getcanoe/canoed
After=network.target nginx.service rai_node.service
[Service]
User=canoed
WorkingDirectory=/home/canoed/canoed
ExecStart=/home/canoed/canoed/canoed
LimitNOFILE=65536
KillMode=mixed
KillSignal=SIGTERM
Restart=always
RestartSec=2s
NoNewPrivileges=yes
[Install]
WantedBy=multi-user.target
Do the dance to get it running:
sudo systemctl daemon-reload
sudo systemctl enable canoed.service
sudo systemctl start canoed.service
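Verify that canoed came up and is listening on its RPC port:
sudo systemctl status canoed
sudo ss -tlnp | grep 8180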
To get info about currency conversion rates you need to set up the rateservice.
Change to the canoed user and go to /home/canoed.
Get the rateservice from git: git clone https://github.com/getcanoe/rateservice.git
Add the conf file /home/canoed/rateservice/rateservice.conf:
{
"logging": {
"level": "info"
},
"debug": false,
"mqtt": {
"url": "wss://canoe.nifni.net/mqtt",
"options": {
"clientId": "rateservice",
"username": "rateservice",
"password": "secretpassword"
},
"rates": {
"topic": "rates",
"opts": {
"qos": 2,
"retain": true
}
}
}
}
Run npm install to get the needed dependencies. Now run /home/canoed/canoed/addmqttuser rateservice secretpassword rateservice to create an MQTT user for the rateservice. Change secretpassword to a password of your choice, both in the config and in this command.
Make a systemd service for it, /etc/systemd/system/rateservice.service:
[Unit]
Description=Rateservice
Documentation=https://github.com/getcanoe/rateservice
After=network.target
[Service]
User=canoed
WorkingDirectory=/home/canoed/rateservice
ExecStart=/home/canoed/rateservice/rateservice.js
KillMode=mixed
KillSignal=SIGTERM
Restart=always
RestartSec=2s
NoNewPrivileges=yes
[Install]
WantedBy=multi-user.target
Do the dance to get it running:
sudo systemctl daemon-reload
sudo systemctl enable rateservice.service
sudo systemctl start rateservice.service
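To confirm that rates are being published, you can subscribe once to the retained rates topic on the local listener (assumes mosquitto-clients is installed; uses the rateservice credentials created above):
mosquitto_sub -h 127.0.0.1 -p 1883 -u rateservice -P secretpassword -t rates -C 1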
Monit is used for monitoring and restarting the services if they fail for some reason.
Install monit:
sudo apt-get install monit
Modify /etc/monit/conf-available/nginx:
check process nginx with pidfile /run/nginx.pid
group www
group nginx
start program = "/bin/systemctl start nginx"
stop program = "/bin/systemctl stop nginx"
if failed port 443 protocol https request "/" then restart
if 5 restarts within 5 cycles then timeout
depend nginx_bin
depend nginx_rc
check file nginx_bin with path /usr/sbin/nginx
group nginx
include /etc/monit/templates/rootbin
check file nginx_rc with path /etc/init.d/nginx
group nginx
include /etc/monit/templates/rootbin
Create /etc/monit/conf-available/postgresql:
check process postgres with pidfile /var/run/postgresql/9.6-main.pid
group database
start program = "/bin/systemctl start postgresql"
stop program = "/bin/systemctl stop postgresql"
if failed unixsocket /var/run/postgresql/.s.PGSQL.5432 protocol pgsql
then restart
if failed host localhost port 5432 protocol pgsql then restart
Create /etc/monit/conf-available/rai_node:
check program rai_rpc with path "/home/canoed/check-rai.sh"
start program = "/home/canoed/start-rai.sh" with timeout 90 seconds
stop program = "/bin/systemctl stop rai_node"
if 3 restarts within 3 cycles then timeout
if status > 0 then restart
Then enable them:
sudo ln -s /etc/monit/conf-available/postgresql /etc/monit/conf-enabled/
sudo ln -s /etc/monit/conf-available/nginx /etc/monit/conf-enabled/
sudo ln -s /etc/monit/conf-available/rai_node /etc/monit/conf-enabled/
Create /home/canoed/check-rai.sh:
#!/bin/sh
# Ask the node for its version over RPC; the exit status is non-zero if the
# node is unreachable, which makes monit's "if status > 0" rule fire.
curl --connect-timeout 2 --max-time 5 -s -g -d '{"action":"version"}' '[::1]:7076'
Create /home/canoed/start-rai.sh:
#!/bin/sh
/bin/systemctl start rai_node
JSON='{"action":"version"}'
# Then wait until it responds on RPC
while true
do
curl --fail --connect-timeout 2 --max-time 5 -s -g -d "$JSON" '[::1]:7076'
if [ $? -ne 0 ]
then
# curl didn't return 0 - node not responding yet, wait and retry
printf '.'
sleep 5
else
break # terminate loop
fi
done
printf 'node running\n'
Make them executable:
chmod +x /home/canoed/check-rai.sh /home/canoed/start-rai.sh
Adjust /etc/monit/monitrc:
## Start Monit in the background (run as a daemon):
set daemon 10 # check services at 10-second intervals
## Set syslog logging. If you want to log to a standalone log file instead,
## specify the full path to the log file
#
set logfile /var/log/monit.log
## Set the location of the Monit id file which stores the unique id for the
## Monit instance. The id is generated and stored on first Monit start. By
## default the file is placed in $HOME/.monit.id.
#
# set idfile /var/.monit.id
set idfile /var/lib/monit/id
## Set the location of the Monit state file which saves monitoring states
## on each cycle. By default the file is placed in $HOME/.monit.state. If
## the state file is stored on a persistent filesystem, Monit will recover
## the monitoring state across reboots. If it is on temporary filesystem, the
## state will be lost on reboot which may be convenient in some situations.
#
set statefile /var/lib/monit/state
## Set global SSL options (just most common options showed, see manual for
## full list).
set ssl {
verify : enable, # verify SSL certificates (disabled by default but STRONGLY RECOMMENDED)
selfsigned : reject # reject self-signed SSL certificates (the default)
}
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. If the first mail server fails, Monit
# will use the second mail server in the list and so on. By default Monit uses
# port 25 - it is possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver mail.getcanoe.io, localhost # change according to your setup
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the queue size
## by using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/lib/monit/events # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## You can set alert recipients who will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
set alert alert@getcanoe.io # receive all alerts, change according to your email
## Monit has an embedded HTTP interface which can be used to view status of
## services monitored and manage services from a web interface. The HTTP
## interface is also required if you want to issue Monit commands from the
## command line, such as 'monit status' or 'monit restart service' The reason
## for this is that the Monit client uses the HTTP interface to send these
## commands to a running Monit daemon. See the Monit Wiki if you want to
## enable SSL for the HTTP interface.
#
set httpd port 2812 and
use address localhost # only accept connection from localhost
allow localhost # allow localhost to connect to the server and
allow admin:monit # require user 'admin' with password 'monit'
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
check system $HOST
if loadavg (1min) > 4 then alert
if loadavg (5min) > 2 then alert
if cpu usage > 95% for 10 cycles then alert
if memory usage > 75% then alert
if swap usage > 25% then alert
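Before reloading, check the control file syntax:
sudo monit -t
Then reload: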
sudo systemctl reload monit
Check with sudo monit status. The output of sudo monit summary should look something like:
The Monit daemon 5.16 uptime: 23m
System 'test.getcanoe.io' Running
Program 'rai_rpc' Status ok
Process 'postgres' Running
Process 'nginx' Running
File 'nginx_bin' Accessible
File 'nginx_rc' Accessible
Logs of canoed and the rateservice can be checked using journald. To follow the canoed log run:
sudo journalctl -f -u canoed.service
To search for errors run:
sudo journalctl -u canoed.service | grep error
If you want to check the rateservice log, just replace canoed with rateservice.
The following command shows the number of accounts that were used with your Canoe backend:
redis-cli --scan | wc -l
Sometimes it might be necessary to reset the sessions stored in Redis. You can do that with this command:
redis-cli flushall
Keep in mind that currently running Canoe clients need to be restarted after this.