ELK stack deployment using Ansible
- Install
- Generate CA (if SSL) (#6)
- Configure Elasticsearch
  - Copy CA into Elasticsearch hosts (#7)
  - Create the Elasticsearch instances file and certificates (with `elasticsearch-certutil`)
  - Ship certificates to all nodes
  - Give permissions to certificates (`chown root:elasticsearch`)
  - Add the certificate passwords to the internal Elasticsearch keystore:

    ```
    /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
    /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
    /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
    ```

  - Configure `elasticsearch.yml` for all hosts (#8)
  - Set the JVM options
  - Tune the underlying VM
    - Disable swap:

      ```
      swapoff -a
      ```

    - Raise the open-file limit:

      ```
      cat << EOF >> /etc/security/limits.conf
      elasticsearch - nofile 65535
      EOF
      ulimit -n 65535
      ```

  - Allow the Elasticsearch service to lock memory:

    ```
    mkdir -p /etc/systemd/system/elasticsearch.service.d
    cat << EOF > /etc/systemd/system/elasticsearch.service.d/override.conf
    [Service]
    LimitMEMLOCK=infinity
    EOF
    ```

  - Reload the daemon:

    ```
    systemctl daemon-reload
    ```

  - If the data node uses an external disk:

    ```
    chown -R elasticsearch:elasticsearch /data/elasticsearch
    ```
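Before that `chown`, an externally attached data disk still needs a filesystem and a mount point. A hedged sketch, assuming the disk appears as `/dev/sdb` (verify with `lsblk` first):

```
# Assumption: the attached pd-ssd appears as /dev/sdb -- check with lsblk first
mkfs.xfs /dev/sdb
mkdir -p /data/elasticsearch
mount /dev/sdb /data/elasticsearch
# Persist the mount across reboots; nofail keeps boot working if the disk is absent
echo '/dev/sdb /data/elasticsearch xfs defaults,nofail 0 2' >> /etc/fstab
```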
  - Start Elasticsearch:

    ```
    systemctl start elasticsearch
    # Check nodes
    curl -k https://10.115.8.10:9200/_cat/nodes  # Should return a security exception
    ```

  - Set up passwords:

    ```
    /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
    curl -k -u elastic:'67@22KR!mX25H' https://10.115.8.10:9200/_cat/nodes  # Should NOT return a security exception
    ```
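For reference, a minimal sketch of the per-node `elasticsearch.yml` this setup implies; node names, addresses, and certificate paths below are assumptions, not the actual values:

```yaml
# Hypothetical node/network values -- adjust per host
cluster.name: elk-cluster
node.name: elastic01
network.host: 10.115.8.10
discovery.seed_hosts: ["10.115.8.10", "10.115.8.11", "10.115.8.12"]
cluster.initial_master_nodes: ["elastic01", "elastic02", "elastic03"]
bootstrap.memory_lock: true

# Security settings matching the keystore passwords added above
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic01.p12
xpack.security.transport.ssl.truststore.path: certs/elastic01.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic01.p12
xpack.security.http.ssl.truststore.path: certs/elastic01.p12
```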
- Configure Logstash
  - Copy CA into Logstash hosts (#9)
  - From master 1: configure `instances_logstash.yml` and generate its certificates
  - Ship the Logstash certificates to the Logstash VM
  - Give permissions to certificates (`chown root:logstash`)
  - Configure `logstash.yml` for all hosts (#10)
  - Install the Java JDK and set up the Logstash service:

    ```
    sudo yum install java-11-openjdk-devel
    sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
    systemctl start logstash
    systemctl status logstash
    ```
- Configure Kibana
  - Install and configure NGINX (optional) (#13)
    - Install nginx (`yum install -y nginx`)
    - Allow nginx to make proxy connections (`setsebool httpd_can_network_connect on -P`)
    - Configure nginx (`/etc/nginx/nginx.conf`)
    - Start and enable nginx
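A minimal sketch of the reverse-proxy server block that could go into `/etc/nginx/nginx.conf`; the hostname is a placeholder and Kibana is assumed to listen on its default port 5601:

```nginx
server {
    listen 80;
    server_name kibana.example.com;  # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:5601;  # Kibana default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```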
  - Configure the Logstash user for Kibana (#14)
    - Create the `logstash_writer` role from Kibana:

      ```
      POST _xpack/security/role/logstash_writer
      {
        "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
        "indices": [
          {
            "names": ["logs-*"],
            "privileges": ["write", "create", "delete", "create_index", "manage", "manage_ilm"]
          }
        ]
      }
      ```
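The role still needs a user mapped to it; a sketch using the same `_xpack` security API (the password is a placeholder, to be stored in the Logstash keystore below):

```
POST _xpack/security/user/logstash_writer
{
  "password": "<LOGSTASH_WRITER_PASSWORD>",
  "roles": ["logstash_writer"],
  "full_name": "Internal Logstash User"
}
```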
    - Configure the Logstash password for that user on the Logstash VM:

      ```
      cat << EOF > /etc/sysconfig/logstash
      LOGSTASH_KEYSTORE_PASS=<LOGSTASH_KEYSTORE_PASS>
      EOF
      export LOGSTASH_KEYSTORE_PASS=<LOGSTASH_KEYSTORE_PASS>
      chmod 600 /etc/sysconfig/logstash
      set -o history
      sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
      /usr/share/logstash/bin/logstash-keystore add LOGSTASH_WRITER_PASSWORD --path.settings /etc/logstash
      ```
- With Kafka, you then add the Kafka certificates to Logstash to secure that communication. (Open question: should the same be done with Filebeat?)
- Create the Logstash keystore and truststore (#15):

  ```
  keytool -genkey -keystore logstash.keystore.jks -alias logstash -dname CN=logstash -keyalg RSA -validity 3650 -storepass '<LOGSTASH_KEY_PASS>' -keypass '<LOGSTASH_KEY_PASS>' -storetype pkcs12
  # Generate a certificate signing request for the user from the keystore.
  keytool -certreq -keystore logstash.keystore.jks -alias logstash -file logstash.csr -storepass '<LOGSTASH_KEY_PASS>' -keypass '<LOGSTASH_KEY_PASS>'
  # Sign the certificate signing request with the root CA.
  openssl x509 -req -CA ca-cert -CAkey ca-key -in logstash.csr -out logstash.crt -days 3650 -CAcreateserial -passin pass:'<CA_PWD>'
  # Import the certificate of the CA into the user keystore.
  keytool -importcert -file ca-cert -alias ca -keystore logstash.keystore.jks -storepass '<LOGSTASH_KEY_PASS>' -keypass '<LOGSTASH_KEY_PASS>' -noprompt
  # Import the signed certificate into the user keystore. Make sure to use the same -alias as earlier.
  keytool -importcert -file logstash.crt -alias logstash -keystore logstash.keystore.jks -storepass '<LOGSTASH_KEY_PASS>' -keypass '<LOGSTASH_KEY_PASS>'
  ```
- Add the Kafka keystore password to the Logstash keystore (`/usr/share/logstash/bin/logstash-keystore add LOGSTASH_KAFKA_KEYSTORE_PASSWORD --path.settings /etc/logstash`)
- Add the pipelines (#16)
- Create
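A hedged sketch of one such pipeline, assuming a Kafka input secured with the keystore built above and the `logstash_writer` credentials stored in the Logstash keystore; broker, topic, and index pattern are placeholders:

```
input {
  kafka {
    bootstrap_servers => "kafka01:9093"          # placeholder broker
    topics => ["logs"]                           # placeholder topic
    security_protocol => "SSL"
    ssl_keystore_location => "/etc/logstash/logstash.keystore.jks"
    ssl_keystore_password => "${LOGSTASH_KAFKA_KEYSTORE_PASSWORD}"
    ssl_truststore_location => "/etc/logstash/logstash.truststore.jks"
    ssl_truststore_password => "${LOGSTASH_KAFKA_KEYSTORE_PASSWORD}"
  }
}

output {
  elasticsearch {
    hosts => ["https://10.115.8.10:9200"]
    user => "logstash_writer"
    password => "${LOGSTASH_WRITER_PASSWORD}"
    index => "logs-%{+YYYY.MM.dd}"
    cacert => "/etc/logstash/ca-cert"
  }
}
```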
- Configure Kibana

Resources
# Elastic01
gcloud compute addresses create elastic01-internal --region europe-north1 --subnet zone-1 --addresses 10.100.0.10
gcloud compute addresses create elastic01-external --project=learning-288910 --network-tier=PREMIUM --region=europe-north1
export EXT_ADDRESS=$(gcloud compute addresses describe elastic01-external --region europe-north1 --format json | jq '[.address][0]' --raw-output)
gcloud beta compute --project=learning-288910 instances create elastic01 \
--zone=europe-north1-a \
--machine-type=e2-medium \
--subnet=zone-1 \
--private-network-ip=10.100.0.10 \
--address=$EXT_ADDRESS \
--maintenance-policy=MIGRATE \
--service-account=237354390854-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
--image=centos-7-v20200910 \
--image-project=centos-cloud \
--boot-disk-size=20GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=elastic01 \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=type=elastic,data=false \
--reservation-affinity=any
# Elastic02
gcloud compute addresses create elastic02-internal --region europe-north1 --subnet zone-2 --addresses 10.101.0.10
gcloud compute addresses create elastic02-external --project=learning-288910 --network-tier=PREMIUM --region=europe-north1
export EXT_ADDRESS=$(gcloud compute addresses describe elastic02-external --region europe-north1 --format json | jq '[.address][0]' --raw-output)
gcloud beta compute --project=learning-288910 instances create elastic02 \
--zone=europe-north1-b \
--machine-type=e2-medium \
--subnet=zone-2 \
--private-network-ip=10.101.0.10 \
--address=$EXT_ADDRESS \
--maintenance-policy=MIGRATE \
--service-account=237354390854-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
--image=centos-7-v20200910 \
--image-project=centos-cloud \
--boot-disk-size=20GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=elastic02 \
--create-disk=mode=rw,size=50,type=projects/learning-288910/zones/europe-north1-b/diskTypes/pd-ssd,name=elastic02-ssd,device-name=elastic02-ssd \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=type=elastic,data=true \
--reservation-affinity=any
# Elastic03
gcloud compute addresses create elastic03-internal --region europe-north1 --subnet zone-3 --addresses 10.102.0.10
gcloud compute addresses create elastic03-external --project=learning-288910 --network-tier=PREMIUM --region=europe-north1
export EXT_ADDRESS=$(gcloud compute addresses describe elastic03-external --region europe-north1 --format json | jq '[.address][0]' --raw-output)
gcloud beta compute --project=learning-288910 instances create elastic03 \
--zone=europe-north1-c \
--machine-type=e2-medium \
--subnet=zone-3 \
--private-network-ip=10.102.0.10 \
--address=$EXT_ADDRESS \
--maintenance-policy=MIGRATE \
--service-account=237354390854-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
--image=centos-7-v20200910 \
--image-project=centos-cloud \
--boot-disk-size=20GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=elastic03 \
--create-disk=mode=rw,size=50,type=projects/learning-288910/zones/europe-north1-c/diskTypes/pd-ssd,name=elastic03-ssd,device-name=elastic03-ssd \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=type=elastic,data=true \
--reservation-affinity=any
# Logstash01
gcloud compute addresses create logstash01-internal --region europe-north1 --subnet zone-2 --addresses 10.101.0.20
gcloud compute addresses create logstash01-external --project=learning-288910 --network-tier=PREMIUM --region=europe-north1
export EXT_ADDRESS=$(gcloud compute addresses describe logstash01-external --region europe-north1 --format json | jq '[.address][0]' --raw-output)
gcloud beta compute --project=learning-288910 instances create logstash01 \
--zone=europe-north1-b \
--machine-type=e2-medium \
--subnet=zone-2 \
--private-network-ip=10.101.0.20 \
--address=$EXT_ADDRESS \
--maintenance-policy=MIGRATE \
--service-account=237354390854-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
--image=centos-7-v20200910 \
--image-project=centos-cloud \
--boot-disk-size=20GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=logstash01 \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=type=logstash \
--reservation-affinity=any
# Kibana01
gcloud compute addresses create kibana01-internal --region europe-north1 --subnet zone-2 --addresses 10.101.0.30
gcloud compute addresses create kibana01-external --project=learning-288910 --network-tier=PREMIUM --region=europe-north1
export EXT_ADDRESS=$(gcloud compute addresses describe kibana01-external --region europe-north1 --format json | jq '[.address][0]' --raw-output)
gcloud beta compute --project=learning-288910 instances create kibana01 \
--zone=europe-north1-b \
--machine-type=e2-medium \
--subnet=zone-2 \
--private-network-ip=10.101.0.30 \
--address=$EXT_ADDRESS \
--maintenance-policy=MIGRATE \
--service-account=237354390854-compute@developer.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
--image=centos-7-v20200910 \
--image-project=centos-cloud \
--boot-disk-size=20GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=kibana01 \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=type=kibana \
--tags=http-server,https-server \
--reservation-affinity=any
# Firewall rules
gcloud compute --project=learning-288910 firewall-rules create default-allow-http --direction=INGRESS --priority=1000 --network=elastic-cluster --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
gcloud compute --project=learning-288910 firewall-rules create default-allow-https --direction=INGRESS --priority=1000 --network=elastic-cluster --action=ALLOW --rules=tcp:443 --source-ranges=0.0.0.0/0 --target-tags=https-server
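The address lookups above pipe `gcloud ... --format json` into `jq`. The extraction can be sanity-checked offline against a mocked payload (the address value here is made up):

```shell
# Sample of the JSON that `gcloud compute addresses describe --format json`
# emits (only the field consumed here).
payload='{"address": "35.228.10.20", "name": "elastic01-external"}'

# Same extraction as used above...
EXT_ADDRESS=$(echo "$payload" | jq '[.address][0]' --raw-output)

# ...equivalent to the more idiomatic form:
EXT_ADDRESS=$(echo "$payload" | jq -r '.address')
echo "$EXT_ADDRESS"
```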
# Elastic01
gcloud compute instances delete elastic01 --zone=europe-north1-a --quiet
gcloud compute addresses delete elastic01-internal --region=europe-north1 --quiet
gcloud compute addresses delete elastic01-external --region=europe-north1 --quiet
# Elastic02
gcloud compute instances delete elastic02 --zone=europe-north1-b --quiet
gcloud compute addresses delete elastic02-internal --region=europe-north1 --quiet
gcloud compute addresses delete elastic02-external --region=europe-north1 --quiet
# Elastic03
gcloud compute instances delete elastic03 --zone=europe-north1-c --quiet
gcloud compute addresses delete elastic03-internal --region=europe-north1 --quiet
gcloud compute addresses delete elastic03-external --region=europe-north1 --quiet
# Logstash01
gcloud compute instances delete logstash01 --zone=europe-north1-b --quiet
gcloud compute addresses delete logstash01-internal --region=europe-north1 --quiet
gcloud compute addresses delete logstash01-external --region=europe-north1 --quiet
# Kibana01
gcloud compute instances delete kibana01 --zone=europe-north1-b --quiet
gcloud compute addresses delete kibana01-internal --region=europe-north1 --quiet
gcloud compute addresses delete kibana01-external --region=europe-north1 --quiet