I will show how to set up an Elasticsearch cluster. This repo is a first aid kit. 🚑
New to Elasticsearch? Let's jump to Elasticsearch 101.
CentOS is used as the Linux distribution.
The example was built on Google Cloud.
You can also follow along on your own machine if you allocate 2 GB of RAM to each node.
In the Google Cloud console, go to Compute Engine -> VM instances.
Requirements before creating the VM instances:
>> The instances we create must be in the same region and zone.
>> The instances use the E2 series, machine type e2-medium (2 vCPU, 4 GB memory).
>> Change the boot disk to CentOS 7 with a size of 20 GB.
>> Check Allow HTTP traffic and Allow HTTPS traffic in the Firewall settings.
The first instance is named "elastic-master" and will be the master node; then create 2 data node instances, for 3 VM instances in total.
>> The data nodes are named "elastic-data-1" and "elastic-data-2".
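If you prefer the command line, a rough gcloud equivalent for one of the instances looks like this (the zone is only an example; adjust it and repeat for each node; the http-server/https-server tags correspond to the Allow HTTP/HTTPS traffic checkboxes):
gcloud compute instances create elastic-master \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=centos-7 \
  --image-project=centos-cloud \
  --boot-disk-size=20GB \
  --tags=http-server,https-server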
The VM instances are ready!
On the Elasticsearch download page, choose Linux x86_64.
>> Copy the link for 'Download Linux x86_64'.
Now connect to the VM instances over SSH. You can switch to the root user; that is how I preferred to work.
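For example, to switch to root:
sudo su -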
yum install -y wget
yum install -y nano
wget <elastic_search_download_link>
rpm -ivh <elasticsearch_file.rpm>
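For example, with Elasticsearch 7.17.0 (the version is only an illustration; use the link you copied from the download page):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.0-x86_64.rpm
rpm -ivh elasticsearch-7.17.0-x86_64.rpm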
Repeat the same steps on the other instances (elastic-data-1 and elastic-data-2).
cd /etc/elasticsearch
ls (You can see elasticsearch.yml)
nano elasticsearch.yml
Start with the master VM instance. Everything in the file is commented out; delete the "#" and configure the following lines:
cluster.name: my-application
node.name: master
node.data: false
node.ingest: false
---- Network ----
network.host: 0.0.0.0
http.port: 9200
---- Discovery ----
Make sure the VM instances are in the same region so they can reach each other over their internal IPs.
discovery.seed_hosts: ["elastic-master's internal IP", "elastic-data-1's internal IP", "elastic-data-2's internal IP"]
cluster.initial_master_nodes: ["elastic-master's internal IP"]
Apply the same configuration on the other instances (elastic-data-1 and elastic-data-2), changing node.name for each one and keeping node.data and node.ingest at true so they act as data nodes. A sketch follows below.
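A minimal sketch of elastic-data-1's elasticsearch.yml, assuming the names used in this guide (the bracketed values are placeholders for your internal IPs; the cluster name must match on every node):
cluster.name: my-application
node.name: data-1
node.data: true
node.ingest: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["<elastic-master internal IP>", "<elastic-data-1 internal IP>", "<elastic-data-2 internal IP>"]
cluster.initial_master_nodes: ["<elastic-master internal IP>"]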
Now let's generate a certificate authority (CA) for the cluster:
cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil ca
Press Enter to accept the default filename and define a password of your choice (don't forget it, we will need it later).
You should now see "elastic-stack-ca.p12" in /usr/share/elasticsearch/.
Let's go back to the elasticsearch.yml file.
cd /etc/elasticsearch/
nano elasticsearch.yml
At the bottom of the file, add:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/elastic-stack-ca.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/elastic-stack-ca.p12
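If you set a password on elastic-stack-ca.p12 in the previous step, Elasticsearch also needs that password in its own keystore; a minimal sketch (run on each node once the file is in place):
cd /usr/share/elasticsearch/
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password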
We created the certificate (elastic-stack-ca.p12); the other VM instances (elastic-data-1 & elastic-data-2) must have it too. If they do not, the cluster will not come up.
How do we move the elastic-stack-ca.p12 file to the other instances?
- Go to Cloud Storage and create a bucket.
- Open elastic-master's terminal
gcloud auth login  # authorize your GCP account
gsutil cp /usr/share/elasticsearch/elastic-stack-ca.p12 gs://your_bucket_name
- Open the terminals of elastic-data-1 & elastic-data-2
gsutil cp gs://your_bucket_name/elastic-stack-ca.p12 /usr/share/elasticsearch/
Run these two commands on all of the VM instances, in /usr/share/elasticsearch/ where the file lives:
chmod 660 elastic-stack-ca.p12
chown root:elasticsearch elastic-stack-ca.p12
Next, generate the certificates for the HTTP layer:
cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil http
Do not generate a CSR, because we already have a CA file.
CA Path: /usr/share/elasticsearch/elastic-stack-ca.p12
Generate a certificate per node?: No
Now the tool asks for hostnames. If you do not know a node's hostname and internal DNS name, run these in that VM instance's terminal:
hostname
hostname -f
Enter the hostnames, for example:
localhost
elastic-master
elastic-master's full internal DNS name (the output of hostname -f)
(and so on for elastic-data-1 and elastic-data-2)
Then you must enter the IP addresses (localhost IP, internal IPs, and external IPs):
127.0.0.1
(and so on: the internal and external IPs of your instances)
Now you have "elasticsearch-ssl-http.zip"; its location is:
/usr/share/elasticsearch/elasticsearch-ssl-http.zip
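unzip may not be present on a minimal CentOS image; if the next command fails, install it first:
yum install -y unzip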
unzip elasticsearch-ssl-http.zip
Inside you will find elasticsearch and kibana directories; the elasticsearch directory contains http.p12. Copy it to the other VM instances the same way we moved "elastic-stack-ca.p12". Then we need to add a few more security parameters to the elasticsearch.yml file.
cd /etc/elasticsearch/
nano elasticsearch.yml
Add these parameters:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/elasticsearch/http.p12  # location of http.p12
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/elasticsearch/http.p12  # location of http.p12
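As with the transport certificate, if you protected http.p12 with a password during the certutil http step, store it in the Elasticsearch keystore on each node (skip this if you left the password blank):
cd /usr/share/elasticsearch/
./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password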
service elasticsearch start
Once the service starts, we can follow its progress in the log file:
cd /var/log/elasticsearch/
tail -100f my-application.log (the log file is named after cluster.name)
Finally, start the service on the data nodes (elastic-data-1 and elastic-data-2) as well:
service elasticsearch start
You can send a curl request to check the health of the cluster, the active nodes, and the shards (-k skips certificate verification because our HTTP certificate is self-signed):
curl -XGET 'https://localhost:9200/_cluster/health?pretty' -k
Now generate passwords for the built-in users:
cd /usr/share/elasticsearch
./bin/elasticsearch-setup-passwords auto
The command prints passwords for the built-in users (elastic, kibana, and others). Take a screenshot or save them somewhere safe; do not lose them. You can then send curl requests with your username & password.
curl -u username:password -XGET 'https://localhost:9200/_cluster/health?pretty' -k
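For a healthy three-node cluster, the response should look roughly like this (abridged; exact values depend on your indices):
{
  "cluster_name" : "my-application",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  ...
}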
Elasticsearch is live!
- Youtube Video
- Fleet manager
- Kibana
Thank you very much to my beloved wife for her support. @busetorunlarertan