Because pires has stopped maintaining the original repository, updates are provided in the SDU AWS ECR.

See the original repository.

docker-elasticsearch

Ready to use, lean and highly configurable Elasticsearch container image.

Original Docker Repository on Quay.io

Current software

  • Alpine Linux 3.9
  • OpenJDK JRE 8u191
  • Elasticsearch 6.6.0

Note: x-pack-ml module is forcibly disabled as it's not supported on Alpine Linux.

Run

Attention

Elasticsearch requires the vm.max_map_count kernel setting to be at least 262144; the examples below run the container with --privileged so that this setting can be applied.

Ready to use node for cluster elasticsearch-default:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0
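
If the node came up correctly, the root endpoint reports the node name, cluster name and version. A quick check, assuming the default HTTP settings and the container name used above:

# look up the container's IP on the default bridge network
ES_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' elasticsearch)

# the root endpoint returns node name, cluster name and version
curl http://$ES_IP:9200/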

Ready to use node for cluster myclustername:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
	-e CLUSTER_NAME=myclustername \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0

Ready to use node for cluster elasticsearch-default, with 8GB heap allocated to Elasticsearch:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
	-e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0

Ready to use node with plugins (x-pack and repository-gcs) pre-installed. Plugins that are already installed are ignored:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
	-e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
	-e ES_PLUGINS_INSTALL="repository-gcs,x-pack" \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0
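
To confirm the plugins were actually installed at startup, the cat plugins API can be queried once the node is up (container IP obtained as shown earlier):

curl "http://<container_ip>:9200/_cat/plugins?v"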

Master-only node for cluster elasticsearch-default:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
	-e NODE_DATA=false \
	-e HTTP_ENABLE=false \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0

Data-only node for cluster elasticsearch-default:

docker run --name elasticsearch \
        --detach \
        --privileged \
        --volume /path/to/data_folder:/data \
        -e NODE_MASTER=false \
        -e HTTP_ENABLE=false \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0

Data-only node for cluster elasticsearch-default with shard allocation awareness:

docker run --name elasticsearch \
        --detach \
        --privileged \
        --volume /path/to/data_folder:/data \
        --volume /etc/hostname:/dockerhost \
        -e NODE_MASTER=false \
        -e HTTP_ENABLE=false \
        -e SHARD_ALLOCATION_AWARENESS=dockerhostname \
        -e SHARD_ALLOCATION_AWARENESS_ATTR="/dockerhost" \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0
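
To verify that the awareness attribute was picked up, query the cat nodeattrs API; since this data-only node has HTTP_ENABLE=false, <container_ip> must point at a node with HTTP enabled:

curl "http://<container_ip>:9200/_cat/nodeattrs?v"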

Client-only node for cluster elasticsearch-default:

docker run --name elasticsearch \
	--detach \
	--privileged \
	--volume /path/to/data_folder:/data \
	-e NODE_MASTER=false \
	-e NODE_DATA=false \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0
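
To check which role each node ended up with (m = master-eligible, d = data, i = ingest, - = coordinating only), the cat nodes API can be used against any HTTP-enabled node; this covers the master-only, data-only and client-only examples above:

curl "http://<container_ip>:9200/_cat/nodes?v&h=name,node.role,master"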

I also make available special images and instructions for AWS EC2 and Kubernetes.

Environment variables

This image can be configured through environment variables, which can be set, for example, on a Kubernetes Deployment.
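
A minimal sketch of setting the same variables on a Kubernetes Deployment with kubectl, assuming a Deployment named elasticsearch (a placeholder):

# set or update environment variables on an existing Deployment
kubectl set env deployment/elasticsearch CLUSTER_NAME=myclustername NODE_DATA=false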

Backup

Mount a shared folder (for example via NFS) at /backup and make sure the elasticsearch user has write access, then set the REPO_LOCATIONS environment variable to "/backup" when starting the node (see the sketch below).
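
A minimal sketch of such a node; the host path /mnt/nfs/backup is a placeholder for the shared folder:

docker run --name elasticsearch \
        --detach \
        --privileged \
        --volume /path/to/data_folder:/data \
        --volume /mnt/nfs/backup:/backup \
        -e REPO_LOCATIONS="/backup" \
        117533191630.dkr.ecr.eu-west-1.amazonaws.com/upstream-fork/docker-elasticsearch:6.6.0

With the node running, create a backup repository: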

backup_repository.json:

{
  "type": "fs",
  "settings": {
    "location": "/backup",
    "compress": true
  }
}

curl -XPOST -H "Content-Type: application/json" http://<container_ip>:9200/_snapshot/nas_repository -d @backup_repository.json

Now, you can take snapshots using:

curl -f -XPUT "http://<container_ip>:9200/_snapshot/nas_repository/snapshot_`date --utc +%Y_%m_%dt%H_%M`?wait_for_completion=true"
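
Snapshots stored in the repository can be listed, and later restored, through the standard snapshot API, for example:

# list all snapshots in the repository
curl "http://<container_ip>:9200/_snapshot/nas_repository/_all?pretty"

# restore a specific snapshot (replace <snapshot_name> with one returned above)
curl -XPOST "http://<container_ip>:9200/_snapshot/nas_repository/<snapshot_name>/_restore"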