Included in the repo is Terraform code (version 0.11.11) for launching the application in AWS and Google Kubernetes Engine. Provision AWS, GKE, or both, and then you can watch Habitat update across cloud deployments.
Because Terraform is updated so often, it's recommended to use tfswitch; this helps avoid problems with version incompatibilities. The Terraform installation guide is here, and the tfswitch tool is available here.
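If you'd rather pin the expected version in code than remember it, one lightweight option (a workflow suggestion, not something this repo necessarily includes) is a `required_version` constraint, which Terraform enforces on its own and which recent versions of tfswitch can also read:

```hcl
# versions.tf (hypothetical file) — Terraform refuses to run with a mismatched
# binary, and tfswitch can select the matching release from this constraint.
terraform {
  required_version = "= 0.11.11"
}
```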
```
cd terraform/chef-automate/(aws|azure|gcp)
cp tfvar.example terraform.tfvars
$EDITOR terraform.tfvars
terraform apply
```
Note: To deploy Chef Automate with EAS Beta you must deploy in AWS (see this issue if you want to add it to Azure) and set the `disable_event_tls` variable to `"true"` in your `terraform.tfvars`:
```hcl
disable_event_tls = "true"
```
Once your Automate instance is up, it will output credentials to log in and an Automate token:
```
Outputs:

a2_admin = admin
a2_admin_password = <password>
a2_token = <token>
a2_url = https://scottford-automate.chef-demo.com
chef_automate_public_ip = 34.222.124.23
chef_automate_server_public_r53_dns = scottford-automate.chef-demo.com
```
You will need to have an AWS account already created.
- `cd terraform/aws`
- `cp tfvars.example terraform.tfvars`
- edit `terraform.tfvars` with your own values and add in the `automate_url` and `automate_token` from the previous step (see the sketch below)
- `terraform apply`
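As a rough sketch of that `terraform.tfvars` edit: only `automate_url` and `automate_token` are named in these instructions, and they presumably correspond to the `a2_url` and `a2_token` outputs from the Chef Automate run; any other variables should come from `tfvars.example`.

```hcl
# terraform/aws/terraform.tfvars — sketch only; keep whatever other variables
# tfvars.example defines for your environment.
automate_url   = "https://scottford-automate.chef-demo.com" # a2_url output from the Automate run
automate_token = "<token>"                                  # a2_token output from the Automate run
```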
Note: To deploy the national parks application with EAS Beta you must follow these additional instructions:

- Follow the instructions for Chef Automate setup with EAS Beta above
- Enable the event stream in your `terraform.tfvars` as follows:

  ```hcl
  event-stream-enabled = "true"
  ```

- Ensure your `terraform.tfvars` file has values (from your chef-automate Terraform output) set for `automate_ip` and `automate_token` (see the combined sketch after this list)
- Set the Habitat Supervisors to version 0.83.0-dev by setting the `hab-sup-version` variable in your `terraform.tfvars` as follows:

  ```hcl
  hab-sup-version = "core/hab-sup/0.83.0-dev -c unstable"
  ```

- When you log into the Automate UI, type 'beta' and turn ON the "EAS Application" feature. When you refresh the page, a new Applications tab will appear.
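Taken together, the EAS-related lines in `terraform.tfvars` end up looking roughly like this (placeholder values only; the variable names are the ones listed above):

```hcl
# EAS Beta additions to terraform.tfvars — placeholder values only
event-stream-enabled = "true"
automate_ip          = "34.222.124.23"                       # presumably the chef_automate_public_ip output
automate_token       = "<token>"                             # the a2_token output
hab-sup-version      = "core/hab-sup/0.83.0-dev -c unstable"
```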
Once the provisioning finishes, you will see output with the various public IP addresses:
```
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

haproxy_public_ip = 34.216.185.16
mongodb_public_ip = 54.185.74.152
national_parks_public_ip = 34.220.209.230
permanent_peer_public_ip = 34.221.251.189
```
You will be able to access either `http://<haproxy_public_ip>:8085/national-parks` or `http://<haproxy_public_ip>:8000/haproxy-stats`.
You will need to have an Azure account already created.
- `cd terraform/azure`
- `terraform init`
- `az login`
- `cp tfvars.example terraform.tfvars`
- edit `terraform.tfvars` with your own values and add in the `automate_url` and `automate_token` from the previous step
- `terraform apply`
Once provisioning finishes, you will see the output with the various public IP addresses:
```
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Outputs:

haproxy-public-ip = 40.76.29.195
instance_ips = [
    40.76.29.123
]
mongodb-public-ip = 40.76.17.2
permanent-peer-public-ip = 40.76.31.133
```
Like in the AWS example, you will be able to access either `http://<haproxy_public_ip>:8085/national-parks` or `http://<haproxy_public_ip>:8000/haproxy-stats`.
Both the AWS and Azure deployments support scaling of the web front end instances to demonstrate the concept of 'choreography' vs 'orchestration' with Habitat. The choreography comes from the idea that when the front end instances scale out, the Supervisor for the HAProxy instance automatically takes care of adding the new members to the pool and begins balancing traffic correctly across all instances.
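As a rough sketch of the mechanism (resource and variable names below are assumptions, not copied from this repo): the web tier is driven by a `count` variable, so scaling is a one-line change in `terraform.tfvars`, and the HAProxy Supervisor rebuilds its backend pool from the Habitat service group membership rather than from anything Terraform tells it.

```hcl
# Hypothetical shape of the front-end resource — names are illustrative only.
resource "aws_instance" "national_parks" {
  count         = "${var.count}"   # scale the web tier by changing this one variable
  ami           = "${var.ami_id}"
  instance_type = "t2.medium"

  # user_data (not shown) starts a Habitat Supervisor, peers it with the ring via the
  # permanent peer, and loads the national-parks service; the HAProxy instance binds to
  # that service group and rebalances as members come and go.
}
```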
To see it in action:

- In your `terraform.tfvars`, add a line for `count = 3`
- Run `terraform apply`
- Once provisioning finishes, go to `http://<haproxy-public-ip>:8000/haproxy-stats` to see the new instances in the pool