- `libvirt`
- `terraform`
- the `terraform-provider-libvirt` plugin
- some other tools like `wget`, `sshpass`, ...
The deployment can be tuned using some Terraform variables. All of them
are defined at the top of the `terraform.tf` file. Each variable also has a
description field that explains its purpose.
These are the most important ones:
- `libvirt_uri`: by default this points to localhost, but it is possible to perform the deployment on a different libvirt machine. More on that later.
- `img_src`: the URL of a directory where the CaaSP image used to create the whole cluster can be found. Note: the latest version of the image will be automatically obtained unless the `refresh` variable is set to `false`.
- `nodes_count`: number of non-admin nodes to be created.
The easiest way to set these values is by creating a `terraform.tfvars` file. The
project comes with an example file named `terraform.tfvars.example`.
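For illustration, a minimal `terraform.tfvars` could look like the following sketch; the values are made up for this example, and `terraform.tfvars.example` remains the authoritative template:

```hcl
# Illustrative values only -- see terraform.tfvars.example for the real template.
libvirt_uri = "qemu:///system"
nodes_count = 2
password    = "linux"
```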
The project comes with two cloud-init files: one for the admin node, the other for the generic nodes.
Note well: the system is going to have `root` and `qa` users with password
`linux` (specified in the Terraform variable `password`).
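For illustration, the users described above could be produced by a cloud-init fragment along these lines; this is a hypothetical sketch, not the project's actual cloud-init files:

```yaml
# Hypothetical sketch -- the project's real cloud-init files may differ.
users:
  - name: qa
    plain_text_passwd: linux   # the value of the Terraform 'password' variable
    lock_passwd: false
chpasswd:
  list: |
    root:linux
  expire: false
```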
The cluster is made of 1 admin node plus the number of generic nodes chosen by the user.
All of them have a cloud-init ISO attached to inject the cloud-init configuration.
All the nodes are attached to libvirt's `default` network. This network
satisfies CaaSP's basic network requirements: DHCP and DNS are
enabled, but the DNS server is not able to resolve the names of the nodes inside
of that network.
These are some examples of what you can do with the `caasp` script:
- Create a cluster in the tupperware environment, with the "fix_deployment" branch of Salt, running the orchestration and then creating a snapshot of the VMs:

        ./caasp --env tupperware \
                --salt-src-branch fix_deployment \
                'cluster create ; salt wait ; orch boot ; cluster snapshot'
- Run the `tests/orchestration-simple.scene` script, but starting after the `post-create` stage:

        ./caasp --script tests/orchestration-simple.scene \
                --script-begin post-create
- Dump the `/etc/hosts` of any machine that matches `node-1`:

        ./caasp @node-1 cat /etc/hosts
- Create a cluster with 6 nodes, create a snapshot,
  bootstrap the cluster, create a new snapshot and then
  remove `node-2`:

        ./caasp \
            'cluster tfvar num_nodes=6 ; cluster create ; cluster snapshot ; orch boot ; cluster snapshot ; orch rm node-2'
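As the examples show, several actions can be packed into a single quoted string separated by `;`. A rough sketch of how such a string can be split into individual actions in plain shell follows; this is only an illustration of the idea, not the actual implementation inside the `caasp` script:

```shell
# Illustrative only: split a quoted action string on ';' as the
# examples above suggest. Not the actual implementation of ./caasp.
actions='cluster tfvar num_nodes=6 ; cluster create ; orch boot'

echo "$actions" | tr ';' '\n' | while read -r action; do
    # 'read -r' also trims the whitespace surrounding each action
    echo "next action: $action"
done
```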
- Run `./caasp cluster create` to create a default cluster on localhost.
- Then run the bootstrap orchestration with `./caasp orch boot`.
- Finally, you can get a valid kubeconfig file with `./caasp orch kubeconfig`.
You can get a command line loop by just running `./caasp`; inside the loop you can get some help on commands with `help` (or just `?`).
You can also:

- ssh to any machine with `./caasp ssh <name>` (or the shortcut `@<name>` in the loop)
- run local commands with `./caasp shell <cmd>` (or the shortcut `! <cmd>` in the loop)
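The loop's shortcuts can be sketched as a minimal read/dispatch loop in plain shell. This is a hypothetical illustration of the `@<name>` and `! <cmd>` conventions, not the actual `caasp` code; the `ssh` call is replaced by an `echo` so the sketch runs anywhere:

```shell
# Hypothetical sketch of the kind of command loop ./caasp provides.
caasp_loop() {
    while read -r line; do
        case "$line" in
            '@'*)                    # '@<name> <cmd>' -> run <cmd> on that node
                node="${line%% *}"; cmd="${line#* }"
                echo "ssh ${node#@} -- $cmd" ;;  # real tool would ssh; we just print
            '!'*)                    # '! <cmd>' -> run <cmd> locally
                eval "${line#\!}" ;;
            'help'|'?')
                echo "commands: @<node> <cmd> | ! <cmd> | help" ;;
            *)
                echo "unknown command: $line" ;;
        esac
    done
}

# Demo: feed the loop one remote and one local command
caasp_loop <<'EOF'
@node-1 cat /etc/hosts
! echo hello from localhost
EOF
```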
You can enable the development mode by running `./caasp devel enable`. This will:

- link the `terraform/profile-devel.tf` file into the top directory, adding features like copying Salt code, assigning roles, etc. Check out the contents of this Terraform file. If it does not suit your needs, you can create your own development profile and use it with `./caasp --tf-devel-profile=<MY_PROFILE> devel enable`.
- prior to any orchestration, sync the local Salt code directory with the directory in the Admin Node.