This repo hosts the `jm1.cloudy` Ansible Collection.
It provides a variety of Ansible content, such as inventories, playbooks and roles, that demonstrates how to set up a cloud infrastructure using libvirt and/or OpenStack:
- Hosts `lvrt-lcl-session-srv-0*` showcase how to provision libvirt domains (QEMU/KVM based virtual machines) with cloud-init and CentOS Linux 7, CentOS Stream 8, CentOS Stream 9, Debian 10 (Buster), Debian 11 (Bullseye), Debian 12 (Bookworm), Debian 13 (Trixie), Ubuntu 18.04 LTS (Bionic Beaver), Ubuntu 20.04 LTS (Focal Fossa), Ubuntu 22.04 LTS (Jammy Jellyfish) and Ubuntu 24.04 LTS (Noble Numbat).
- Hosts `lvrt-lcl-session-srv-1*` showcase automatic system installation with PXE network boot on BIOS and UEFI systems for
  - CentOS Stream 8 and CentOS Stream 9 with Kickstart,
  - Debian 11 (Bullseye), Debian 12 (Bookworm) and Debian 13 (Trixie) with Preseed, and
  - Ubuntu 20.04 LTS (Focal Fossa), Ubuntu 22.04 LTS (Jammy Jellyfish) and Ubuntu 24.04 LTS (Noble Numbat) with Autoinstall.
- Host `lvrt-lcl-session-srv-200-*` showcases how to "quickly bring up an OpenStack environment based on the latest versions of everything from git master" with DevStack.
- Host `lvrt-lcl-session-srv-210-*` showcases how to deploy TripleO standalone on CentOS Stream 8.
- Host `lvrt-lcl-session-srv-3*` showcases how to fingerprint and report hardware specifications of systems which can be booted via PXE. Hosts `lvrt-lcl-session-srv-310-*` and `lvrt-lcl-session-srv-311-*` demonstrate how a poweron-fingerprint-report-poweroff cycle works in practice.
- Hosts `lvrt-lcl-session-srv-4*` showcase how to deploy an installer-provisioned OKD cluster on bare-metal servers and run OpenShift's conformance test suite. This setup uses libvirt domains (QEMU/KVM based virtual machines) to simulate bare-metal servers and auxiliary resources. sushy-emulator provides a virtual Redfish BMC to power cycle servers and mount virtual media for hardware inspection and provisioning. Beware of high resource utilization, e.g. this cluster requires >96GB of RAM.
- Hosts `lvrt-lcl-session-srv-5*` showcase how to deploy an OKD HA cluster on bare-metal servers with the agent-based installer and run OpenShift's conformance test suite. This setup uses libvirt domains (QEMU/KVM based virtual machines) to simulate bare-metal servers and auxiliary resources. sushy-emulator provides a virtual Redfish BMC to power cycle servers and mount virtual media for hardware inspection and provisioning. Beware of high resource utilization, e.g. this cluster requires >96GB of RAM.
- Hosts `lvrt-lcl-session-srv-6*` showcase how to deploy an installer-provisioned OKD cluster on bare-metal servers and run OpenShift's conformance test suite. This setup is similar to that of hosts `lvrt-lcl-session-srv-4*` except for an additional provisioning network to PXE boot servers and VirtualBMC, a virtual IPMI BMC, to power cycle servers. Beware of high resource utilization, e.g. this cluster requires >96GB of RAM.
- Hosts `lvrt-lcl-session-srv-7*` showcase how to deploy a single-node OKD (SNO) cluster on a bare-metal server and run OpenShift's conformance test suite. This setup uses libvirt domains (QEMU/KVM based virtual machines) to simulate a bare-metal server and auxiliary resources. sushy-emulator provides a virtual Redfish BMC to power cycle the server and mount virtual media for provisioning. Beware that this setup requires 32GB of RAM.
- Hosts `lvrt-lcl-session-srv-8*` showcase how to deploy an OpenStack cloud with Kolla Ansible. Beware of high resource utilization, e.g. this cluster requires >=96GB of RAM.
This collection has been developed and tested for compatibility with:
- Debian 10 (Buster)
- Debian 11 (Bullseye)
- Debian 12 (Bookworm)
- Debian 13 (Trixie)
- Fedora
- Red Hat Enterprise Linux (RHEL) 7 / CentOS Linux 7
- Red Hat Enterprise Linux (RHEL) 8 / CentOS Stream 8
- Red Hat Enterprise Linux (RHEL) 9 / CentOS Stream 9
- Ubuntu 18.04 LTS (Bionic Beaver)
- Ubuntu 20.04 LTS (Focal Fossa)
- Ubuntu 22.04 LTS (Jammy Jellyfish)
- Ubuntu 24.04 LTS (Noble Numbat)
Goals for this collection are:
- KISS. No magic involved. Do not hide complexity where it is unavoidable. Follow Ansible's principles: hosts, groups and group memberships are defined in `hosts.yml`, host- and group-specific configuration is stored in `host_vars` and `group_vars`, tasks and related content are grouped in roles, and playbooks such as `site.yml` assign roles to hosts and groups of hosts.
- Generic and reusable code. Code is adaptable and extendable for various cloud use cases with minimal changes: most roles offer a choice of several modules to customize role behaviour, e.g. role `cloudinit` allows Ansible modules and action plugins such as `lineinfile` and `copy` to be used in `host_vars` and `group_vars` to edit lines in files, copy directories etc.
- Users are experts. Users know what to do (once you give them the options). Users have to understand the code to operate systems reliably and securely. Users have to understand the code to debug it, due to leaky abstractions.
NOTE: This section lists a minimal set of commands to spin up the Ansible hosts, i.e. virtual machines, from the example inventory. For a complete guide on how to use this collection and how to build your own cloud infrastructure based on this collection, read the following sections.
Install `git` and Podman on a bare-metal system with Debian 11 (Bullseye), CentOS Stream 8, Ubuntu 22.04 LTS (Jammy Jellyfish) or newer. Ensure the system has KVM nested virtualization enabled, has enough storage to store disk images for the virtual machines and is not connected to the ip networks `192.168.157.0/24` and `192.168.158.0/24`. Then run:
git clone https://github.com/JM1/ansible-collection-jm1-cloudy.git
cd ansible-collection-jm1-cloudy/
cp -i ansible.cfg.example ansible.cfg
cd containers/
sudo DEBUG=yes DEBUG_SHELL=yes ./podman-compose.sh up
The last command will create various Podman networks, volumes and containers, and attach to a container named `cloudy`. Inside this container a Bash shell will be spawned for user `cloudy`. This user `cloudy` will execute the libvirt domains (QEMU/KVM based virtual machines) from the example inventory. For example, to launch a virtual machine with Debian 12 (Bookworm), run the following command from `cloudy`'s Bash shell:
ansible-playbook playbooks/site.yml --limit lvrt-lcl-session-srv-022-debian12
Once Ansible is done, launch another shell at the container host (the bare-metal system) and connect to the virtual machine with:
sudo podman exec -ti -u cloudy cloudy ssh ansible@192.168.158.21
The ip address `192.168.158.21` is assigned to network interface `eth0` of the virtual machine `lvrt-lcl-session-srv-022-debian12.home.arpa` and can be retrieved from the Ansible inventory, i.e. from file `inventory/host_vars/lvrt-lcl-session-srv-022-debian12.yml`.
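For instance, the address can be looked up without opening an editor; a minimal sketch, assuming the address is spelled out verbatim in that `host_vars` file:

```sh
# Look up the ip address of lvrt-lcl-session-srv-022-debian12 in the example inventory
grep -n '192\.168\.158\.21' inventory/host_vars/lvrt-lcl-session-srv-022-debian12.yml
```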
Back at `cloudy`'s Bash shell inside the container, remove the virtual machine for Ansible host `lvrt-lcl-session-srv-022-debian12` with:
# Note the .home.arpa suffix
virsh destroy lvrt-lcl-session-srv-022-debian12.home.arpa
virsh undefine --remove-all-storage --nvram lvrt-lcl-session-srv-022-debian12.home.arpa
A few Ansible hosts from the example inventory have to be launched in a given order. These dependencies are codified with Ansible groups `build_level0`, `build_level1` et cetera in `inventory/hosts.yml`. Each Ansible host is a member of exactly one `build_level*` group. For example, when deploying an installer-provisioned OKD cluster, the Ansible host `lvrt-lcl-session-srv-430-okd-ipi-provisioner` has to be provisioned after all other `lvrt-lcl-session-srv-4*` hosts have been installed successfully:
ansible-playbook playbooks/site.yml --limit \
lvrt-lcl-session-srv-400-okd-ipi-router,\
lvrt-lcl-session-srv-401-okd-ipi-bmc,\
lvrt-lcl-session-srv-410-okd-ipi-cp0,\
lvrt-lcl-session-srv-411-okd-ipi-cp1,\
lvrt-lcl-session-srv-412-okd-ipi-cp2,\
lvrt-lcl-session-srv-420-okd-ipi-w0,\
lvrt-lcl-session-srv-421-okd-ipi-w1
ansible-playbook playbooks/site.yml --limit lvrt-lcl-session-srv-430-okd-ipi-provisioner
Removal does not require any particular order:
for vm in \
lvrt-lcl-session-srv-400-okd-ipi-router.home.arpa \
lvrt-lcl-session-srv-401-okd-ipi-bmc.home.arpa \
lvrt-lcl-session-srv-410-okd-ipi-cp0.home.arpa \
lvrt-lcl-session-srv-411-okd-ipi-cp1.home.arpa \
lvrt-lcl-session-srv-412-okd-ipi-cp2.home.arpa \
lvrt-lcl-session-srv-420-okd-ipi-w0.home.arpa \
lvrt-lcl-session-srv-421-okd-ipi-w1.home.arpa \
lvrt-lcl-session-srv-430-okd-ipi-provisioner.home.arpa
do
virsh destroy "$vm"
virsh undefine --remove-all-storage --nvram "$vm"
done
Some setups such as the OKD clusters built with Ansible hosts `lvrt-lcl-session-srv-{4,5,6}*` use internal DHCP and DNS services which are not accessible from the container host. For example, to access the OKD clusters, connect from `cloudy`'s Bash shell to the virtual machine which initiates the cluster installation, i.e. Ansible host `lvrt-lcl-session-srv-430-okd-ipi-provisioner`:
ssh ansible@192.168.158.28
From `ansible`'s Bash shell the OKD cluster can be accessed with:
export KUBECONFIG=/home/ansible/clusterconfigs/auth/kubeconfig
oc get nodes
oc debug node/cp0
Exit `cloudy`'s Bash shell to stop the container.
NOTE: Any virtual machines still running inside the container will be killed!
Finally, remove all Podman containers, networks and volumes related to this collection with:
sudo DEBUG=yes ./podman-compose.sh down
Click on the name of an inventory, module, playbook or role to view that content's documentation:
- Inventories:
- Playbooks:
- Roles:
- apparmor
- chrony
- cloudinit
- debconf
- devstack
- dhcpd
- dnsmasq
- files
- groups
- grub
- httpd
- initrd
- ipmi
- iptables
- kolla_ansible
- kubernetes_resources
- libvirt_domain
- libvirt_domain_state
- libvirt_images
- libvirt_networks
- libvirt_pools
- libvirt_volumes
- meta_packages
- netplan
- networkmanager
- openshift_client
- openshift_abi
- openshift_ipi
- openshift_sno
- openshift_tests
- openstack_server
- packages
- podman
- pxe_hwfp
- pxe_installer
- selinux
- services
- ssh_authorized_keys
- sshd
- storage
- sudoers
- sysctl
- tftpd
- tripleo_standalone
- users
To start off with this collection, first derive your own cloud infrastructure from its content, such as inventories, playbooks and roles, and prepare your host environment.
To deploy this customized cloud infrastructure, you can either deploy a container with Docker Compose, deploy a container with Podman or utilize a bare-metal system. Both container-based approaches will start libvirt, bridged networks and all QEMU/KVM based virtual machines in a single container. This is easier to get started with, is mostly automated, requires fewer changes to the Ansible controller system and is less likely to break your host system.
To build your own cloud infrastructure based on this collection, create a new git repository and copy both directories `inventory/` and `playbooks/` as well as file `ansible.cfg.example` to it. For a containerized setup with Docker or Podman also copy directory `containers/`. Instead of copying `containers/` and `playbooks/` you could also add this collection as a git submodule and refer directly to those directories inside the git submodule. Git submodules can be difficult to use but allow you to pin your code to a specific commit of this collection, making it more resilient against breaking changes. The following example shows how to derive a new project utilizing containers and git submodules:
git init new-project
cd new-project
git submodule add https://github.com/JM1/ansible-collection-jm1-cloudy.git vendor/cloudy
cp -ri vendor/cloudy/inventory/ .
ln -s vendor/cloudy/playbooks
cp -i vendor/cloudy/ansible.cfg.example ansible.cfg
git add inventory/ playbooks ansible.cfg # git submodule has been added already
git commit -m "Example inventory"
Host `lvrt-lcl-system` defines a libvirt environment to be set up on a bare-metal system or inside a container. For example, this includes required packages for libvirt and QEMU, libvirt virtual networks such as NAT based networks as well as isolated networks, and a default libvirt storage pool.
Host `lvrt-lcl-session` defines the libvirt session of your local user on a bare-metal system or of user `cloudy` inside the container. For example, this includes a default libvirt storage pool and OS images for host provisioning.
All remaining Ansible hosts inside the example inventory define libvirt domains (QEMU/KVM based virtual machines) which require both hosts `lvrt-lcl-system` and `lvrt-lcl-session` to be provisioned successfully.
Dig into the inventory and playbooks and customize them as needed.
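To get an overview of all hosts and groups before changing anything, Ansible's own inventory CLI can help; a minimal sketch, assuming `ansible.cfg` already points at your `inventory/` directory:

```sh
# Show the group/host tree of the inventory
ansible-inventory --graph

# Dump all variables of a single host, e.g. lvrt-lcl-system
ansible-inventory --host lvrt-lcl-system
```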
Edit `ansible.cfg` to match your environment, i.e. set the `inventory` path where Ansible will find your inventory:
cp -nv ansible.cfg.example ansible.cfg
editor ansible.cfg
Ensure you have valid SSH public keys for SSH logins, e.g. an RSA public key at `$HOME/.ssh/id_rsa.pub`. If no key exists at `$HOME/.ssh/id_{dsa,ecdsa,ed25519,rsa}.pub`, generate a new key pair with `ssh-keygen` or edit Ansible variable `ssh_authorized_keys` in `inventory/group_vars/all.yml` to include your SSH public key:
ssh_authorized_keys:
- comment: John Wayne (Megacorp)
key: ssh-rsa ABC...XYZ user@host
state: present
user: '{{ ansible_user }}'
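For example, a new Ed25519 key pair could be generated and its public key pasted into `ssh_authorized_keys`; a minimal sketch, with the comment and file path as placeholders:

```sh
# Generate a new Ed25519 SSH key pair
ssh-keygen -t ed25519 -C "John Wayne (Megacorp)" -f ~/.ssh/id_ed25519

# Print the public key to paste into the ssh_authorized_keys variable
cat ~/.ssh/id_ed25519.pub
```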
To run playbooks and roles of this collection with Docker Compose,
- Docker or Podman and Docker Compose have to be installed,
- KVM nested virtualization has to be enabled,
- a Docker bridge network has to be created and
- a container has to be started with Docker Compose.
Ensure Docker or Podman is installed on your system.
OS | Install Instructions |
---|---|
Debian 10 (Buster), 11 (Bullseye), 12 (Bookworm), 13 (Trixie) | apt install docker.io docker-compose or follow Docker's official install guide for Debian and their install guide for Docker Compose |
Fedora | Follow Docker's official install guide for Fedora and their install guide for Docker Compose or use Podman with Docker Compose |
Red Hat Enterprise Linux (RHEL) 7, 8, 9 / CentOS Linux 7, CentOS Stream 8, 9 | Follow Docker's official install guide for CentOS and RHEL and their install guide for Docker Compose or use Podman with Docker Compose |
Ubuntu 18.04 LTS (Bionic Beaver), 20.04 LTS (Focal Fossa), 22.04 LTS (Jammy Jellyfish), 24.04 LTS (Noble Numbat) | apt install docker.io docker-compose or follow Docker's official install guide for Ubuntu and their install guide for Docker Compose |
Some libvirt domains (QEMU/KVM based virtual machines) like the DevStack and TripleO standalone hosts require KVM nested virtualization to be enabled on the container host, the system running Docker or Podman.
To enable KVM nested virtualization for Intel and AMD CPUs, run Ansible role `jm1.kvm_nested_virtualization` on your container host or execute the commands shown in its `README.md` manually.
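Whether nested virtualization is already enabled can be checked before running the role; a minimal sketch for Intel and AMD CPUs:

```sh
# Intel CPUs: prints Y or 1 if nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null

# AMD CPUs: prints Y or 1 if nested virtualization is enabled
cat /sys/module/kvm_amd/parameters/nested 2>/dev/null
```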
To access libvirt domains (QEMU/KVM based virtual machines) running inside containers from the container host, a Docker bridge network must be created and routes for the ip networks used inside containers must be published on the container host.
The network configuration is twofold: first, a network bridge with ip routes will be set up, and afterwards a Docker bridge will be created. For example, on Debian network interfaces are configured with `ifupdown`. To define a network bridge `docker-cloudy` and enable connectivity with the routed ip networks `192.168.157.0/24` and `192.168.158.0/24` used inside containers, change `/etc/network/interfaces` to:
# Ref.:
# man interfaces
# man bridge-utils-interfaces
auto docker-cloudy
iface docker-cloudy inet manual
bridge_ports none
bridge_stp off
bridge_waitport 3
bridge_fd 0
bridge_maxwait 5
# publish routes of routed libvirt networks inside containers
post-up ip route add 192.168.157.0/24 dev docker-cloudy
post-up ip route add 192.168.158.0/24 dev docker-cloudy
pre-down ip route del 192.168.158.0/24 dev docker-cloudy || true
pre-down ip route del 192.168.157.0/24 dev docker-cloudy || true
iface docker-cloudy inet6 manual
To apply these changes, run `systemctl restart networking.service` or reboot your system.

This network bridge `docker-cloudy` has no physical network ports assigned to it, because connectivity is established with ip routing.
On systems using `systemd-networkd`, refer to Arch's Wiki or upstream's documentation. For distributions using NetworkManager, refer to GNOME's project page on NetworkManager, esp. its See Also section.
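As a rough `systemd-networkd` equivalent, the same bridge and routes could be declared with a `.netdev` and a `.network` unit; a minimal, untested sketch to be adapted to your setup:

```sh
# Define the bridge docker-cloudy for systemd-networkd
cat > /etc/systemd/network/docker-cloudy.netdev <<'EOF'
[NetDev]
Name=docker-cloudy
Kind=bridge
EOF

# Publish routes for the routed libvirt networks used inside containers
cat > /etc/systemd/network/docker-cloudy.network <<'EOF'
[Match]
Name=docker-cloudy

[Network]
ConfigureWithoutCarrier=yes

[Route]
Destination=192.168.157.0/24

[Route]
Destination=192.168.158.0/24
EOF

systemctl restart systemd-networkd.service
```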
The second step is to create a Docker network `cloudy` which containers will use to communicate with the outside:
docker network create --driver=bridge -o "com.docker.network.bridge.name=docker-cloudy" --subnet=192.168.150.0/24 --gateway=192.168.150.1 cloudy
If you do not intend to communicate from the container host with libvirt domains running inside containers, you can skip the instructions about network bridge `docker-cloudy` above and only create Docker bridge `cloudy` with:
docker network create --subnet=192.168.150.0/24 --gateway=192.168.150.1 cloudy
Open a `docker-compose.yml.*` file in your copy of the `containers/` directory which matches the distribution of the container host. The following example assumes that the container host is running Debian 11 (Bullseye). The matching Docker Compose file is named `docker-compose.yml.debian_11`. To start it, run these commands on the container host:
# Change to Docker Compose directory inside your project directory
# containing your Ansible inventory, playbooks and ansible.cfg
cd containers/
# Start container in the background
DEBUG=yes DEBUG_SHELL=yes docker-compose -f docker-compose.yml.debian_11 -p cloudy up -d
# Monitor container activity
docker-compose -f docker-compose.yml.debian_11 -p cloudy logs --follow
Inside the container, script `containers/entrypoint.sh` will execute playbook `playbooks/site.yml` for hosts `lvrt-lcl-system` and `lvrt-lcl-session`, as defined in the inventory. When container execution fails, try to start the container again.
Once the Ansible playbook runs for both Ansible hosts `lvrt-lcl-system` and `lvrt-lcl-session` have completed successfully, attach to the Bash shell of user `cloudy` running inside the container:
# Attach to Bash shell for user cloudy who runs the libvirt domains (QEMU/KVM based virtual machines)
docker attach cloudy
Inside the container, continue with running playbook `playbooks/site.yml` for all remaining hosts from your copy of the `inventory/` directory, which is available in `/home/cloudy/project`.
To connect to the libvirt daemon running inside the container from the container host, run the following command at your container host:
# List all libvirt domains running inside the container
virsh --connect 'qemu+tcp://127.0.0.1:16509/session' list
The same connection URI `qemu+tcp://127.0.0.1:16509/session` can also be used with virt-manager at the container host.
To view a virtual machine's graphical console, its Spice or VNC server configuration has to be changed: its listen type has to be changed to `address`, its address has to be changed to `0.0.0.0` (aka `All interfaces`) or `192.168.150.2`, and its port has to be changed to a number between `5900` and `5999`. Then view its graphical console on your container host with:
# View a libvirt domain's graphical console with vnc server at port 5900 running inside the container
remote-viewer vnc://127.0.0.1:5900
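One way to apply those listen settings from the command line is `virt-xml` from the virt-install package; a hedged sketch, with the domain name and port as placeholders and the change taking effect on the next domain start:

```sh
# Hypothetical example: switch a domain's graphics device to listen on all interfaces at port 5900
virt-xml --connect 'qemu+tcp://127.0.0.1:16509/session' \
  lvrt-lcl-session-srv-022-debian12.home.arpa \
  --edit --graphics listen=0.0.0.0,port=5900
```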
To stop and remove the container(s), exit the container's Bash shells and run on your container host:
# Stop and remove container(s)
docker-compose -f docker-compose.yml.debian_11 -p cloudy down
Both the SSH credentials and the libvirt storage volumes of the libvirt domains (QEMU/KVM based virtual machines) have been persisted in Docker volumes which will not be deleted when shutting down the Docker container. To list and wipe those Docker volumes, run:
# List all Docker volumes
docker volume ls
# Remove Docker volumes
docker volume rm cloudy_images cloudy_ssh
To run playbooks and roles of this collection with Podman,
- Podman has to be installed,
- KVM nested virtualization has to be enabled,
- a container has to be started with Podman.
Ensure Podman is installed on your system.
OS | Install Instructions |
---|---|
Debian 11 (Bullseye), 12 (Bookworm), 13 (Trixie) | apt install podman |
Fedora | dnf install podman |
Red Hat Enterprise Linux (RHEL) 7, 8, 9 / CentOS Linux 7, CentOS Stream 8, 9 | yum install podman |
Ubuntu 22.04 LTS (Jammy Jellyfish), 24.04 LTS (Noble Numbat) | apt install podman |
`podman-compose.sh` helps with managing Podman storage volumes, establishing network connectivity between host and container, and running our Ansible code and virtual machines inside containers. It offers command line arguments similar to `docker-compose`; run `containers/podman-compose.sh --help` to find out more about its usage.
`podman-compose.sh` will create a bridged Podman network `cloudy` which libvirt domains (QEMU/KVM based virtual machines) will use to connect to the internet. The bridge has no physical network ports attached, because connectivity is established with ip routing. The script will also configure ip routes for networks `192.168.157.0/24` and `192.168.158.0/24` at the container host, which allows access to the libvirt domains running inside the containers from the host.
NOTE: Ensure both ip networks `192.168.157.0/24` and `192.168.158.0/24` are not present at the container host before executing `podman-compose.sh`, else the script will fail.
The following example shows how to use `podman-compose.sh` at a container host running Debian 11 (Bullseye):
# Change to containers directory inside your project directory
# containing your Ansible inventory, playbooks and ansible.cfg
cd containers/
# Start Podman networks, volumes and containers in the background
sudo DEBUG=yes DEBUG_SHELL=yes ./podman-compose.sh up --distribution debian_11 --detach
# Monitor container activity
sudo podman logs --follow cloudy
Inside the container, script `containers/entrypoint.sh` will execute playbook `playbooks/site.yml` for hosts `lvrt-lcl-system` and `lvrt-lcl-session`, as defined in the inventory. When container execution fails, try to start the container again.
Once the Ansible playbook runs for both Ansible hosts `lvrt-lcl-system` and `lvrt-lcl-session` have completed successfully, attach to the Bash shell of user `cloudy` running inside the container:
# Attach to Bash shell for user cloudy who runs the libvirt domains (QEMU/KVM based virtual machines)
sudo podman attach cloudy
Inside the container, continue with running playbook `playbooks/site.yml` for all remaining hosts from your copy of the `inventory/` directory, which is available in `/home/cloudy/project`.
To connect to the libvirt daemon running inside the container from the container host, run the following command at your container host:
# List all libvirt domains running inside the container
virsh --connect 'qemu+tcp://127.0.0.1:16509/session' list
The same connection URI `qemu+tcp://127.0.0.1:16509/session` can also be used with virt-manager at the container host.
To view a virtual machine's graphical console, its Spice or VNC server configuration has to be changed: its listen type has to be changed to `address`, its address has to be changed to `0.0.0.0` (aka `All interfaces`) or `192.168.150.2`, and its port has to be changed to a number between `5900` and `5999`. Then view its graphical console on your container host with:
# View a libvirt domain's graphical console with vnc server at port 5900 running inside the container
remote-viewer vnc://127.0.0.1:5900
To stop the containers, exit the container's Bash shells and run on your container host:
# Stop containers
sudo DEBUG=yes ./podman-compose.sh stop
Both the SSH credentials and the libvirt storage volumes of the libvirt domains (QEMU/KVM based virtual machines) have been persisted in Podman volumes which will not be deleted when stopping the Podman container:
# List all Podman volumes
sudo podman volume ls
To remove all container(s), networks and wipe all volumes, run:
# Stop and remove containers, volumes and networks
sudo DEBUG=yes ./podman-compose.sh down
To use this collection on a bare-metal system,
- Ansible 2.9 or greater [^1] has to be installed, either with `pip` or using distribution-provided packages,
- necessary Ansible roles and collections have to be fetched,
- their requirements have to be satisfied,
- this collection has to be installed from Ansible Galaxy and
- the bare-metal system has to be configured with Ansible.
Ansible's Installation Guide provides instructions on how to install Ansible on several operating systems and with `pip`.
First, make sure that `pip` is available on your system.
OS | Install Instructions |
---|---|
Debian 10 (Buster), 11 (Bullseye), 12 (Bookworm), 13 (Trixie) | apt install python3 python3-pip |
Red Hat Enterprise Linux (RHEL) 7, 8, 9 / CentOS Linux 7, CentOS Stream 8, 9 | yum install python3 python3-pip |
Ubuntu 18.04 LTS (Bionic Beaver), 20.04 LTS (Focal Fossa), 22.04 LTS (Jammy Jellyfish), 24.04 LTS (Noble Numbat) | apt install python3 python3-pip |
Run `pip3 install --user --upgrade pip` to upgrade `pip` to the latest version, because an outdated `pip` version is the single most common cause of installation problems. Before proceeding, please follow the hints and instructions given in `pip-requirements.txt`, because some Python modules have additional prerequisites. Next, install Ansible and all required Python modules with `pip3 install --user --requirement pip-requirements.txt`.
You may want to use Python's `virtualenv` tool to create a self-contained Python environment for Ansible, instead of installing all Python packages to your home directory with `pip3 install --user`.
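A minimal sketch of such a virtualenv-based setup, run from the directory containing `pip-requirements.txt` (the path `~/.venvs/cloudy` is just an example):

```sh
# Create and activate a self-contained Python environment for Ansible
python3 -m venv ~/.venvs/cloudy
. ~/.venvs/cloudy/bin/activate

# Install Ansible and the required Python modules into the virtualenv
pip install --upgrade pip
pip install --requirement pip-requirements.txt
```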
NOTE: Using `pip` instead of OS package managers is preferred because distribution-provided packages are often outdated.

To install Ansible 2.9 or later using OS package managers instead, do:
OS | Install Instructions |
---|---|
Debian 10 (Buster) | Enable Backports. apt install ansible ansible-doc make |
Debian 11 (Bullseye), 12 (Bookworm), 13 (Trixie) | apt install ansible make |
Fedora | dnf install ansible make |
Red Hat Enterprise Linux (RHEL) 7 / CentOS Linux 7 | Enable EPEL. yum install ansible ansible-doc make |
Red Hat Enterprise Linux (RHEL) 8, 9 / CentOS Stream 8, 9 | Enable EPEL. yum install ansible make |
Ubuntu 18.04 LTS (Bionic Beaver), 20.04 LTS (Focal Fossa) | Enable Launchpad PPA Ansible by Ansible, Inc.. apt install ansible ansible-doc make |
Ubuntu 22.04 LTS (Jammy Jellyfish), 24.04 LTS (Noble Numbat) | apt install ansible make |
Some Ansible modules used in this collection require additional tools and Python libraries which have to be installed manually. Refer to `pip-requirements.txt` for a complete list. Use a package search to find matching packages for your distribution.
Content in this collection requires additional roles and collections, e.g. to collect operating system facts. You can fetch them from Ansible Galaxy using the provided `requirements.yml`:
ansible-galaxy collection install --requirements-file requirements.yml
ansible-galaxy role install --role-file requirements.yml
# or
make install-requirements
NOTE: Ansible collections such as `community.general` have dropped support for older Ansible releases such as Ansible 2.9 and 2.10, so when using older Ansible releases you will have to downgrade to older versions of these collections.
These collections require additional tools and libraries, e.g. to interact with package managers, libvirt and OpenStack. You can use the following roles to install necessary software packages:
sudo -s
ansible-playbook playbooks/setup.yml
# or
ansible-console localhost << EOF
gather_facts
include_role name=jm1.pkg.setup
# Ref.: https://github.com/JM1/ansible-collection-jm1-pkg/blob/master/roles/setup/README.md
include_role name=jm1.libvirt.setup
# Ref.: https://github.com/JM1/ansible-collection-jm1-libvirt/blob/master/roles/setup/README.md
include_role name=jm1.openstack.setup
# Ref.: https://github.com/JM1/ansible-collection-jm1-openstack/blob/master/roles/setup/README.md
EOF
The exact requirements for every module and role are listed in the corresponding documentation. See the module documentation for the minimal supported version of each module.
Before using the `jm1.cloudy` collection, you need to install it with the Ansible Galaxy CLI:
ansible-galaxy collection install jm1.cloudy
You can also include it in a `requirements.yml` file and install it via `ansible-galaxy collection install -r requirements.yml`, using the format:
---
collections:
- name: jm1.cloudy
version: 2024.11.2
To configure and run the libvirt domains (QEMU/KVM based virtual machines) defined in the example inventory, both Ansible hosts `lvrt-lcl-system` and `lvrt-lcl-session` have to be provisioned successfully first. Executing playbook `playbooks/site.yml` for hosts `lvrt-lcl-system` and `lvrt-lcl-session` will create several libvirt virtual networks, both NAT based networks as well as isolated networks. For each network, a bridge will be created, named `virbr-local-0` to `virbr-local-7`. To each network an ip subnet will be assigned, from `192.168.151.0/24` to `192.168.158.0/24`. The libvirt virtual networks are defined with variable `libvirt_networks` in `inventory/host_vars/lvrt-lcl-system.yml`.
Before running the playbooks for hosts `lvrt-lcl-system` and `lvrt-lcl-session`, please make sure that no bridges with these names exist on your system. Please also verify that the ip subnets `192.168.151.0/24` to `192.168.156.0/24` are not currently known to your system. For example, use `ip addr` to show all IPv4 and IPv6 addresses assigned to all network interfaces.
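For example, the following checks should produce no output on a system without conflicting bridges, addresses or routes; a minimal sketch:

```sh
# List network interfaces whose names collide with the libvirt bridges
ip -brief link show | grep 'virbr-local-' || true

# List addresses and routes that overlap with the ip subnets used by the libvirt networks
ip -brief addr show | grep -E '192\.168\.15[1-8]\.' || true
ip route show | grep -E '192\.168\.15[1-8]\.' || true
```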
Both ip subnets `192.168.157.0/24` and `192.168.158.0/24` have either to be published to your router(s), probably your standard gateway only, or your bare-metal system has to do masquerading.
Once ip subnets have been set up properly, the libvirtd configuration for your local user (not root) has to be changed to allow tcp connections from the libvirt isolated network `192.168.153.0/24` which is used for virtual BMCs:
# Disable libvirt tls transport, enable unauthenticated libvirt tcp transport
# and bind to 192.168.153.1 for connectivity from libvirt domains.
mkdir -p ~/.config/libvirt/
cp -nv /etc/libvirt/libvirtd.conf ~/.config/libvirt/libvirtd.conf
sed -i \
-e 's/^[#]*listen_tls = .*/listen_tls = 0/g' \
-e 's/^[#]*listen_tcp = .*/listen_tcp = 1/g' \
-e 's/^[#]*listen_addr = .*/listen_addr = "192.168.153.1"/g' \
-e 's/^[#]*auth_tcp = .*/auth_tcp = "none"/g' \
~/.config/libvirt/libvirtd.conf
An SSH agent must be running and your SSH private key(s) must be loaded. Ansible will use SSH agent forwarding to access nested virtual machines such as the bootstrap virtual machine of OpenShift Installer-provisioned installation (IPI) or OKD Installer-provisioned installation (IPI).
# Start ssh-agent and add SSH private keys if ssh-agent is not running
if [ -z "$SSH_AGENT_PID" ]; then
eval $(ssh-agent)
ssh-add
fi
# Ensure your SSH public key is listed
ssh-add -L
Run playbook `playbooks/site.yml` for host `lvrt-lcl-system` to prepare a libvirt environment on your system, e.g. to install packages for libvirt and QEMU, configure libvirt networks and prepare a default libvirt storage pool.
# Cache user credentials so that Ansible can escalate privileges and execute tasks with root privileges
sudo true
ansible-playbook playbooks/site.yml --limit lvrt-lcl-system
The former will also enable masquerading with nftables or iptables (if nftables is unavailable) on your bare-metal system for both ip subnets `192.168.157.0/24` and `192.168.158.0/24`. The nftables / iptables rules are defined in Ansible variable `iptables_config` in `inventory/host_vars/lvrt-lcl-system.yml`.
NOTE: The changes applied to nftables and iptables are not persistent and will not survive reboots. Please refer to your operating system's documentation on how to store nftables or iptables rules persistently, or run `playbooks/site.yml` again after rebooting.
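On Debian-based systems, one way to persist the generated ruleset is nftables' own boot service; a hedged sketch, to be adapted to your distribution:

```sh
# Dump the current ruleset (including the masquerading rules) and load it at boot
nft list ruleset > /etc/nftables.conf
systemctl enable nftables.service
```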
Run playbook `playbooks/site.yml` for host `lvrt-lcl-session` to prepare the libvirt session of your local user, e.g. to prepare a default libvirt storage pool and preload OS images for host provisioning.
ansible-playbook playbooks/site.yml --limit lvrt-lcl-session
With both hosts `lvrt-lcl-system` and `lvrt-lcl-session` being set up, continue with running playbook `playbooks/site.yml` for all remaining hosts.
The example inventory of this collection, on which your cloud infrastructure can be built, defines several libvirt domains (QEMU/KVM based virtual machines) `lvrt-lcl-session-srv-*` and two special Ansible hosts `lvrt-lcl-system` and `lvrt-lcl-session`. The latter two have been used above to deploy a container with Docker Compose, deploy a container with Podman or prepare a bare-metal system and are not of interest here. For an overview of the libvirt domains please refer to the introduction at the beginning.
For example, to set up Ansible host `lvrt-lcl-session-srv-020-debian10`, run the following command from inside your project directory as a local non-root user, e.g. `cloudy` in the containerized setup:
# Set up and boot a libvirt domain (QEMU/KVM based virtual machine) based on Debian 10 (Buster)
ansible-playbook playbooks/site.yml --limit lvrt-lcl-session-srv-020-debian10
Inside `inventory/host_vars/lvrt-lcl-session-srv-020-debian10.yml` you will find the ip address of that system, which can be used for ssh'ing into it:
# Establish SSH connection to Ansible host lvrt-lcl-session-srv-020-debian10
ssh ansible@192.168.158.13
Besides individual Ansible hosts, you can also use Ansible groups such as `build_level1`, `build_level2` et cetera to set up several systems in parallel.
NOTE: Running `playbooks/site.yml` for multiple hosts across many build levels at once will create dozens of virtual machines. Ensure that your system has enough memory to run them in parallel. To lower memory requirements, you may want to limit `playbooks/site.yml` to a few hosts or a single host instead. Refer to `hosts.yml` for a complete list of hosts and build levels.
# build_level0 contains lvrt-lcl-system and lvrt-lcl-session which have been prepared in previous steps
ansible-playbook playbooks/site.yml --limit build_level1
ansible-playbook playbooks/site.yml --limit build_level2
ansible-playbook playbooks/site.yml --limit build_level3
You can either call modules and roles by their Fully Qualified Collection Name (FQCN), like `jm1.cloudy.devstack`, or you can call modules by their short name if you list the `jm1.cloudy` collection in the playbook's `collections` keyword, like so:
---
- name: Using jm1.cloudy collection
hosts: localhost
collections:
- jm1.cloudy
roles:
- name: Setup an OpenStack cluster with DevStack
role: devstack
For documentation on how to use individual modules and other content included in this collection, please see the links in the 'Included content' section earlier in this README.
See Ansible Using collections for more details.
There are many ways in which you can participate in the project, for example:
- Submit bugs and feature requests, and help us verify them
- Submit pull requests for new modules, roles and other content
We're following the general Ansible contributor guidelines; see Ansible Community Guide.
If you want to develop new content for this collection or improve what is already here, the easiest way to work on the collection is to clone this repository (or a fork of it) into one of the configured `ANSIBLE_COLLECTIONS_PATHS` and work on it there:
- Create a directory `ansible_collections/jm1`;
- In there, check out this repository (or a fork) as `cloudy`;
- Add the directory containing `ansible_collections` to your `ANSIBLE_COLLECTIONS_PATHS`.
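A minimal sketch of that layout, assuming `~/src` as the working directory:

```sh
# Create the expected directory layout and clone the collection as jm1/cloudy
mkdir -p ~/src/ansible_collections/jm1
git clone https://github.com/JM1/ansible-collection-jm1-cloudy.git ~/src/ansible_collections/jm1/cloudy

# Make Ansible pick up the checked-out collection
export ANSIBLE_COLLECTIONS_PATHS="$HOME/src:$ANSIBLE_COLLECTIONS_PATHS"
```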
Helpful tools for developing collections are `ansible`, `ansible-doc`, `ansible-galaxy`, `ansible-lint`, `flake8`, `make` and `yamllint`.
OS | Install Instructions |
---|---|
Debian 10 (Buster) | Enable Backports. apt install ansible ansible-doc ansible-lint flake8 make yamllint |
Debian 11 (Bullseye), 12 (Bookworm), 13 (Trixie) | apt install ansible ansible-lint flake8 make yamllint |
Fedora | dnf install ansible python3-flake8 make yamllint |
Red Hat Enterprise Linux (RHEL) 7 / CentOS Linux 7 | Enable EPEL. yum install ansible ansible-lint ansible-doc python-flake8 make yamllint |
Red Hat Enterprise Linux (RHEL) 8, 9 / CentOS Stream 8, 9 | Enable EPEL. yum install ansible python3-flake8 make yamllint |
Ubuntu 18.04 LTS (Bionic Beaver), 20.04 LTS (Focal Fossa) | Enable Launchpad PPA Ansible by Ansible, Inc.. apt install ansible ansible-doc ansible-lint flake8 make yamllint |
Ubuntu 22.04 LTS (Jammy Jellyfish), 24.04 LTS (Noble Numbat) | apt install ansible ansible-lint flake8 make yamllint |
Have a look at the included `Makefile` for several frequently used commands, e.g. to build and lint a collection.
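The generic CLI equivalents look roughly like this (a sketch; the actual Makefile targets may differ), run from the collection root:

```sh
# Lint roles and playbooks
yamllint .
ansible-lint

# Build a distributable collection tarball (requires galaxy.yml in the current directory)
ansible-galaxy collection build .
```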
- Ansible Collection Overview
- Ansible User Guide
- Ansible Developer Guide
- Ansible Community Code of Conduct
GNU General Public License v3.0 or later
See LICENSE.md to see the full text.
Jakob Meng @jm1 (github, galaxy, web)
[^1]: Ansible Collections have been introduced in Ansible 2.9, hence for collection support a release equal to version 2.9 or greater has to be installed.