- Create a sample task. Usually this is not needed, since most tasks are ready to use and simply invoke module code.
- Create a sample playbook
- Create a sample role
- Create a sample inventory
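For illustration, a minimal sketch of what a sample inventory and playbook could look like (the file names, group name, IP placeholders, and the user zgrinber are examples, not files provided here):
# inventory.ini (example) - lists the demo VMs
[demo_vms]
ubuntu-01 ansible_host=<vm_ip_1> ansible_user=zgrinber
coreos-01 ansible_host=<vm_ip_2> ansible_user=zgrinber

# playbook.yml (example) - a sample playbook with a single task that invokes the ping module
- name: Sample demo playbook
  hosts: demo_vms
  tasks:
    - name: Check connectivity to the VMs
      ansible.builtin.ping: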
- Create 7 VMs in advance for the demos
- Install the Ansible suite on the control machine; follow the instructions here
- Check whether your hardware supports virtualization (an output greater than 0 means virtualization extensions are available):
egrep -c '(vmx|svm)' /proc/cpuinfo
- Install QEMU-KVM and libvirt
- RHEL, CentOS:
sudo yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager
- Fedora, CoreOS:
sudo dnf install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager
- Debian, Ubuntu:
sudo apt-get update && sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
- Sudo/root privileges.
- Download the Ubuntu 22.04.1 live server image to your machine:
curl -X GET https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-live-server-amd64.iso -o /tmp/ubuntu-22.04.1-live-server-amd64.iso
- Create a virtual machine (repeat the process as many times as needed, according to your local storage, memory, and CPU resources):
sudo virt-install --connect qemu:///system --name ubuntu-xx --os-variant ubuntu22.04 --vcpus 1 --memory 2048 --location /tmp/ubuntu-22.04.1-live-server-amd64.iso,kernel=casper/vmlinuz,initrd=casper/initrd --network bridge=virbr0,model=virtio --disk size=10 --graphics none --extra-args='console=ttyS0,115200n8 --- console=ttyS0,115200n8' --debug
- The VM is now being created; a guided interactive installer will appear within 30-45 seconds. There, choose "Continue in basic mode" --> then "Continue without updating" --> on the language selection screen, leave both Layout and Variant as English (US) --> press Done --> check the first option (Ubuntu Server) and press Done.
- On the next screen, the installer will automatically create a network interface and assign it an IP address so the VM can be reached from the host and other machines; just press Done.
- On the Proxy Address and Mirror Address screens, just press Done on each.
- Choose the option "Use an entire disk", press Done twice, and then Continue.
- Fill in your full name, server name (machine name), and username, and pick a password. This password will also be used for privilege escalation with sudo on this VM.
- Check the option "Install OpenSSH server".
- Set "Import SSH identity" to "from GitHub", and enter your GitHub username; the installer will import your public SSH keys from there and install them on the machine as authorized keys. "Allow password authentication over SSH" should remain unchecked. Press Done, then click Yes.
- Click Done on the "Featured Server Snaps" page.
- Wait for the installation to finish, and then click "Reboot Now".
- Log in with the user and password you defined during the installation process.
- Get the IP address of the machine and write it down:
ip a | grep enp1s0 | grep inet | awk '{print $2}' | awk -F / '{print $1}'
- Type exit, and then press Ctrl + ] to quit the console emulation.
- Verify that you can SSH into the VM from the host without a password:
ssh zgrinber@ip_from_section14
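If the GitHub key import was skipped during installation, you can still add a public key manually from the VM console (the key content below is a placeholder) so that passwordless SSH works:
# run inside the VM console, logged in as your user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "<your_public_key_content>" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys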
- Run the following command and enter a password to be hashed; this will be the password of the privileged user that will have sudo access on the VM:
podman run -ti --rm quay.io/coreos/mkpasswd --method=yescrypt
- Print one of the public ssh keys in /home/zgrinber/.ssh/, for example:
cat ~/.ssh/id_ed25519.pub
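If no key pair exists there yet, a standard way to generate one (the ed25519 type matches the example above; the comment is just a label) is:
ssh-keygen -t ed25519 -C "zgrinber@workstation"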
- Create a butane configuration file with your desired ssh public key from section 2 and hashed password from section 1:
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: zgrinber # User name that will have sudo access
      groups:
        - wheel
        - sudo
        - docker
      password_hash: hashed_password # Output From Section 1
      ssh_authorized_keys:
        - your_public_key_content # Output From Section 2
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: core-os-1 # Host name
- Create an ignition file (JSON) from Butane YAML:
podman run --interactive --rm quay.io/coreos/butane:release \
--pretty --strict < coreos-vm.bu > coreos-vm.ign
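Optionally, sanity-check that the generated file is valid JSON (this assumes python3 is available on the host; jq works just as well):
python3 -m json.tool coreos-vm.ign > /dev/null && echo "coreos-vm.ign is valid JSON"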
- Download the latest Fedora CoreOS QEMU image:
curl -X GET https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221225.3.0/x86_64/fedora-coreos-37.20221225.3.0-qemu.x86_64.qcow2.xz -o /tmp/fedora-coreos-37.20221225.3.0-qemu.x86_64.qcow2.xz
- Uncompress it into the same directory:
xz -k -d /tmp/fedora-coreos-37.20221225.3.0-qemu.x86_64.qcow2.xz
- Set the following environment variables for the current terminal session:
echo "IGNITION_CONFIG="\"$(pwd)\"/coreos-vm.ign"
IMAGE=\"/tmp/fedora-coreos/fedora-coreos-37.20221127.3.0-qemu.x86_64.qcow2\"
VM_NAME=\"coreos-01\"
VCPUS=\"2\"
RAM_MB=\"2048\"
STREAM=\"stable\"
DISK_GB=\"10\"" | tee vm-args.properties ; source vm-args.properties
- Define the following variable (a bash array) that passes the Ignition file path to the VM via QEMU:
export IGNITION_DEVICE_ARG=(--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_CONFIG}")
- Provision the virtual machine using the Ignition file and the variables defined earlier:
sudo virt-install --connect="qemu:///system" --name="${VM_NAME}" --vcpus="${VCPUS}" --memory="${RAM_MB}" --os-variant="fedora-coreos-$STREAM" --import --graphics=none --disk="size=${DISK_GB},backing_store=${IMAGE}" --network bridge=virbr0 "${IGNITION_DEVICE_ARG[@]}"
- After the CoreOS VM starts up, log in using the credentials you set in the Ignition file.
- Install Python 3 using rpm-ostree package layering (Ansible modules need Python on the managed node):
sudo rpm-ostree install python3
- Get the IP address of the machine and write it down:
ip a | grep enp1s0 | grep inet | awk '{print $2}' | awk -F / '{print $1}'
- Reboot the machine so the rpm-ostree layer takes effect:
sudo systemctl reboot
- Type exit, and then press Ctrl + ] to quit the console emulation.
- Verify that you can SSH into the VM from the host without a password:
ssh zgrinber@ip_from_section11
- Repeat steps 8-14 for each additional CoreOS VM, running these 2 additional commands to change the machine name before repeating steps 8-14:
# For example, to provision another Fedora CoreOS machine named coreos-02, run the following and then repeat steps 8-14
export VM_NAME=coreos-02 ; sed -i -E 's/source\": \"data:,core-?os-[0-9x]{1,2}\"/source\": \"data:,'$VM_NAME'\"/g' coreos-vm.ign
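Once the CoreOS VMs are up, a quick way to confirm Ansible can reach them is an ad-hoc ping (assuming Ansible is installed on the control machine; replace the placeholder IPs with the addresses you wrote down):
# the trailing comma makes Ansible treat the argument as an inline host list rather than an inventory file
ansible all -i "<coreos-01_ip>,<coreos-02_ip>," -u zgrinber -m ansible.builtin.ping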
To create a Kubernetes cluster of 1 master and 2 workers using Ansible and LXC (Linux Containers virtualization technology) on an Ubuntu host:
- An Ubuntu host (tested on version 20.04) with root/sudo privileges.
- At least 50 GB of free disk space on the host.
- Ansible and Python installed on the host.
- LXC installed on the host.
- A host with 6 free vCPUs and 8 GB of RAM.
- SSH into the Ubuntu host.
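Once on the host, you can quickly verify the resource prerequisites listed above with standard commands:
nproc    # number of available CPU cores
free -h  # total and available memory
df -h /  # free disk space on the root filesystem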
- Install LXC
sudo apt-get update && sudo apt-get install lxc -y
- Check that the LXC service is up and running, and initialize LXD:
sudo systemctl status lxc
lxd init
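lxd init walks through an interactive setup; if you are fine with accepting the default answers, it can also be run non-interactively:
lxd init --auto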
- Create empty LXC profile for k8s:
lxc profile create k8s
cat > ./advanced-demo/k8s-config-profile.j2 << EOF
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file"
  security.privileged: "true"
  security.nesting: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by: []
EOF
cat ./advanced-demo/k8s-config-profile.j2 | lxc profile edit k8s
- Make sure that you can see the newly created profile alongside the default one:
lxc profile list
- Create 1 container for the master and 2 containers for the workers, using the k8s profile:
lxc launch ubuntu:20.04 kmaster --profile k8s
lxc launch ubuntu:20.04 kworker1 --profile k8s
lxc launch ubuntu:20.04 kworker2 --profile k8s
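If you are not sure which cgroup version the host runs, a common way to check (relevant for the next step) is the filesystem type mounted at /sys/fs/cgroup:
stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1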
- If the host is using cgroup v1, skip this step (it is only needed for cgroup v2):
lxc config device add kmaster "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
lxc config device add kworker1 "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
lxc config device add kworker2 "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
lxc restart kmaster kworker1 kworker2
- Install Kubernetes on the master container, and wait for it to finish:
cat advanced-demo/bootstrap-kube.sh | lxc exec kmaster bash
- Install Kubernetes on the two worker containers and join them to the cluster as worker nodes:
cat advanced-demo/bootstrap-kube.sh | lxc exec kworker1 bash
cat advanced-demo/bootstrap-kube.sh | lxc exec kworker2 bash
- Check that the cluster provisioned correctly:
lxc exec kmaster bash
kubectl get nodes
- Make the cluster accessible from the host:
mkdir -p ~/.kube
lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config
# If kubectl is not installed on the host, install it:
sudo snap install --classic kubectl
kubectl get nodes