On Exoscale, you can attach a private network to your VMs, but it comes unmanaged. To ease communication between your instances on this private link, let's set up a DHCP server answering requests on the private interface of every VM using this second network.
We are going to use Ansible to ease the process of deploying one DHCP server and 3 sample virtual machines to validate the setup.
If you're not familiar with Ansible, it's an open source automation tool for managing your infrastructure as code. The only language you'll need to write the configuration files is YAML.
First you have to clone the repository:
$ git clone https://github.com/marcaurele/ansible-exoscale-privnet.git
$ cd ansible-exoscale-privnet
Create a new virtual environment for Python:
# For python 2
$ virtualenv -p <location_of_python_2.7> venv
# For python 3
$ python3 -m venv venv
# Activate the virtual environment
$ . ./venv/bin/activate
Install the requirements for the playbook (ansible (>=2.4), cs, sshpubkeys):
$ pip install -r requirements.txt
In your shell, you need to export these three variables, as per the cs documentation:
export CLOUDSTACK_ENDPOINT=https://api.exoscale.ch/compute
export CLOUDSTACK_KEY=<your-api-key>
export CLOUDSTACK_SECRET=<your-api-secret-key>
Or, if you're already using a .cloudstack.ini file, you only need to export:
export CLOUDSTACK_REGION=<section_name>
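For reference, a minimal .cloudstack.ini could look like this, the section name (here exoscale) being the value to use for CLOUDSTACK_REGION:

[exoscale]
endpoint = https://api.exoscale.ch/compute
key = <your-api-key>
secret = <your-api-secret-key>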
Your API key can be found at https://portal.exoscale.ch/account/profile/api.
Now you are all set to run the playbook. To verify the setup, from your terminal run:
$ cs listZones
You should get a JSON output of the current zones available on Exoscale.
{
  "count": 3,
  "zone": [
    {
      "allocationstate": "Enabled",
      "dhcpprovider": "VirtualRouter",
      "id": "1128bd56-b4d9-4ac6-a7b9-c715b187ce11",
      "localstorageenabled": true,
      "name": "ch-gva-2",
      "networktype": "Basic",
      "securitygroupsenabled": true,
      "tags": [],
      "zonetoken": "ccb0a60c-79c8-3230-ab8b-8bdbe8c45bb7"
    },
    {
      "allocationstate": "Enabled",
      "dhcpprovider": "VirtualRouter",
      "id": "91e5e9e4-c9ed-4b76-bee4-427004b3baf9",
      "localstorageenabled": true,
      "name": "ch-dk-2",
      "networktype": "Basic",
      "securitygroupsenabled": true,
      "tags": [],
      "zonetoken": "fe63f9cb-ff75-31d3-8c46-3631f7fcd533"
    },
    {
      "allocationstate": "Enabled",
      "dhcpprovider": "VirtualRouter",
      "id": "4da1b188-dcd6-4ff5-b7fd-bde984055548",
      "localstorageenabled": true,
      "name": "at-vie-1",
      "networktype": "Basic",
      "securitygroupsenabled": true,
      "tags": [],
      "zonetoken": "26d84c22-f66d-377e-93ab-987ef477cab3"
    }
  ]
}
I will discuss the playbook setup afterwards. If you're eager to run the playbook and see the result, run:
$ ansible-playbook deploy-privnet-dhcp.yml
If you wish to deploy those virtual machines in another zone/region, for example de-fra-1, you can override the zone variable on the command line:
$ ansible-playbook deploy-privnet-dhcp.yml -e "zone=de-fra-1"
This role creates an SSH key named "privnet", used to authenticate on all virtual machines deployed by this playbook instead of a password. This new key generated by CloudStack is saved under ~/.ssh/id_rsa_privnet on your local machine.
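Once the VMs are up, you can use that key to log into any of them, for example (assuming the Ubuntu template's default ubuntu user; replace the placeholder with your VM's public IP address):

$ ssh -i ~/.ssh/id_rsa_privnet ubuntu@<vm-public-ip>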
This role provisions the VMs on Exoscale, along with a new SSH key and security groups, and adds the private network interface to each of them. What you might not see often is the user_data provided for the VM deployment in create_vm.yml, which lets cloud-init manage the hostname nicely, as is done when starting a VM from the Exoscale portal:
user_data: |
  #cloud-config
  manage_etc_hosts: true
  fqdn: {{ zone }}-{{ dhcp_name }}
The private network interface is added through create_private_nic.yml using the Ansible cs_instance_nic module:
- name: "dhcp server : add privnet nic"
local_action:
module: cs_instance_nic
network: "{{ private_network }}"
vm: "{{ zone }}-{{ dhcp_name }}"
zone: "{{ zone }}"
It can also be attached directly to the VM at the deployment step in create_vm.yml with the networks attribute:
- name: "dhcp server : create"
local_action:
module: cs_instance
name: "{{ zone }}-{{ dhcp_name }}"
template: "{{ template }}"
root_disk_size: "{{ root_disk_size }}"
service_offering: "{{ instance_type }}"
ssh_key: "{{ ssh_key }}"
security_groups: [ '{{ security_group_name }}' ]
networks:
- "{{ private_network }}"
user_data: |
#cloud-config
manage_etc_hosts: true
fqdn: {{ zone }}-{{ dhcp_name }}
zone: "{{ zone }}"
This instructs Ansible to attach a new NIC to the VM {{ dhcp_name }} in the {{ zone }} zone on the privNetForBasicZone network. A new eth1 interface comes up on your Ubuntu box for the DHCP server to bind to.
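You can quickly check it from the VM itself; note that the interface carries no IP address until the roles below configure it:

$ ip link show eth1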
This role configures the DHCP server. We configure a static IP address for its privnet interface eth1 in configure_private_nic.yml and activate the interface:
- name: upload network interface configuration
  template:
    src: privnet.cfg.j2
    dest: /etc/network/interfaces.d/01-privnet.cfg
    force: yes
  register: privnet_cfg

- name: enable privnet interface
  shell: "ifup eth1"
  when: privnet_cfg.changed
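The privnet.cfg.j2 template is not reproduced here; rendered, it could look like this minimal sketch, assuming the server takes the first address of the 10.11.12.0/27 subnet matching the DHCP range used below:

auto eth1
iface eth1 inet static
    address 10.11.12.1
    netmask 255.255.255.224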
In setup_dhcp_server.yml we install the ISC DHCP server with a basic configuration to serve IP addresses in the range 10.11.12.2 - 10.11.12.30:
- name: install packages
  apt:
    name:
      - isc-dhcp-server
    state: present

- name: set listening interfaces
  lineinfile:
    path: /etc/default/isc-dhcp-server
    line: "INTERFACES=\"eth1\""
    regexp: "^INTERFACES"
  notify: restart dhcp server

- name: set configuration
  template:
    dest: /etc/dhcp/dhcpd.conf
    src: dhcpd.conf.j2
    owner: root
    group: root
  notify: restart dhcp server
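The dhcpd.conf.j2 template is not shown either; a minimal configuration serving that range could look like the following sketch (the lease times are assumptions, not the repository's actual values):

default-lease-time 600;
max-lease-time 7200;
authoritative;

subnet 10.11.12.0 netmask 255.255.255.224 {
  range 10.11.12.2 10.11.12.30;
}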
This role is the simplest: through configure_private_nic.yml it uploads the network interface configuration file for the privnet and enables it:
- name: copy network interface configuration
  copy:
    src: privnet.cfg
    dest: /etc/network/interfaces.d/01-privnet.cfg
    force: yes
  register: privnet_cfg

- name: enable privnet interface
  shell: "ifup eth1"
  when: privnet_cfg.changed
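As the clients receive their addresses over DHCP, the static privnet.cfg file is presumably as short as this sketch:

auto eth1
iface eth1 inet dhcp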
This setup could be extended to also configure the DHCP server with static DHCP mappings for your VMs based on their private interface MAC address.
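For illustration, such a mapping would be a host block in dhcpd.conf keyed on the MAC address; the host name, MAC and fixed address below are hypothetical:

host ch-gva-2-vm-01 {
  hardware ethernet 0a:00:00:00:00:01;
  fixed-address 10.11.12.10;
}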