This repository contains a Packer template for creating CoreOS KVM images for OpenNebula.
Based on @bfraser's packer-coreos-qemu.
You will need:

- [Packer](https://www.packer.io/)
- QEMU

A Linux host with KVM support will make the build much faster.
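To confirm that the build host can use KVM, you can check for hardware virtualization flags and the `/dev/kvm` device. This is a quick sketch; module names and packaging vary by distribution:

```shell
#!/bin/sh
# Count CPUs advertising Intel (vmx) or AMD (svm) hardware virtualization.
FLAGS=$(grep -cE 'vmx|svm' /proc/cpuinfo || true)
echo "CPUs with virtualization flags: $FLAGS"

# /dev/kvm only exists once the kvm kernel modules are loaded.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: KVM acceleration available"
else
    echo "/dev/kvm missing: the build will fall back to software emulation"
fi
```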
The build process is driven with `make`:

```
$ make
[..]
Image file builds/coreos-alpha-991.0.0-qemu/packer-qemu ready
$
```
By default, `make` will build a CoreOS image from the CoreOS alpha channel. You may specify a particular CoreOS version and channel by passing the appropriate parameters to `make`:

```
$ make COREOS_CHANNEL=stable COREOS_VERSION=899.13.0 COREOS_MD5_CHECKSUM=31f1756ecdf5bca92a8bff355417598f
[..]
Image file builds/coreos-stable-899.13.0-qemu/packer-qemu ready
$
```
Once the image has been built, you may upload it to OpenNebula using the Sunstone UI. Alternatively, if you are allowed to access OpenNebula using its command-line tools, you may upload the image using `make`:

```
$ make register
```
The `register` target also accepts specific CoreOS channels and versions:

```
$ make register COREOS_CHANNEL=stable COREOS_VERSION=899.13.0 COREOS_MD5_CHECKSUM=31f1756ecdf5bca92a8bff355417598f
```
If you plan on using OpenNebula's EC2 interface, the image should be tagged with the attribute `EC2_AMI` set to `YES` (the `register` target does this for you).
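If you prefer not to use `make register`, the image can also be registered by hand with `oneimage create`. An image template along these lines should be roughly equivalent (the image name and path below are illustrative, matching the default alpha build):

```
NAME    = coreos-alpha
PATH    = builds/coreos-alpha-991.0.0-qemu/packer-qemu
DRIVER  = qcow2
EC2_AMI = YES
```

Save it as, say, `coreos-image.tpl` and register it with `oneimage create coreos-image.tpl --datastore default` (the datastore name depends on your OpenNebula setup).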
Before creating CoreOS VMs, you will need to create an OpenNebula VM template which uses the CoreOS images you have built. The VM template should follow these conventions:
- It should use the image you have created and uploaded.
- The first network interface will be used for CoreOS' private IPv4 address.
- If a second network interface is defined, it will be used for CoreOS' public IPv4 address.
- You should add a user input field called `USER_DATA`, so that you may pass extra cloud-config user data to configure your CoreOS instance.
The following template assumes a CoreOS image called `coreos-alpha` and two virtual networks called `public-net` and `private-net`, and uses them to provide the disk and the two network interfaces of a virtual machine:
```
NAME = coreos-alpha
MEMORY = 512
CPU = 1
HYPERVISOR = kvm

OS = [
  ARCH = x86_64,
  BOOT = hd
]

DISK = [
  DRIVER = qcow2,
  IMAGE = coreos-alpha
]

NIC = [
  NETWORK = private-net
]

NIC = [
  NETWORK = public-net
]

GRAPHICS = [
  TYPE = VNC,
  LISTEN = 0.0.0.0
]

USER_INPUTS = [
  USER_DATA = "M|text|User data for `cloud-config`"
]

CONTEXT = [
  NETWORK = YES,
  SET_HOSTNAME = "$NAME",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  USER_DATA = "$USER_DATA"
]
```
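Assuming the template above is saved as `coreos.tpl` (an illustrative file name), it can be registered and instantiated with OpenNebula's command-line tools, for example:

```
$ onetemplate create coreos.tpl
$ onetemplate instantiate coreos-alpha --name core-1
```

When instantiating through Sunstone, the `USER_INPUTS` section will prompt you for the `USER_DATA` value.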
If you plan on using OpenNebula's EC2 interface, your template should instead follow these conventions:

- It must not use any image, since the disk will be provided by the AMI you choose when you create your instances.
- It must include the attribute `EC2_INSTANCE_TYPE` set to a valid AWS instance type. If you plan on using OpenNebula's `econe-*` command-line tools, ensure that the name is recognised by the Ruby AWS modules they depend on.
- The first network interface will be used for CoreOS' private IPv4 address.
- If a second network interface is defined, it will be used for CoreOS' public IPv4 address.
The following template assumes you have two virtual networks called `public-net` and `private-net`, and uses them to provide the two network interfaces of a virtual machine:
```
NAME = t1.micro
EC2_INSTANCE_TYPE = t1.micro
MEMORY = 512
CPU = 1
HYPERVISOR = kvm

OS = [
  ARCH = x86_64,
  BOOT = hd
]

NIC = [
  NETWORK = private-net
]

NIC = [
  NETWORK = public-net
]

GRAPHICS = [
  TYPE = VNC,
  LISTEN = 0.0.0.0
]

CONTEXT = [
  NETWORK = YES,
  SET_HOSTNAME = "$NAME",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
]
```
In both examples above, the host name in the VM will be set to the OpenNebula VM name. If you want the host name to be assigned by reverse DNS lookup, replace the line:

```
SET_HOSTNAME = "$NAME"
```

with:

```
DNS_HOSTNAME = YES
```

in the `CONTEXT` section, as you would do with any other OpenNebula template.
If no host name is passed (or none can be found with reverse DNS lookup), the VM host name will be set to a value based on the MAC address of the first network interface.
If you specify a value for the `hostname` field in the `cloud-config` user data, it will take precedence over anything else.
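For instance, a minimal `cloud-config` passed as user data (the host name below is illustrative) would override both `SET_HOSTNAME` and `DNS_HOSTNAME`:

```
#cloud-config

hostname: core-1
```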
Just fork this repository and open a pull request.