RouteFlow v0.1.0
-----------------------------------
Copyright (C) 2011 CPqD
Welcome
========
Welcome to the RouteFlow remote virtual routing platform. This distribution includes all the software you need to build, install, and deploy RouteFlow in your OpenFlow network.
This version of RouteFlow is an alpha developers' release intended to evaluate RouteFlow for providing virtualized IP routing services on one or more OpenFlow switches.
*Note*
RouteFlow relies on the NOX controller and uses OpenFlow as the communication protocol for controlling switches.
RouteFlow uses Open vSwitch to provide the connectivity within the virtual environment where Linux virtual machines run the Quagga routing engine.
This build supports OpenFlow v1.0.0 and NOX v0.6.
Please be aware of OpenFlow, NOX, Open vSwitch, Quagga and RouteFlow licenses.
Contents
========
- Install
- Starting RouteFlow
- Running RouteFlow
Install
=======
Requirements:
- RouteFlow package (contains NOX-Zaku (v0.9.0), http://noxrepo.org/)
- Virtual Machine image with the Quagga routing engine and an implementation of RFSlave on it. You can obtain a sample environment for QEMU: http://github.com/chesteve/RouteFlowVM
- Open vSwitch (Download and installation instructions: http://openvswitch.org/?page_id=10)
- OpenFlow-enabled* devices (e.g., OVS, Mininet, NetFPGA, commercial)
*Note: Correct IP forwarding requires MAC rewriting capabilities (plus TTL decrement and header checksum update).
Dependencies:
RouteFlow requires the following packages to be installed:
$ sudo apt-get install iproute-dev
$ sudo apt-get install libboost-all-dev swig1.3
These are in addition to the dependencies of the NOX controller. An unofficial list of NOX dependencies can be found here:
http://noxrepo.org/noxwiki/index.php/Dependencies
For instance, for Ubuntu 10.04, a full compile of NOX-Zaku requires the following packages:
$ sudo apt-get install autoconf automake g++ libtool swig make git-core libboost-dev libboost-test-dev libboost-filesystem-dev libssl-dev libpcap-dev python-twisted python-simplejson python-dev
The main directory tree of RouteFlow is divided into:
- rf-server
- rf-slave (also provided in the sample VM with Quagga 0.99.10)
- rf-controller (includes NOX-Zaku with the RouteFlowc application located in the netapps folder: src/nox/netapps)
- rf-protocol
You can compile all modules by simply running the command "make all" in the main directory.
Since rf-slave, rf-controller, and rf-server are meant to run on different machines, you can also compile only the slave module with "make slave" (which builds the needed libraries and rf-slave), only the controller with "make controller" (which builds the needed libraries and rf-controller), or only the server with "make server", all from the main directory, as shown below.
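For reference, the corresponding build commands (all run from the main directory) are:
$ make all          # build all modules on a single machine
$ make slave        # build only rf-slave and the libraries it needs
$ make controller   # build only rf-controller and the libraries it needs
$ make server       # build only rf-server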
The OpenFlow switches in the data plane must support the same OpenFlow version as the controller (in this case, version 1.0); no special configuration is necessary. For instance, you can use Mininet and connect it to the external NOX controller running the rf-controller.
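As a rough illustration only (the exact option syntax depends on your Mininet release; the comma-separated form below is the Mininet 2.x style, while older releases use separate --ip/--port options), a small Mininet topology can be pointed at the external NOX controller like this:
$ sudo mn --topo single,3 --controller=remote,ip=<NOX IP>,port=6633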
You need to set up a virtual environment that will be responsible for running the virtual topology. There are several options in the virtualization solution space; we have successfully tested RouteFlow with images for KVM/QEMU and LXC. Each VM should meet the minimal requirements to run the routing engine (e.g., Quagga) and the RouteFlow-Slave daemon. For good virtualization performance, the processor should support hardware virtualization, indicated by the VMX flag on Intel processors or the SVM flag on AMD processors. This information can be found in the file /proc/cpuinfo.
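For example, a quick way to check is:
$ grep -E 'vmx|svm' /proc/cpuinfo    # no output means the CPU does not support hardware virtualization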
Starting RouteFlow
==================
The initialization sequence should be as follows:
1- Insert the Open vSwitch (OVS) module into the kernel. From the OVS main directory:
$ sudo insmod datapath/linux-2.6/openvswitch_mod.ko
2- Start two OVS instances, each using a different port:
$ sudo ovs-openflowd br0 tcp:127.0.0.1:6363 --out-of-band (VM configuration interfaces)
$ sudo ovs-openflowd --hw-desc=rfovs switch1 tcp:127.0.0.1:6633 --out-of-band (Virtual Control Plane Network, i.e., inter-VM connectivity)
3- Configure the br0 interface with an IP on the configuration network, for instance 192.169.100.1:
$ ifconfig br0 192.169.100.1
Make sure that the IPs of the virtual machines running the RF-Slave are on the same network as br0. In the sample virtualization environment they should be of the form 192.169.100.X.
4- Start the NOX controller located at rf-controller/build/src with a simple learning switch module (such as pyswitch), listening on the same port as the OVS instance (br0) used for VM configuration connectivity:
$ cd rf-controller/build/src
$ sudo ./nox_core -v -i ptcp:6363 pyswitch
5- Start rf-server:
$ ./build/rf-server
The default TCP port it uses to connect to the VMs is 5678.
NOTE: In future versions you will have the option to choose RouteFlow's mode of operation at start-up. For now, there is only one mode available, in which all packets originated in the VMs (e.g., OSPF hello) are sent through the data plane. If you want to try the mode of operation where routing packets are kept in the virtual data plane, with the virtual topology defined with the aid of the NOX discovery application, you will have to change the following piece of code in the file main.cc contained in the rf-server folder, line 45.
RouteFlowServer rfSrv(TCP_PORT, VIRT_N_FORWARD_PLANE);
becomes
RouteFlowServer rfSrv(TCP_PORT, VIRTUAL_PLANE);
6- Start a new rf-controller NOX application instance, with the RouteFlowc module, listening on the same port used by the OVS instance in charge of inter-VM connectivity:
$ sudo ./nox_core -v -i ptcp:6633 routeflowc
The default TCP port it uses to connect to the server is 7890, but this can be changed by passing the tcpport parameter. For example, to set the port to 1234:
$ sudo ./nox_core -v -i ptcp:6633 routeflowc=tcpport=1234
7- Start VMs
Any virtualization environment can be used to load the VMs with Quagga and RF-Slave.
You can find a sample VM in the routeflow-vm-0.2.tar.bz2 package.
Uncompress:
$ tar xvjpf routeflow-vm-0.2.tar.bz2
To start a new VM, use the start-vm.sh script, which accepts two parameters: (1) an ID to identify the VM, and (2) the number* of interfaces to be loaded. The VM is started with one management interface plus the number given in (2). Thus, to start VM 1 with 25 interfaces* (i.e., 1 for control and 24 to act as data path ports):
# start-vm <ID> <interfaces>
$ start-vm.sh 1 24
user: root
password: root
*Note: the number of interfaces in the virtual machine must be equal to the number of ports of the OpenFlow switch to be controlled plus 1, because rf-slave uses the VM's first port to communicate with the server.
* The start-vm.sh script adds the virtual machines' interfaces to the datapaths of the Open vSwitch instances. That is another reason to start the Open vSwitch instances before any virtual machine.
If you’re not using our sample virtualization environment, you will need to add the interfaces with the Open vSwitch command below:
sudo ovs-dpctl add-if <datapath_name> <interface>
For more information about the command read the ovs-dpctl manual.
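For instance, to attach a VM interface to the datapath started in step 2 (the interface name tap1 below is just a placeholder for whatever tap/veth device your VM exposes on the host):
$ sudo ovs-dpctl add-if switch1 tap1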
Inside each VM, run the RF-Slave located in the /opt/ folder. This module needs three parameters at start-up; the correct syntax is as follows:
$./rfslave <IP> <port number> <interface>
The <port number> parameter must match the routeflowc tcpport parameter, <IP> must be the IP of NOX, and <interface> is the interface from which rfslave will get the MAC address to use as its ID.
*In our sample virtualization environment the RF-Slave starts when you initialize the virtual machine.
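For example, using the addressing from the steps above (192.169.100.1 is br0 on the host running NOX, and 7890 is the default routeflowc tcpport; the interface name eth1 is illustrative only):
$ ./rfslave 192.169.100.1 7890 eth1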
8- Start the Datapaths (i.e. OpenFlow switches)
Start the OpenFlow switches pointing them to the NOX controller running routeflowc (e.g., tcp:6633). When a Datapath connects to RouteFlowc, if there are no Idle VMs available, the Datapath is set to Idle mode and its flow table is left empty.
When a VM joins, it is assigned to the first Datapath on the list, the Datapath and the VM are both set to running mode, and two default permanent flows are installed on the Datapath to forward RIP and OSPF packets to the controller. If there are no eligible Datapaths to assign to the registered VM, its status is set to Idle and it will wait for the next Datapath join event.
When a VM assignment is successfully made, RouteFlowc stores this information so that a Datapath will always connect to its specific VM.
Note that when a VM connects to RouteFlow, the MAC address corresponding to the <interface> will be its assigned ID. If a VM tries to connect using a MAC address equal to that of another VM already connected, the connection will be refused.
Once the desired network topology is up, Quagga may be started on each VM, so that flows can be installed on each Datapath according to the Quagga+Linux generated forwarding rules (i.e., FIB = ARP + route tables). More information about Quagga configuration can be obtained at http://www.quagga.net.
*In the virtualization environment available with the RouteFlow package, Quagga is initialized when the VM starts.