First, let's begin with Eucalyptus. I won't go too deep into the package installation steps, which you can find here.
As mentioned, this deployment will have: 1 CLC, 1 UFS, 1 CC/SC and 1 NC. But before initializing the cloud, we are going to create a few VLANs to separate our traffic.
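As a sketch, creating those VLAN interfaces with the classic vconfig tooling looks like this (the interface name em2 and the addresses are from my lab and purely illustrative; VLAN 1000 will carry the Eucalyptus component traffic and VLAN 1001 the MidoNet control traffic):

```
# VLAN 1000: Eucalyptus component registration traffic
vconfig add em2 1000
ifconfig em2.1000 192.168.1.3 up
# VLAN 1001: MidoNet / Zookeeper / Cassandra traffic
vconfig add em2 1001
ifconfig em2.1001 192.168.1.67 up
```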
In the meantime, our cloud has been initialized. Now simply register the services as indicated in the documentation - register components. Don't forget to use the VLAN 1000 IP address for registration.
For all components, don't forget to change the VNET_MODE value from "MANAGED-NOVLAN" to "VPCMIDO", which tells the components that their configuration must fit VPCMIDO requirements.
From here, with Cassandra and Zookeeper in place, we can install midonet-api and midolman.
Midonet-API is the endpoint against which the Eucalyptus components make API calls to create new routers and switches, and to configure security groups (L4 filtering). Midolman connects the different systems together and makes the networking possible. You MUST have midolman on each NC and on the CLC. Midonet-API is only to be installed on the CLC.
To have the API working in Eucalyptus 4.1.0, we sadly have to install it on the CLC, and the CLC only. Here our CLC will act as the "MidoNet Gateway", the EUCART router I was talking about previously.
Let's do the install (of course, you will also need the midonet.repo we used before):
yum install tomcat midolman midonet-api python-midonetclient
Tomcat will act as the server for the API. Unfortunately, the port Eucalyptus uses to talk to the API has been hardcoded :'( to 8080. So before going any further we need to move one of Eucalyptus' own ports to a different one:
$> euca-modify-property -p www.http_port=8081
# old value was 8080
If you don't make this change, your API will never be available.
Now that the packages are installed and port 8080/TCP is free, we must configure Tomcat itself to serve the MidoNet API. Add a new file named "midonet-api.xml" into /etc/tomcat/Catalina/localhost.
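Its content is the standard Tomcat Context definition for the midonet-api webapp; the sketch below follows the MidoNet documentation of that era (the docBase is the default install location from the packages above):

```xml
<Context
    path="/midonet-api"
    docBase="/usr/share/midonet-api"
    antiResourceLocking="false"
    privileged="true" />
```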
Good, so now we need to configure the MidoNet API to connect to our Zookeeper server. Go into /usr/share/midonet-api/WEB-INF and edit the file web.xml
<param-value>http://<CLC VLAN 1001 IP>:8080/midonet-api</param-value>
<!-- Specify the class path of the auth service -->
# old value is for Keystone (OpenStack)
# new value ->
<!-- comma separated list of Zookeeper nodes(host:port) -->
<param-value><ZOOKEEPER VLAN 1001 IP>:2181</param-value>
Alright, now we can start Tomcat, which will enable the midonet-api. To verify, you can simply make a curl call to the entry point:
curl <CLC VLAN 1001 IP>:8080/midonet-api/
Now we can configure midolman. The good thing about the midolman configuration is that you can use the same configuration across all nodes. Once more, we simply have to change a few parameters to point at our Cassandra / Zookeeper server. Edit /etc/midolman/
#zookeeper_hosts = 192.168.100.8:2181,192.168.100.9:2181,192.168.100.10:2181
zookeeper_hosts = <ZOOKEEPER VLAN 1001 IP>:2181
session_timeout = 30000
midolman_root_key = /midonet/v1
session_gracetime = 30000
#servers = 192.168.100.8,192.168.100.9,192.168.100.10
servers = <CASSANDRA VLAN 1001 IP>:9042
# DO CHANGE THIS, recommended value is 3
replication_factor = 1
cluster = midonet
This is it, nothing else to configure.
One note: you need to have iproute with netns support installed. To verify, simply try "ip netns list". If you get an error, you need to install the iproute netns package.
Now that we are done with the config files, we can start the services. For midolman, there is no default init.d script installed. So here it is:
#!/bin/sh
# midolman      Start up the midolman virtual network controller daemon
# chkconfig: 2345 80 20
### BEGIN INIT INFO
# Provides: midolman
# Required-Start: $network
# Required-Stop: $network
# Short-Description: start and stop midolman
# Description: Midolman is the virtual network controller for MidoNet.
### END INIT INFO
#
# Midolman's backwards compatibility script to forward requests to upstart.
# Based on Ubuntu's /lib/init/upstart-job

JOB="midolman"

if [ -z "$1" ]; then
    echo "Usage: $0 COMMAND" 1>&2
    exit 1
fi
COMMAND="$1"

# Is the job currently running under upstart?
RUNNING=""
if status "$JOB" 2>/dev/null | grep -q ' start/'; then
    RUNNING=1
fi

case "$COMMAND" in
status)
    status "$JOB"
    ;;
start|stop)
    # Nothing to do if the job is already in the requested state
    if [ -z "$RUNNING" ] && [ "$COMMAND" = "stop" ]; then
        exit 0
    elif [ -n "$RUNNING" ] && [ "$COMMAND" = "start" ]; then
        exit 0
    fi
    $COMMAND "$JOB"
    ;;
restart)
    # Stop the job first if it is running, then start it again; this is
    # the expected behaviour for an admin asking for a restart.
    if [ -n "$RUNNING" ]; then
        stop "$JOB"
    fi
    start "$JOB"
    ;;
*)
    echo "$COMMAND is not a supported operation for Upstart jobs." 1>&2
    exit 1
    ;;
esac
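With the script saved as /etc/init.d/midolman, enable and start it; the commands below are a sketch assuming a CentOS 6-style system (which is where the upstart-forwarding above makes sense):

```
chmod +x /etc/init.d/midolman
chkconfig --add midolman
service midolman start
```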
Once you have installed and configured midolman on every component, we need to configure MidoNet to know about all our cloud components; here they are simply called "hosts" (the terminology is important).
Back on our CLC, let's add a midonetrc file so we don't have to specify the IP address every time:
api_url = http://<CLC VLAN 1001 IP>:8080/midonet-api
username = admin
password = admin
project_id = admin
Here, the credentials are not important and won't actually be checked. So whenever you use midonet-cli, pass the "-A" option.
Before we go any further, there are 2 new packages which MUST be installed on the CLC: eucanetd and nginx. Explanations later.
yum install eucanetd nginx -y
We are halfway there. I know, that sounds like quite a lot, but in fact it is not that much. We now need to write the Eucalyptus network configuration. As with EDGE, this is done using a JSON template. Pay attention: a mistake here will cause you headaches for a long time.
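Before feeding the template to the cloud, it is worth at least checking that the file parses as valid JSON; most of the headaches start with a missing comma. A minimal sketch (the /tmp/network.json path is just an example, and python's json.tool is used here because it is available everywhere):

```shell
# Write a tiny illustrative template, then verify it parses as JSON
cat > /tmp/network.json <<'EOF'
{
  "Mode": "VPCMIDO",
  "PublicIps": ["172.16.142.10-172.16.142.250"]
}
EOF
python3 -m json.tool /tmp/network.json > /dev/null && echo "JSON OK"
```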
So, what does all of this mean?
- InstanceDnsServers : the list of nameservers. Nothing unexpected.
- Mode : VPCMIDO indicates to the cloud that VPC is enabled.
- PublicIps : list of ranges and / or subnets of public IPs which can be used by the instances.
- Mido : this is the most important object !!
- EucanetdHost : String() which points to the server running the eucanetd binary and the MidoNet API.
- GatewayHost : String() which points to the server running the MidoNet GW. As said, for now the GW and EucanetdHost must be the same machine.
- GatewayIP : String() which indicates which IP will be used by the EUCART router. Here, you must use an IP address which DOES NOT already exist on your network !!!
- GatewayInterface : the iface used for the GatewayIP. Here, I created a dedicated VLAN for it, VLAN 1002.
- PublicNetworkCidr : String() which is the network / subnet containing all your public IPs. In my example, I am using a /16 but defined only a /24 for my cloud's public IPs, because I can have multiple clouds in this /16, each using a different range of IPs.
- PublicGatewayIP : String() which points to our BGP router.
Don't forget that the GatewayInterface must be an interface WITHOUT an IP address set.
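Putting it together, a template for this walkthrough could look like the sketch below. The hostnames and the DNS server are illustrative placeholders; the addresses match the 172.16.0.0/16 public network, the 172.16.142.0/24 public range and the VLAN 1002 gateway interface used in the rest of this post:

```json
{
    "InstanceDnsServers": ["10.1.1.254"],
    "Mode": "VPCMIDO",
    "PublicIps": ["172.16.142.10-172.16.142.250"],
    "Mido": {
        "EucanetdHost": "clc.mydomain.example",
        "GatewayHost": "clc.mydomain.example",
        "GatewayIP": "172.16.128.100",
        "GatewayInterface": "em1.1002",
        "PublicNetworkCidr": "172.16.0.0/16",
        "PublicGatewayIP": "172.16.255.254"
    }
}
```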
For now in 4.1.0, as VPC is a tech preview, many configurations and topologies are not yet supported. So for now, you must keep the MidoNet GW on the CLC and have both EucanetdHost and GatewayHost pointing to the CLC's DNS name. And this MUST be a DNS name, otherwise the network namespaces won't be created correctly.
Also, speaking of DNS: if the VLANs we created cannot resolve your hostname, you MUST add an entry in your local hosts file so that your hostname resolves to the VLAN 1001 IP.
Alright, at this point we can have instances created in subnets, but they won't be able to connect to external networks. We need to set up BGP and configure MidoNet for that.
Get onto the BGP server. Here, we are only going to create 1 VLAN, which we will use for the public addresses of instances. We are going to use 172.16.0.0/16, and our BGP router will use 172.16.255.254, as we indicated in the JSON previously.
vconfig add em1 1002
ifconfig em1.1002 172.16.255.254 up
Getting it working is very easy (originally I followed this tutorial):
yum install quagga
setsebool -P zebra_write_config 1
Now the vty config itself and BGP are really simple :
! -*- bgp -*-
! BGPd sample configuration file
! $Id: bgpd.conf.sample,v 1.1 2002/12/13 20:15:29 paul Exp $
!enable password please-set-at-here
router bgp 66000
bgp router-id 172.16.255.254
neighbor 172.16.128.100 remote-as 66742
neighbor 172.16.128.101 remote-as 66743
log file bgpd.log
Here we can see that the server will get BGP information from 2 "neighbors" with unique IDs. Later we will be able to have 1 peer per MidoNet GW, which the system will use to reach networks.
To simplify: the BGP server waits for information coming from other BGP servers. Those BGP servers are our MidoNet GWs. Each MidoNet GW announces itself to the server, saying "hi, I am server ID XXXX, and I know the route to YY networks". Once the announcement is made on the root BGP router, all traffic going through it to reach our instances' EIPs will be sent to our MidoNet GWs.
Here is the zebra config.
! Zebra configuration saved from vty
! 2015/03/05 13:14:09
enable password zebra
log file /var/log/quagga/quagga.log
ipv6 nd suppress-ra
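With both files in place, enable and start the daemons (a sketch, assuming the stock init scripts shipped by the CentOS quagga package):

```
chkconfig zebra on && service zebra start
chkconfig bgpd on && service bgpd start
```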
Alright, almost finished!
Back on our CLC, we need to configure the EUCART router.
# we list routers
midonet> router list
router router0 name eucart state up
# we list the ports on the router
midonet> router router0 list port
port port0 device router0 state up mac ac:ca:ba:b0:df:d8 address 169.254.0.1 net 169.254.0.0/17 peer bridge0:port0
port port1 device router0 state up mac ac:ca:ba:09:b9:47 address 172.16.128.100 net 172.16.0.0/16
# At this point, we know that we must configure port1 as it has the GWIpAddress we have set in the JSON earlier. Check if there is any BGP configuration done on it
midonet> router router0 port port1 list bgp
# nothing, it is normal we have not set anything yet
# first we need to add the BGP peering configuration
midonet> router router0 port port1 add bgp local-AS 66742 peer-AS 66000 peer 172.16.255.254
# here, note that the values are the same as in bgpd.conf . Our router is ID 66742 where the root BGP is 66000
# Now, we need to indicate that we are the routing device to our public IPs
midonet> router router0 port port1 bgp bgp0 add route net 172.16.142.0/24
# in my JSON config I have used only 240 addresses, but those addresses fit into this subnet
# ok, at this point, things can work just fine. Last step is to indicate on which port the BGP has to be
# to do so, we need to spot the CLC VLAN 1002 interface
midonet> host list
host host0 name odc-c-33.prc.eucalyptus-systems.com alive true
host host1 name odc-c-30.prc.eucalyptus-systems.com alive true
# Now we create a tunnel zone for our hosts
midonet> tunnel-zone create name euca-mido type gre
# Add the hosts
midonet> tunnel-zone tzone0 add member host host0 address A.B.C.D
midonet> tunnel-zone tzone0 add member host host1 address X.Y.Z.0
# here my GW is host1
midonet> host host1 list interface
iface midonet host_id host1 status 0 addresses  mac 9a:3a:bd:6d:89:c2 mtu 1500 type Virtual endpoint DATAPATH
iface lo host_id host1 status 3 addresses [u'127.0.0.1', u'0:0:0:0:0:0:0:1'] mac 00:00:00:00:00:00 mtu 65536 type Virtual endpoint LOCALHOST
iface em2.1001 host_id host1 status 3 addresses [u'192.168.1.67', u'fe80:0:0:0:baac:6fff:fe8c:e96d'] mac b8:ac:6f:8c:e9:6d mtu 1500 type Virtual endpoint UNKNOWN
iface em1.1002 host_id host1 status 3 addresses [u'fe80:0:0:0:baac:6fff:fe8c:e96c'] mac b8:ac:6f:8c:e9:6c mtu 1500 type Virtual endpoint UNKNOWN
iface em2.1000 host_id host1 status 3 addresses [u'192.168.1.3', u'fe80:0:0:0:baac:6fff:fe8c:e96d'] mac b8:ac:6f:8c:e9:6d mtu 1500 type Virtual endpoint UNKNOWN
iface em1 host_id host1 status 3 addresses [u'10.104.10.30', u'fe80:0:0:0:baac:6fff:fe8c:e96c'] mac b8:ac:6f:8c:e9:6c mtu 1500 type Physical endpoint PHYSICAL
iface em2 host_id host1 status 3 addresses [u'10.105.10.30', u'fe80:0:0:0:baac:6fff:fe8c:e96d'] mac b8:ac:6f:8c:e9:6d mtu 1500 type Physical endpoint PHYSICAL
# iface em1.1002 -> no IP address, good. We can now bind the router onto it.
midonet> host host1 add binding port router0:port1 interface em1.1002
On your CLC, you should see new interfaces being created, called "mbgp_X". This is a good sign: it means that your BGP processes are running and broadcasting information. Let's check on the upstream BGP that we have learned those routes.
Hello, this is Quagga (version 0.99.22.4).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
router# show ip bgp
BGP table version is 0, local router ID is 172.16.255.254
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
* 172.16.0.0 0.0.0.0 0 32768 i
* 172.16.142.0/24 172.16.128.100 0 0 66742 i
Total number of prefixes 2
Here we can see that router 66742 has announced that it knows the route to the subnet 172.16.142.0/24.
Now, if we give an EIP to an instance in our cloud, we will be able to reach that instance and / or host services accessible from - potentially - anywhere.