
A New Continuation

Tags: VXLAN

Back in 2014, I was like, "Screw OpenStack." However, the current OpenNebula setup is really taking a toll on my administrative time.

Networking in the OpenNebula model:

Networking in OpenNebula

Networking in this model is pretty much FlatDHCP in OpenStack's context: a shared L2 domain, lots of broadcast traffic, and blah. And if a VM needs a public IP, I need to go into pfSense, set up a 1:1 NAT to the VM, change the SNAT, add firewall rules, and such. Though I have to do something similar in OpenStack, at least Neutron offers a more flexible model.
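For contrast, here is a minimal sketch of the same task on the Neutron side, assuming the 2015-era neutron CLI, an external network named public, and placeholder IDs; it is a floating IP plus a security-group rule instead of a round trip through pfSense.

```sh
# Assumptions: an external network called "public" exists and the VM's UUID
# is known; everything in angle brackets is a placeholder.
neutron floatingip-create public

# Find the VM's Neutron port, then bind the floating IP to it
neutron port-list --device-id <vm-uuid>
neutron floatingip-associate <floatingip-id> <port-id>

# "Add firewall rules" becomes a security-group rule
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 default
```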

What Does It Take to Move to OpenStack?

Actually, a lot.

Since all of the KVM hypervisors running OpenNebula are PXE-booted with iSCSI root volumes on the storage server, it is very easy to switch things back and forth.
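To illustrate why the flip is cheap, here is a sketch of what one node's boot script can look like, assuming iPXE chainloading; the TFTP path, storage IP, and IQN are made up. Re-pointing sanboot at a different root LUN is all it takes to move a hypervisor between the two stacks.

```sh
# Hypothetical per-node iPXE script; the IP and IQN are placeholders.
cat > /var/lib/tftpboot/kvm01.ipxe <<'EOF'
#!ipxe
dhcp
# Boot straight off the iSCSI root LUN served by the storage box
sanboot iscsi:10.0.0.5::::iqn.2015-03.example:kvm01-root
EOF
```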

First of all, we are going to migrate all the "necessary" VMs to one node:

OpenNebula

We basically decommissioned 4 of the KVM hypervisors so they could be repurposed for OpenStack.

High Availability, you said? Fear not, VMware is here:

VMware

The controller and Neutron server are running on shared storage with vSphere HA and EVC, meaning that if one of the physical nodes (which came from the decommissioned hypervisors) fails, the others can take over. The compute nodes, however, are running on local SSDs (remember the spring upgrades?), so no HA for them, but who cares about HA on compute nodes (lol).

Cinder and Glance are running on a proxy node (a Supermicro Atom 1U), which runs OpenStack services without contaminating the OpenNebula environment (it uses Sunstone's ZFS array for storage).
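Roughly how that looks on the proxy node, sketched under the assumption that the ZFS array is exported over NFS; the host and share paths below are placeholders, not my actual layout.

```sh
# Cinder: back volumes with an NFS share from the ZFS array
echo "10.0.0.5:/tank/openstack/cinder" > /etc/cinder/nfs_shares
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver \
    cinder.volume.drivers.nfs.NfsDriver
openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config \
    /etc/cinder/nfs_shares
systemctl restart openstack-cinder-volume

# Glance: keep the default filesystem store, but mount it off the array
mount -t nfs 10.0.0.5:/tank/openstack/glance /var/lib/glance/images
```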

Networking in OpenStack:

Networking in OpenStack

Tenant traffic runs over the management and storage networks as well, because I don't see the need to separate that traffic in my environment. In a future deployment, when I can afford 10Gbps infrastructure and a 1Gbps uplink, I would consider separating it.
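In Packstack terms, that is just pointing the tunnel interface at the same NIC everything else uses; a sketch, assuming the answer file is called answers.txt and the shared NIC is eth0:

```sh
# Tunnel endpoints ride on the existing management/storage NIC (assumed eth0)
openstack-config --set answers.txt general CONFIG_NEUTRON_OVS_TUNNEL_IF eth0
```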

Tenant traffic is isolated with GRE tunneling. I found VXLAN uninteresting for a few reasons, although in theory VXLAN would achieve higher throughput (no MTU bullshit here yet).
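The matching answer-file keys for GRE tenant networks look roughly like this (same assumed answers.txt; the tunnel ID range is arbitrary):

```sh
# Use GRE for tenant isolation; flat stays available for the provider network
openstack-config --set answers.txt general CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES gre
openstack-config --set answers.txt general CONFIG_NEUTRON_ML2_TYPE_DRIVERS "flat,gre"
openstack-config --set answers.txt general CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES 1:1000
```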

Now, if we are going to use VMware to virtualize the infrastructure, we might as well use a dSwitch:

dSwitch

Teaming/load balancing is set to "Route based on physical NIC load."

In my case, the VMware layer is a means to rapidly spin up, spin down, or start over in case Packstack fucks something up horribly, but to my surprise, Packstack actually deploys the stack in 15 minutes, and everything works beautifully.
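For reference, the whole Packstack loop is short enough to re-run whenever a VM snapshot gets rolled back; a sketch with RDO-era packaging, where the actual edits are whatever your answer file needs:

```sh
yum install -y openstack-packstack
packstack --gen-answer-file=answers.txt
# ...edit answers.txt (node IPs, plus the GRE and tunnel-interface keys above)...
packstack --answer-file=answers.txt
```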

Here's my answer file if anyone is interested: 20150326.txt

Now it is a matter of running more tests, and if everything is okay, it will be time to migrate from OpenNebula. Why? Because SDN is the goal:

Topology
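This is the self-service model the topology above is aiming at; a sketch with the 2015-era neutron CLI, where all the names are placeholders and no pfSense edits are involved:

```sh
# A tenant builds its own L3 topology: network, subnet, router, uplink
neutron net-create web-net
neutron subnet-create --name web-subnet web-net 192.168.10.0/24
neutron router-create edge
neutron router-gateway-set edge public
neutron router-interface-add edge web-subnet
```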
