Rachel's Yard | A New Continuation
Back in 2014, I was like, "Screw OpenStack." However, OpenNebula is really taking a toll on my administrative time.
Networking in OpenNebula model:
Networking in this model is pretty much FlatDHCP in OpenStack's terms: a shared L2 domain, lots of broadcast traffic, and blah. And if a VM needs a public IP, I need to go into pfSense, set up a 1:1 NAT to the VM, change the SNAT, add firewall rules, and so on. I'd have to do much the same in OpenStack, but at least Neutron offers a more flexible model.
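For comparison, publishing a VM in Neutron is a couple of CLI calls instead of a trip through the pfSense UI. A rough sketch with the Juno-era neutron client (the "public" external network and all IDs below are placeholders):

```shell
# "public" is assumed to be the external (floating IP) network;
# FLOATINGIP_ID and VM_PORT_ID are placeholders for real UUIDs.
neutron floatingip-create public                  # allocate a floating IP
neutron floatingip-associate FLOATINGIP_ID VM_PORT_ID
# Neutron's L3 agent programs the 1:1 DNAT/SNAT on the tenant router;
# no manual firewall edits required.
```

The same thing is doable through Horizon in a few clicks, which is most of the appeal here.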
Actually, a lot.
Since all the KVM hypervisors running OpenNebula are PXE-booted with iSCSI roots on the Storage pod, it is very easy to switch things back and forth.
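Since the hypervisors are diskless, a node's role boils down to which boot script the storage pod hands it. A minimal iPXE sketch of the idea (the portal address and IQNs are made up for illustration):

```
#!ipxe
# Hypothetical iSCSI portal (10.0.0.10) and target IQN; adjust to taste.
set initiator-iqn iqn.2015-03.net.example:node01
sanboot iscsi:10.0.0.10::::iqn.2015-03.net.example:storage.node01-root
```

Swapping a node between OpenNebula and OpenStack is then just pointing it at a different root LUN and rebooting.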
First of all, we are going to migrate all the "necessary" VMs to one node:
We basically decommissioned four of the KVM hypervisors from OpenNebula to be used with OpenStack.
High Availability, you said? Fear not, VMware is here:
The Controller and Neutron server run on shared storage with vSphere HA and EVC, meaning that if one of the physical nodes (which came from the decommissioned hypervisors) does fail, the others can take over. The Compute nodes, however, run on local SSDs (remember the spring upgrades?), so there's no HA for them, but who cares about HA on compute nodes (lol).
Cinder and Glance run on a proxy node (a Supermicro Atom 1U), which runs the OpenStack services without contaminating the OpenNebula environment (it uses Sunstone's ZFS array for storage).
Networking in OpenStack:
Tenant traffic runs over the management and storage networks as well, because I don't see the need to separate this traffic in my environment. In a future deployment, when I can afford 10Gbps infrastructure and a 1Gbps uplink, I would consider separating it.
Tenant traffic is isolated with GRE tunneling. I found VXLAN uninteresting for a few reasons, although in theory VXLAN would achieve higher throughput (no MTU bullshit here yet).
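The MTU remark is about encapsulation overhead: both tunnels steal bytes from the 1500-byte underlay MTU, just different amounts. A back-of-the-envelope sketch (assuming an IPv4 underlay and GRE with the key field, as OVS uses it):

```shell
PHYS_MTU=1500
# Per-packet overhead on an IPv4 underlay:
#   GRE:   outer IP (20) + GRE w/ key (8) + inner Ethernet (14) = 42
#   VXLAN: outer IP (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50
GRE_OVERHEAD=42
VXLAN_OVERHEAD=50
echo "GRE tenant MTU:   $((PHYS_MTU - GRE_OVERHEAD))"    # 1458
echo "VXLAN tenant MTU: $((PHYS_MTU - VXLAN_OVERHEAD))"  # 1450
```

Until the guests' MTU is clamped (via DHCP option 26 or otherwise), either encapsulation will fragment or drop full-size frames, hence the "bullshit."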
Now, if we are going to use VMware to virtualize the infrastructure, we might as well use a dSwitch:
Teaming/load balancing is set to "Route based on physical NIC load."
In my case, the VMware layer is a means to rapidly spin up/spin down/start over in case Packstack fucks something up horribly, but to my surprise, Packstack actually deployed the stack in 15 minutes, and everything works beautifully.
Here's my answer file if anyone is interested: 20150326.txt
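For flavor, these are the kinds of knobs that matter in an answer file like this one (a hypothetical excerpt, not the actual file; the key names are from Juno/Kilo-era Packstack and the IPs are placeholders):

```
# Hypothetical Packstack answer-file excerpt; IPs are placeholders.
CONFIG_CONTROLLER_HOST=10.0.0.11
CONFIG_NETWORK_HOSTS=10.0.0.12
CONFIG_COMPUTE_HOSTS=10.0.0.21,10.0.0.22

# GRE tenant networks, as described above:
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=gre,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=gre
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1:1000
```

Everything else Packstack fills in with sane defaults, which is why the whole deploy fits in 15 minutes.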
Now it is a matter of running more tests, and if everything is okay, it's time to migrate from OpenNebula. Why? Because SDN is the goal:
Because Intel NUC is just not gonna cut it.
Let me walk you through...
From the top, there is a pfSense node. It is running:
And of course there is the TP-Link switch for awesomeness...
Going down into the unknown and the Supermicros:
You may have noticed: "Look ma, no SSDs/hard drives!" Yes, they all PXE boot and iSCSI root from the Storage pod. If I'm going to create a single point of failure with centralized storage anyway, I might as well put all the eggs in one basket.
There are an HP ProLiant DL160 G6 and a Dell R410 right on top; they are running:
The HP one has 4 x 3TB Seagate Barracudas connected to the chipset SATA ports and passed through via VT-d to one of the FreeNAS virtual machines to provide backups.
The last one is the storage pod, nothing new: E3-1220 V3, 4x8GB ECC, 8x1TB original Hitachi drives.
Man, Nehalem is still powerful yet doesn't break the bank. I will have a Westmere-EP (X5650) coming next week, and I plan to rent out two of my other R410s (2xX5570 and 1xE5620) to Chinese customers.
I plan to use the X5650 as a remote workstation with a Quadro K2000, because it might come in handy.
The magnificent OpenNebula: