Rachel's Yard | A New Continuation
Update 1: E3-12xx v4 has a lead time of 4 weeks from Asia via ACME; no source via SuperBiiz.
So I've been looking at Paperspace.io and Sixa.io for virtual desktop solutions, and they do look very appealing. However, is it possible to run something like that in a home-lab environment?
Currently, I have XenServer running on my E5-1650v3 node (256GB) with Apache Guacamole for VDI. I do want to enable some sort of GPU capability (e.g., for my Jetson TX2 training), but I will use Sixa.io for now; my workstation has very shitty Linux support. Looking at XenServer's HCL, Intel seems to be a promising solution...
However, Intel is being a stupid asshole: E3-15xx v5 parts are available only in BGA (WTF?), and E3-12xx v4 parts are not available anywhere except by very special order. I was able to find v5 chips, but only in Asia (none available in North America). What the fuck?
But the Skull Canyon NUC does have an Iris Pro-equipped processor and is unofficially supported by XenServer...
Here's the pricing as of the time of writing:
On Tuesday, half of the Internet broke because S3 was fucked. SlugSurvival happened to have a bug that needed a hotfix, but since Docker Hub uses S3 as its backend, I could not push my images!
Thus, I had to set up my own registry in my racks of servers.
First, set up the authentication service. Refer to mkuchin/docker-registry-web. You should have a public key and a private key ready.
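For the key pair, something like this works (the file names and the CN are my own choices, not from the original setup):

```shell
# Generate an RSA key pair as a self-signed certificate;
# the token service signs with auth.key, the registry verifies with auth.crt
openssl req -x509 -newkey rsa:4096 -nodes \
  -subj "/CN=registry-auth" \
  -keyout auth.key -out auth.crt -days 365
```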
Use `kubectl` to create a secret for the public key part.
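Something along these lines (the secret and namespace names are illustrative, not from the original post):

```shell
# Hypothetical names; auth.crt is the public half used to verify tokens
kubectl create secret generic registry-auth-cert \
  --from-file=auth.crt=./auth.crt \
  --namespace=registry
```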
Then, spin up the registry:
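A sketch of what that deployment might look like, assuming a secret named `registry-auth-cert` holding the token-verification cert (all names, the S3 endpoint, and the token realm are placeholders; the env var names are the standard `registry:2` configuration overrides):

```yaml
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.5
kind: Deployment
metadata:
  name: registry
  namespace: registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          env:
            - name: REGISTRY_STORAGE
              value: s3
            - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
              value: https://s3.example.com   # your 3rd-party S3 endpoint
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: registry
            - name: REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR
              value: redis
            - name: REGISTRY_REDIS_ADDR
              value: redis:6379   # the Redis service in the same namespace
            - name: REGISTRY_AUTH
              value: token
            - name: REGISTRY_AUTH_TOKEN_REALM
              value: https://auth.example.com/auth
            - name: REGISTRY_AUTH_TOKEN_SERVICE
              value: registry
            - name: REGISTRY_AUTH_TOKEN_ISSUER
              value: auth
            - name: REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE
              value: /certs/auth.crt
          volumeMounts:
            - name: auth-cert
              mountPath: /certs
      volumes:
        - name: auth-cert
          secret:
            secretName: registry-auth-cert
```

You would also need `REGISTRY_STORAGE_S3_ACCESSKEY`/`REGISTRY_STORAGE_S3_SECRETKEY` (ideally injected from a Secret rather than inlined).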
This also assumes that you have Redis running somewhere in the same namespace.
Third, profit (and of course, set up the services and ingress as well).
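The service/ingress part is boilerplate; a minimal sketch, assuming the deployment is labeled `app: registry` and you have a TLS secret for your hostname (both the hostname and secret name here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  selector:
    app: registry
  ports:
    - port: 5000
---
apiVersion: extensions/v1beta1   # Ingress API group of that era
kind: Ingress
metadata:
  name: registry
  namespace: registry
spec:
  tls:
    - hosts:
        - registry.example.com
      secretName: registry-tls
  rules:
    - host: registry.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: registry
              servicePort: 5000
```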
As of the time of this writing:

- `registry:2` corrupts large layers (see my issue) when using 3rd-party S3. You need to use
- You need to recreate the `secret` tokens for ALL namespaces, and recreate services/pods. Your applications in each pod access the API with `serviceaccount` credentials, which are only generated once; therefore, if you change the TLS certs, the old `secrets` will be invalid.
- You need the `--apiserver-count=<count>` flag on your kube-apiserver. Otherwise, the apiservers will fight to get control of the service endpoints.
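For that last point, the same count goes on every apiserver instance; a minimal sketch (the other flags are placeholders for whatever your masters already run with):

```shell
# Run on each of the N master nodes; the count must match the number of apiservers
kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --apiserver-count=3
  # ...plus the rest of your usual flags
```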
Billions of $$$ are awesome, but how do you invest? (joke)
Docker is awesome (late to the party again), but how do you manage it?
TL;DR: Kubernetes is (sort of) a management tool for containers.
I will just skip the introduction, since there are many articles out there already.
The best way of running Kubernetes is to deploy it on CoreOS, period.
See, all other OSs are too heavyweight. CoreOS (along with SmartOS) is highly specialized to run containers, so there's that. Plus, the folks at Quay.io are kind enough to have a kubelet image ready. It ships hyperkube, a single binary that contains all the components you need to run Kubernetes (Golang is awesome), and the Quay/CoreOS team runs it in a rkt container, which makes updates/upgrades easy.
Of course, all components are stateless. Persistent state is stored in an etcd cluster (that seems to be the trend).
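For reference, the etcd piece of a CoreOS cloud-config looks roughly like this (the discovery token is a placeholder; you generate a fresh one per cluster):

```yaml
#cloud-config
coreos:
  etcd2:
    # Generate a new token with: curl https://discovery.etcd.io/new?size=3
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
```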
An obligatory graph of architecture:
Here are my `cloud-config` files for my deployment on OpenNebula: https://coreos-opennebula.s3.fmt01.sdapi.net/cloud-config/
Currently, it only runs a project written for an econ professor, but I will containerize more of my projects.
The nginx controller in kubernetes/contrib kind of blows. So I (sort of) compiled the newest nginx with CHACHA20 ciphers:
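The build boils down to statically linking nginx against an OpenSSL 1.1.0 tree, which is where CHACHA20-POLY1305 support landed. A rough sketch (the version numbers are assumptions; substitute current releases):

```shell
# Fetch sources (versions are illustrative)
curl -LO https://nginx.org/download/nginx-1.11.10.tar.gz
curl -LO https://www.openssl.org/source/openssl-1.1.0e.tar.gz
tar xf nginx-1.11.10.tar.gz && tar xf openssl-1.1.0e.tar.gz

cd nginx-1.11.10
# --with-openssl builds and statically links the given OpenSSL tree,
# which brings in the CHACHA20-POLY1305 cipher suites
./configure \
  --with-http_ssl_module \
  --with-http_v2_module \
  --with-openssl=../openssl-1.1.0e
make -j"$(nproc)"
```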
You will need to recompile the controller with this tag.