Rachel's Yard | A New Continuation

Tags: Kubernetes

On Tuesday, half of the Internet broke because S3 was fucked. SlugSurvival happened to have a bug that needed a hotfix, but since Docker Hub uses S3 as its backend, I could not push my images!

Outrageous!

Thus, I had to set up my own registry in my racks of servers.

Setup

  1. Minio
  2. Kubernetes
  3. mkuchin/docker-registry-web

As Fast As Possible

First, set up the authentication service; refer to mkuchin/docker-registry-web. You should have a public key and a private key ready, then kubectl create a secret for the public key part.
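For example, assuming you generate the token-signing keypair with openssl (the filenames and subject here are placeholders, not the exact ones I used), the secret could be created like this:

```shell
# Generate a keypair for signing registry auth tokens
# (filenames and CN are placeholders; adjust to your setup)
openssl req -new -newkey rsa:4096 -nodes -x509 \
    -subj "/CN=registry-token-issuer" \
    -keyout auth.key -out auth.crt

# The registry only needs the public half to verify tokens;
# this becomes the "auth-signing" secret mounted at /auth/auth.crt
kubectl create secret generic auth-signing \
    --from-file=auth.crt=auth.crt \
    --namespace=kube-system
```

The private key stays with the auth server, which uses it to sign the tokens that the registry verifies against auth.crt.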

Then, spin up the registry:

{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "kube-registry",
    "namespace": "kube-system"
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "matchLabels": {
        "app": "kube-registry"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "kube-registry"
        }
      },
      "spec": {
        "volumes": [{
          "name": "auth-signing",
          "secret": {
            "secretName": "auth-signing"
          }
        }],
        "containers": [{
          "name": "sysctl-buddy",
          "image": "alpine:latest",
          "command": [
            "/bin/sh",
            "-c",
            "while true; do\n sysctl -w net.core.somaxconn=32768 > /dev/null 2>&1\n sleep 10\ndone\n"
          ],
          "imagePullPolicy": "IfNotPresent",
          "securityContext": {
            "privileged": true
          }
        }, {
          "name": "registry",
          "image": "registry:2.5.1",
          "resources": {
            "requests": {
              "cpu": "200m",
              "memory": "1024Mi"
            },
            "limits": {
              "cpu": "2",
              "memory": "2048Mi"
            }
          },
          "env": [{
            "name": "REGISTRY_LOG_LEVEL",
            "value": "warn"
          }, {
            "name": "REGISTRY_HTTP_ADDR",
            "value": ":5000"
          }, {
            "name": "REGISTRY_HTTP_HOST",
            "value": ""
          }, {
            "name": "REGISTRY_HTTP_SECRET",
            "value": ""
          }, {
            "name": "REGISTRY_STORAGE",
            "value": "s3"
          }, {
            "name": "REGISTRY_STORAGE_DELETE_ENABLED",
            "value": "true"
          }, {
            "name": "REGISTRY_STORAGE_S3_REGION",
            "value": "us-east-1"
          }, {
            "name": "REGISTRY_STORAGE_S3_REGIONENDPOINT",
            "value": "http://minio-docker"
          }, {
            "name": "REGISTRY_STORAGE_S3_BUCKET",
            "value": "images"
          }, {
            "name": "REGISTRY_STORAGE_S3_ACCESSKEY",
            "value": "docker"
          }, {
            "name": "REGISTRY_STORAGE_S3_SECRETKEY",
            "value": "supersecret"
          }, {
            "name": "REGISTRY_STORAGE_S3_ENCRYPT",
            "value": "false"
          }, {
            "name": "REGISTRY_STORAGE_S3_SECURE",
            "value": "false"
          }, {
            "name": "REGISTRY_STORAGE_S3_CHUNKSIZE",
            "value": "20971520"
          }, {
            "name": "REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR",
            "value": "redis"
          }, {
            "name": "REGISTRY_REDIS_ADDR",
            "value": "docker-redis-cache:6379"
          }, {
            "name": "REGISTRY_AUTH",
            "value": "token"
          }, {
            "name": "REGISTRY_AUTH_TOKEN_REALM",
            "value": "point to your auth server"
          }, {
            "name": "REGISTRY_AUTH_TOKEN_SERVICE",
            "value": "must match"
          }, {
            "name": "REGISTRY_AUTH_TOKEN_ISSUER",
            "value": "must match"
          }, {
            "name": "REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE",
            "value": "/auth/auth.crt"
          }],
          "ports": [{
            "containerPort": 5000,
            "name": "registry",
            "protocol": "TCP"
          }],
          "volumeMounts": [{
            "name": "auth-signing",
            "mountPath": "/auth",
            "readOnly": true
          }]
        }]
      }
    }
  }
}

This also assumes that you have a Redis instance running somewhere in the same namespace.

Third, profit (and of course, set up the Service and Ingress as well).
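A minimal Service for the Deployment above might look like this; treat it as a sketch rather than the exact manifest I used (the Ingress will vary by controller, so it is omitted):

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "kube-registry",
    "namespace": "kube-system"
  },
  "spec": {
    "selector": {
      "app": "kube-registry"
    },
    "ports": [{
      "name": "registry",
      "port": 5000,
      "targetPort": 5000,
      "protocol": "TCP"
    }]
  }
}
```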

Caveat

As of the time of this writing, registry:2 corrupts large layers (see my issue) when using third-party S3 backends. You need to pin registry:2.5.1 instead of the floating registry:2 tag.

Oct 15 2016

What the Hell Is This?

It is basically a glorified calendar web app written in VueJS (no backend) that helps you (students) plan and search your classes better.

Here's a TL;DR page for you.

What the Hell? Why Reinvent the Wheel?

Well, sometimes the school's AIS is too slow for my liking. Also, I have always dreamed of being able to enroll in classes with ease. However, a typical enrollment process/class check has always gone like this:

  1. Search your classes on the AIS (Now they have a better interface)
  2. Add the classes that you want to the shopping cart
  3. Oh wait, before you can do that, select the section that you want
  4. Then you add your class+section to the shopping cart
  5. Add classes *click*
  6. Ah shit, class conflicts
  7. Repeat

Finding your classes should not be that hard.

OK, So It's a Calendar App, No Big Deal, Right?

You are wrong.

For starters, where do you get the data? It was sort of impossible before, until the school rolled out a better interface on PISA in 2015, which uses Bootstrap and is actually human readable. Now we can use all kinds of crazy DOM parsers to find the class data.

Of course, me being me, I always write spaghetti code first and fix it later. This is what it looks like right now:

// Section number and section name, e.g. "01A" (cell layout per PISA's markup)
split = sectionDom[i].children[1].children[0].data.split(' ');
section.num = split[0].match(/\d+/g)[0];
section.sec = split[2];

// Meeting time plus location, stripping the "Loc: " prefix
section.loct = [{
    t: classDataCompatibleTime,
    loc: sectionDom[i].children[7].children[0].data.replace('Loc: ', '')
}];

// Instructor name, and capacity (the part after the "/" in "enrolled / capacity")
section.ins = sectionDom[i].children[5].children[0].data.trim();
section.cap = sectionDom[i].children[9].children[0].data
    .substring(sectionDom[i].children[9].children[0].data.lastIndexOf('/') + 1)
    .trim();
sections.push(section);

Well, don't worry about it; it gets the job done, at least for now. I will use prev() and next() and whatnot when I actually have time to improve the codebase.

Fine, So What? Does Your App Enroll Classes For Users Too? HACKS?!

No, it does not enroll users automatically.

Well no shit, Sherlock. It involves student credentials, and I don't want to fuck with that.

SO YOUR APP DOES SHITS

Calm down, it will notify you when your classes open up. Basically, I have a dispatcher and a bunch of workers that poll data from the website and insert the changes into the database. It does all sorts of magical stuff in the background. Allow me to explain:

Architecture of SlugSurvival

  1. The Data Fetcher periodically compares the term list on S3 and PISA (usually every 4 or 7 days). If there are new terms available, it will fetch the newest term automatically and upload the course data to S3. If there are no new terms, it will refresh the data for the current quarter until the drop deadline, since there are usually changes to the courses up until then.
  2. The Data Fetcher also spawns workers periodically to fetch data from RateMyProfessors (usually every 14 days), and only does so incrementally.
  3. Then, the frontend loads the data from S3, and you will see something like this: http://url.sc/fall2016
  4. The real MVP here is the watcher. It polls openings data from PISA and inserts the changes into RethinkDB; db2Queue has a changefeed to push the delta to another queue, where the notification API can notify students about their class openings.
  5. Of course, when you have time series data, you should graph it
  6. At the time of writing, I'm still trying to improve the automation aspects of the notification component (Tracked Here). But the idea is to unsubscribe users automatically after the drop deadline, etc.
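The watcher's core diffing step can be sketched in a few lines of plain JavaScript. Everything here is hypothetical (the real shapes live in the watcher/db2Queue code): two polls of per-section open-seat counts go in, and the delta that would be written to RethinkDB comes out.

```javascript
// Hypothetical sketch of the watcher's diff step: compare the
// previous poll of PISA against the current one and emit only
// the sections whose availability changed.
function diffOpenings(prev, curr) {
    var changes = [];
    Object.keys(curr).forEach(function(sectionNum) {
        var before = prev[sectionNum];
        var now = curr[sectionNum];
        // New section, or the open-seat count moved
        if (!before || before.open !== now.open) {
            changes.push({
                num: sectionNum,
                open: now.open,
                wasOpen: before ? before.open : null
            });
        }
    });
    return changes;
}

// A section going from 0 open seats to 2 is exactly the event
// subscribers want to be notified about.
var prev = { '12345': { open: 0 }, '67890': { open: 5 } };
var curr = { '12345': { open: 2 }, '67890': { open: 5 } };
console.log(diffOpenings(prev, curr));
// → [ { num: '12345', open: 2, wasOpen: 0 } ]
```

Pushing each element of that array into a queue is then all the changefeed consumer has to do; the notification API takes it from there.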

So yeah, this is sort of a big project in terms of reliability and automation requirements. I do want to talk to the school and see if they want to use this as part of the AIS.

I will update this post when I have more time and more changes are made.

Aug 2 2016

For the sake of humanity, let's point out the caveats first:

  1. If you ever change the CA or apiserver TLS certificate, remember to delete the default secret tokens for ALL namespaces, and recreate the services/pods. The applications in each pod access the API with serviceaccount credentials, which are only generated once. Therefore, if you change the TLS certs, the old secrets will be invalid.
  2. If you are running multiple master components, remember to add the --apiserver-count=<count> flag to kube-apiserver. Otherwise, the apiservers will fight for control of the service endpoints.
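In shell terms, the first caveat boils down to something like the following (a sketch with simplified namespace handling; double-check before running it against a real cluster):

```shell
# After rotating the CA/apiserver certs, the default serviceaccount
# tokens were signed against the old certs and are now invalid.
# Deleting them makes the controller-manager mint fresh ones.
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
    for secret in $(kubectl get secrets --namespace="$ns" -o name | grep default-token | cut -d/ -f2); do
        kubectl delete secret --namespace="$ns" "$secret"
    done
done

# Then recreate the pods (e.g. delete them and let the controllers
# respawn them) so they mount the regenerated tokens.
```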

Now, let's get to the topic.

Billions of $$$ are awesome, but how do you invest? (joke)

Docker is awesome (late to the party again), but how do you manage it?

TL;DR: Kubernetes is (sort of) a management tool for containers.

I will just skip the introduction, since there are many articles out there already.

Installing Kubernetes

The best way of running Kubernetes is to deploy it on CoreOS, period.

https://coreos.com/kubernetes/docs/latest/getting-started.html

See, all other OSes are too heavyweight. CoreOS (SmartOS aside) is highly specialized for running containers, so there's that. Plus, the folks at Quay.io are kind enough to have a kubelet image ready. Kubelet is a binary that contains all the components you will need to run Kubernetes (Golang is awesome), and the Quay/CoreOS team runs it in a rkt container, which makes updates/upgrades easy.

Of course, all components are stateless. Persistent state is stored in an etcd cluster (that seems to be the trend).

What's my environment?

An obligatory graph of architecture:

Kubernetes

Here are my cloud-config files for my deployment on OpenNebula: https://coreos-opennebula.s3.fmt01.sdapi.net/cloud-config/

What am I running?

Currently only a project written for an econ professor, but I will containerize more of my projects.

Ingress

The nginx controller in kubernetes/contrib kind of blows. So I (sort of) compiled the newest nginx with ChaCha20 ciphers:

Docker: https://hub.docker.com/r/zllovesuki/nginx-slim/

Git: https://git.fm/zllovesuki/nginx-slim/

You will need to recompile the controller with this tag.
