Kubernetes?

The funny thing about the massive crater created by the recent explosion of container management systems is that it’s forcing people working in systems administration to come to terms with the container paradigm… or perhaps it’s more that sysadmins have been pushing businesses toward automating infrastructure, which is part of the package in most PaaS (Platform as a Service) offerings… quick note here: I’m going to mention Apcera, where it was Policy as a Service, a much more effective approach to securing jobs and services for a broad range of uses…

Either way, I’ve been working with Kubernetes lately, tinkering with deployments in my home lab; the same lab where I’d been running my small Apcera cluster (RIP dojo-cluster) for a little over a year. They are functionally similar, but Kubernetes notably offers little resistance to getting up and running, whereas Apcera’s policy engine and NATS messaging layer made for robust security, but deployment was cagey if you hadn’t consumed a few days’ worth of reading in the docs.
 
Back in 2012 it was becoming apparent that connected services and applications needed a way to be managed and deployed at large scale. Infrastructure as Code began to gain favor around this time, as tools like Puppet, Chef, Ansible, and SaltStack emerged on the marketplace. Subsequently, the skills to use these tools saw a rise in demand as they became more common in the years since. The driving force behind this tooling was the need to know that machines looked the way you wanted them to, and to be able to describe that state with code. The Infrastructure as Code movement started with configuration management and has evolved even further since then, bringing us tools like Terraform, Habitat, and Vagrant.

Kubernetes brings ‘Google style’ cluster management capabilities to the world of virtual machines and ‘on the metal’ scenarios.

Kubernetes is a declarative system…

Stop, what’s that?

In Kubernetes you define a deployment with a YAML config: you declare the state you want, and Kubernetes works to make reality match it and keep it that way. Effectively, it’s Infrastructure as Code applied to your running deployments.
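To sketch what that declarative config looks like, here’s a minimal hypothetical Deployment manifest; the name, labels, and image here are just illustrative examples, not anything from my lab:

```yaml
# A minimal Kubernetes Deployment: declare the desired state,
# and Kubernetes works to keep the cluster matching it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # example name
spec:
  replicas: 3                # desired state: three pods, always
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.21    # example container image
        ports:
        - containerPort: 80
```

You’d hand this to the cluster with `kubectl apply -f deployment.yaml`. The declarative part is what happens next: if a pod dies, Kubernetes notices the drift from the declared three replicas and starts a replacement without anyone logging into a machine.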

When you consider what Kubernetes does for an organization, it’s worth thinking specifically about what it does for an engineering organization.

Typically a management layer exists above the operating system. You would generally work with individual machines when managing applications, and linking them together involved putting services on top of other operating systems that would allow you to connect those applications. You probably automated this process with configuration management (Chef, say), but ultimately the operators worked at the operating system layer.

Since Kubernetes came onto the stage, there’s been a change in how that abstraction layer deploys an application. When you deploy an application, operators no longer need to install software on the OS. Kubernetes pulls in and runs what the application needs, essentially filling the dependencies for its successful operation.

This is going to manifest in some interesting ways, especially in a datacenter, where Kubernetes could be used to manage the available resources of a whole farm of machines sitting in a rest state and becoming active only on demand.

We are in so much trouble!