JAX DevOps Blog

Wishes and Expectations on Kubernetes

JAX DevOps, 14-17 May 2019
The Conference for Continuous Delivery, Microservices, Docker & Clouds

17 Apr 2019

Kubernetes’ explosive growth continued in 2018; where will this essential tech go in 2019? In this article, JAX DevOps speaker Andrew Martin explores some of his hopes and dreams for Kubernetes in the next year, as well as some more grounded expectations.

Expectations vs. Reality

Kubernetes has continued its meteoric rise in 2018, with 83% of CNCF survey respondents [1] using it to run their workloads. But with so many interested participants in the ecosystem, it can be difficult to separate the signal from the noise. Here are some wishes and expectations for the future of the Kubernetes ecosystem in 2019.


Hosted services catch up with GKE
Google Kubernetes Engine (GKE) has been ahead of the competition since launch, and it continues to ship features faster than its rivals, with hosted Istio recently hitting beta. Microsoft’s Azure is a strong competitor, with node auto-scaling and network policy both launching in late 2018.
By contrast, Amazon has been notably slow to deliver a hosted Kubernetes solution in favor of its own ECS service. However, the Elastic Kubernetes Service (EKS) has finally launched. Along with the eksctl tool [2], EKS now provides a viable alternative to user-provisioned masters on EC2. Digital Ocean now has a managed offering, too. As these managed services converge, we can hope to see wider feature sets, deeper service integrations like AWS’s Service Operator [3], and tighter default security profiles for these services.
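As a sketch of how little configuration a managed cluster now needs, here is an eksctl cluster definition; the cluster name, region, and node group sizing are illustrative:

```yaml
# cluster.yaml -- illustrative eksctl ClusterConfig; name, region, and sizing are examples
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-west-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
```

Running `eksctl create cluster -f cluster.yaml` provisions the control plane and the node group from this one file.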

Non-container (VM-based) isolation improves
Containers revolutionized web application development and deployment, stealing market share from Virtual Machines with faster startup and smaller footprints. Now, the circle is closing. With projects like Kata Containers [4], NABLA Containers [5], Google’s gVisor [6], and AWS’s Firecracker [7], container-compatible virtual machines are fighting for market share.
Relying on virtual machines for isolation requires fine-tuning of start times, security settings, and developer experience to match what we have become used to in containers. As the projects mature, Kubernetes should be able to orchestrate VMs transparently to the end user. Projects such as KubeVirt [8] and firecracker-containerd [9] have begun this process already. The option to wrap processes in whichever isolation technology is most appropriate to their workload may greatly enhance security without compromising usability and performance. The holy grail!
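If per-workload runtime selection lands as hoped, picking a VM-based runtime could be as simple as setting the (currently alpha) runtimeClassName field on a pod. This sketch assumes an administrator has registered a RuntimeClass named "kata" that maps to a Kata Containers handler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  # Assumes a RuntimeClass named "kata" exists and maps to a Kata Containers handler
  runtimeClassName: kata
  containers:
    - name: app
      image: nginx:1.15
```

The rest of the pod spec is unchanged, which is exactly the transparency described above: the isolation technology becomes a per-workload knob rather than a cluster-wide decision.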

The tangle of YAML unravels
Kubernetes requires a lot of YAML to configure, and the difficulty of taming this complexity has spawned various approaches and tools.
Helm is the most used [10] and the most flexible templating solution, but its in-cluster Tiller component has had some security issues. Ksonnet [11] offers a hierarchical method for templating that favors inheritance. At the other end of the spectrum, users are using tools like Ansible [12] and Terraform [13] to deploy applications, which are arguably the wrong abstractions for the job.
But now Kustomize [14] has been merged into kubectl [15] – and with it yet another YAML format for generating Kubernetes resources. As applications tend to choose a single templating tool to deliver YAML, I hope to see some standardization across these tools, or possibly a mechanism to transform between them.
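The kustomization format now shipping with kubectl is itself just more YAML; a minimal sketch (the resource file names and labels here are illustrative) looks like this:

```yaml
# kustomization.yaml -- illustrative; the referenced files are examples
namePrefix: staging-
commonLabels:
  app: guestbook
resources:
  - deployment.yaml
  - service.yaml
patchesStrategicMerge:
  - replica-patch.yaml
```

`kustomize build .` (or `kubectl apply -k .` with the merged integration) renders the base resources with the prefix, labels, and patches applied, without any templating language.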

Image and build metadata security matures
Supply chain security – compromising an upstream supplier and using the target’s trust in them to compromise the target – is gaining recognition as an easy attack vector. The NotPetya ransomware attack, which spread through a compromised Ukrainian software vendor to victims including Maersk [16], Magecart’s attacks on Ticketmaster’s and BA’s suppliers [17], and the NPM event-stream module poisoning [18] all suggest attackers will look to exploit the supply chain in 2019.
Fortunately, Kubernetes and container supply chains have been the subject of scrutiny in recent years. Tools such as Notary [19] (ensuring images match their expected content with side-channel GPG signatures using TUF [20]), Grafeas [21] (Google Cloud’s Binary Authorization [22] technology exposed as an open source project), and in-toto [23] (pipeline metadata security and policy control) all expose admission controllers to validate images as they are deployed to Kubernetes.
These tools dramatically increase an organization’s compromise resilience. They can be used to limit supply chain attack vectors in build pipelines and for images deployed to Kubernetes. These tools need greater awareness as projects start to distribute signed software, which ultimately leads to increased trust in Kubernetes workloads.
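The admission-control hook these tools plug into is standard Kubernetes configuration. A hypothetical registration that sends every pod creation to an image-validation service might look like this; the webhook name, namespace, service, and path are invented for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy
webhooks:
  - name: image-policy.example.com        # hypothetical webhook name
    failurePolicy: Fail                   # reject pods if the validator is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: security               # hypothetical namespace
        name: image-validator             # hypothetical service backing the webhook
        path: /validate
      caBundle: ""                        # base64-encoded CA bundle for the webhook's TLS certificate
```

With failurePolicy set to Fail, unvalidated images never reach the cluster even if the validator is down, which is the fail-closed posture a supply chain defense wants.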

Rootless container runtimes become standard
The oldest criticism of Docker is that its daemon runs as root, so an escape from a container via the container runtime can potentially gain root on the host. The last few years have seen progress on the challenges of integrating user namespaces so that containers can run unprivileged.
LXC [24] already solves some of these problems, but it is not supported by Kubernetes. However, usernetes [25], an experimental binary distribution of Kubernetes, runs rootless Moby (Docker) and CRI-O without root privilege by using user namespaces. If this approach gains traction, we will see a dramatic improvement in the safety of containerized workloads, and therefore of Kubernetes itself.


Standardization at all layers of Kubernetes deployments
2018 saw the Cluster API [26] introduce an API for machine and cluster provisioning, the kubeadm control plane provisioner reach general availability [27], and the GitOps [28] application deployment pattern rise as the logical progression of infrastructure as code.

Each of these projects addresses a deployment problem end users have struggled with, at different layers: from the provisioning of machines, to the Kubernetes control plane, to application workloads. We will continue to see adoption of these projects as they reach maturity.
Notably absent is Federation v2, which builds on the lessons learned implementing cluster Federation v1. The complexity of herding distributed systems has yielded some valuable lessons, but the project may need more than a year of testing to reach production readiness.

Service meshes will see widespread adoption
Service meshes hijacked KubeCon Austin in 2017 and again in Copenhagen and Seattle in 2018, with the Kubernetes-native Istio and Linkerd 2 as the front-runners. Envoy, the proxy that powers Istio, has already won the hearts of the cloud native community with its snappy performance, container-friendly immutable configuration model, and hot-reload capability.
Commercial entities are building around Envoy (including Tetrate [29], Solo [30], and Octarine [31], AWS’s App Mesh [32], Hashicorp’s Consul Connect [33], and a slew of others [34]), whilst Google’s Knative [35] has launched a full developer-focused platform on top of Istio.
The steep learning curve will begin to be outweighed by the security, availability, and observability guarantees of stable service meshes. Expect to see general adoption by high compliance enterprises that would otherwise have to manage their own network encryption and policy.
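As a taste of what that buys, namespace-wide mutual TLS in Istio (using the 1.x authentication API current at the time of writing) is a short policy. The namespace below is illustrative, and a matching DestinationRule with ISTIO_MUTUAL traffic policy is also needed so clients present certificates:

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default            # "default" makes this the namespace-wide policy
  namespace: payments      # illustrative namespace
spec:
  peers:
    - mtls: {}             # require mutual TLS for all services in the namespace
```

A few lines of declarative policy replace what would otherwise be per-service certificate distribution, rotation, and enforcement.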

Rootless build systems will replace Docker socket-sharing
Rootless container image builds (as distinct from rootless container runtimes) have been on the horizon for a couple of years, with orca-build [36], BuildKit [37], and img [38] proving the concept. They allow container images to be built without exposing the Docker socket, which can be used to escalate privilege and is probably a backdoor into most Kubernetes-based CI build farms.
With a slew of new rootless tooling emerging including Red Hat’s buildah [39], Google’s Kaniko [40], and Uber’s Makisu [41], we will see build systems that will eventually support building untrusted Dockerfiles, although there are outstanding issues that prevent these tools achieving that today.
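For instance, Kaniko runs a build as an ordinary unprivileged pod, with no Docker socket in sight. In this sketch the Git repository and registry are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git        # placeholder repository
        - --destination=registry.example.com/app:latest     # placeholder registry
      volumeMounts:
        - name: registry-creds
          mountPath: /kaniko/.docker
  volumes:
    - name: registry-creds
      secret:
        secretName: regcred                                 # registry credentials secret
        items:
          - key: .dockerconfigjson
            path: config.json
```

The build executes entirely in userspace inside the pod, so a malicious Dockerfile compromises at most that pod rather than the node's container runtime.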

FaaS adoption will continue to increase
Serverless offerings, also referred to as Functions as a Service (FaaS), will continue to fight for market share. There is obvious interest in the promise of reduced resource utilization and pay-per-use computation.
The original managed services that triggered the trend have seen huge adoption. AWS’s Lambda has finally introduced a layered ZIP format [42] that allows the same type of composition as Docker images. The Kubernetes-hosted equivalents OpenFaaS [43], Knative [44], Kubeless [45], and Fission [46] will battle to deliver the smoothest developer experience and greatest feature set.
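As a sketch of the developer experience these platforms compete on, a Knative service (v1alpha1 API at the time of writing, using a sample image from Knative's documentation) is a single resource:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go
            env:
              - name: TARGET
                value: "world"
```

From this one manifest the platform derives routing, revisions, and scale-to-zero, which is the smoothness bar the other frameworks are chasing.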

Kubernetes operators and CRDs will explode in popularity
Now that Kubernetes 1.13 supports multi-version Custom Resource Definitions (CRDs) and conversion via webhooks, the reimplementation of Third Party Resources (deprecated in 1.7) is complete. CRDs allow existing Kubernetes APIs to be extended, or entirely new APIs to be added.
As databases such as Vitess, Oracle, and MongoDB launch operators that manage their products at runtime using CRDs, application developers will follow, utilizing application scaffolding like the Operator Framework [47] to manage Kubernetes native applications and decrease the operational burden on SREs.
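A minimal sketch of the mechanism, where the example.com group and Backup kind are invented for illustration:

```yaml
# Registers a new API type with the cluster; the group and kind are hypothetical
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1alpha1
      served: true
      storage: true
---
# An instance of the new type; an operator watches these and acts on them
apiVersion: example.com/v1alpha1
kind: Backup
metadata:
  name: nightly
spec:
  database: orders
  schedule: "0 2 * * *"
```

The operator pattern is simply a controller reconciling resources like this Backup against reality, encoding the runbook an SRE would otherwise follow by hand.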


The Kubernetes community continues to innovate and inspire, driven as much by open source interests as commercial entities. The work done behind the scenes by SIG leads, developers, community and conference organizers, and end-users has been invaluable to the growth of the ecosystem. With the predicted growth in 2019, it’s hard to see an end in sight.

JAX DevOps “Docker & Kubernetes” Track

Interested in learning how to resolve Kubernetes production outages? Andrew Martin will be leading a workshop at JAX DevOps in May 2019. His workshop, “Kubernetes production debugging”, is part of the Docker & Kubernetes track, which is all about exploring best practices for working with these technologies. Join us at JAX DevOps in London this May!


[1] https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/
[2] https://github.com/weaveworks/eksctl
[3] https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/
[4] https://katacontainers.io/
[5] https://github.com/nabla-containers
[6] https://github.com/google/gvisor
[7] https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/
[8] https://github.com/kubevirt/kubevirt
[9] https://github.com/firecracker-microvm/firecracker-containerd
[10] https://kubernetes.io/blog/2018/04/24/kubernetes-application-survey-results-2018/
[11] https://ksonnet.io/
[12] https://www.ansible.com/
[13] https://www.terraform.io/
[14] https://github.com/kubernetes-sigs/kustomize
[15] https://github.com/kubernetes/kubernetes/pull/70875
[16] https://en.wikipedia.org/wiki/2017_cyberattacks_on_Ukraine?oldformat=true#Affected_companies
[17] https://tech.newstatesman.com/security/magecart-ba-ticketmaster
[18] https://medium.com/intrinsic/compromised-npm-package-event-stream-d47d08605502
[19] https://github.com/theupdateframework/notary
[20] https://theupdateframework.github.io/
[21] https://grafeas.io/
[22] https://cloud.google.com/binary-authorization/
[23] https://in-toto.github.io/
[24] https://linuxcontainers.org/
[25] https://github.com/rootless-containers/usernetes
[26] https://github.com/kubernetes-sigs/cluster-api
[27] https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/
[28] https://www.weave.works/blog/what-is-gitops-really
[29] https://www.tetrate.io/
[30] https://www.solo.io/
[31] https://www.octarinesec.com/
[32] https://aws.amazon.com/about-aws/whats-new/2018/11/introducing-aws-app-mesh---service-mesh-for-microservices-on-aws/
[33] https://www.consul.io/docs/connect/index.html
[34] https://www.envoyproxy.io/community
[35] https://cloud.google.com/knative/
[36] https://github.com/cyphar/orca-build
[37] https://github.com/moby/buildkit
[38] https://github.com/genuinetools/img
[39] https://github.com/containers/buildah
[40] https://github.com/GoogleContainerTools/kaniko
[41] https://github.com/uber/makisu
[42] https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
[43] https://www.openfaas.com/
[44] https://cloud.google.com/knative/
[45] https://kubeless.io/
[46] https://fission.io/
[47] https://github.com/operator-framework/operator-sdk
