Canonical announces full enterprise support for Kubernetes 1.16, starting with the beta release. Support covers the following installation mechanisms: kubeadm, Charmed Kubernetes, and MicroK8s.
The beta release of Kubernetes offers users an opportunity to test upcoming features and to validate containerised workloads on the latest Kubernetes technology. It also offers the user community a chance to give early feedback on the next release, ensuring that new features work as intended and that the existing features they rely upon haven’t regressed.
For quick, secure, and reliable Kubernetes installations in a single step, the MicroK8s beta channel will be updated with Kubernetes 1.16 beta. In addition to supporting the beta, the MicroK8s community has recently added one-line installs of Helm and Cilium. With MicroK8s 1.16 beta you can develop and deploy Kubernetes 1.16 on any Linux desktop, server, or VM across 42 Linux distros. macOS and Windows are supported via Multipass.
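As a sketch of that single-step flow, assuming a snap-enabled Linux machine (the channel and addon names below reflect the 1.16 beta as described above):

```shell
# Install MicroK8s from the 1.16 beta channel.
sudo snap install microk8s --classic --channel=1.16/beta

# Wait until the cluster services are up.
sudo microk8s.status --wait-ready

# One-line addon installs for Helm and Cilium, as mentioned above.
sudo microk8s.enable helm
sudo microk8s.enable cilium
```

Switching back to a stable release later is just a matter of pointing the snap at a different channel with `snap refresh`.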
For fully automated Kubernetes installations across public and private clouds, and bare-metal, including day 2 operations, look for the Charmed Kubernetes 1.16 beta announcement coming soon. Supported deployment targets include AWS, GCE, Azure, Oracle, VMware, OpenStack, LXD, and bare metal.
“Kubernetes 1.16 is on track to include new enhancements in public cloud integration, kubeadm, storage, dual IPv4/IPv6 stack support, and many scheduler and pod management improvements. The Ubuntu ecosystem benefits from the latest features of Kubernetes as soon as they become available upstream, even in beta,” commented Carmine Rimi, Kubernetes Product Manager at Canonical.
What’s new in Kubernetes 1.16
Notable upstream Kubernetes 1.16 features (beta release):
This release includes early access to components that should finalise over the next month. Some of the interesting new feature updates are:
- Support for IPv4/IPv6 dual-stack – IPv4/IPv6 dual-stack support and awareness for Kubernetes pods, nodes, and services. This adds IPv4/IPv6 dual stack functionality to Kubernetes clusters, which includes the following concepts: (1) Awareness of multiple IPv4/IPv6 address assignments per pod; and (2) Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster.
- Improved Pod Overhead Accounting – Pod sandbox runtimes introduce a non-negligible overhead at the pod level which should be accounted for to improve scheduling, resource quota management, and constraining.
- Node Topology Manager – This new component helps allocate resources for a pod based on requested resources. Aligning the available physical resources on a machine can improve performance dramatically. One example is fast virtualised network functions: a user asks for a “fast network” and automatically gets all the various pieces (hugepages, cpusets, network device) co-located on a socket. Another example is accelerated neural network training, where a user asks for an accelerator device and some number of exclusive CPUs and gets the best training performance thanks to socket-alignment of the assigned CPUs and devices.
- New Endpoint API – The goal of this new API is to support tens of thousands of backend endpoints in a single service on a cluster with thousands of nodes. In the current Endpoints API, any change to the number of pods results in a series of events that, at scale, puts undue strain on multiple parts of the system.
- Pod Spreading across Failure Domains – This feature enables the Kubernetes scheduler to spread a group of pods across failure domains. The existing hard inter-pod anti-affinity does not allow more than one pod to exist in a failure domain. The new feature supports more than one pod in a failure domain.
- Multiple Features for Windows – kubeadm support for Windows, support for CSI plugins on Windows, and RunAsUserName for Windows containers.
- Kubernetes Metrics Overhaul – This effort aims to provide consistently named, high-quality metrics in line with the rest of the Prometheus ecosystem, with consistent labeling that allows straightforward joins across metrics.
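To experiment with the dual-stack support described above, the alpha functionality must first be switched on via a feature gate; pod status then exposes multiple addresses. A hedged sketch (exact flag placement depends on how your cluster components are launched, and the output below is purely illustrative):

```shell
# IPv4/IPv6 dual-stack is alpha in 1.16 and sits behind a feature gate,
# which must be set on the relevant components, e.g.:
#   --feature-gates=IPv6DualStack=true
# Once enabled, a pod can carry both an IPv4 and an IPv6 address;
# the new status.podIPs field lists all of them:
kubectl get pod mypod -o jsonpath='{.status.podIPs}'
# Illustrative output: [{"ip":"10.244.1.4"},{"ip":"fd00::4"}]
```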
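The Pod Overhead accounting mentioned above is surfaced through the RuntimeClass API. A minimal sketch, assuming a hypothetical `kata` sandbox handler is installed on the nodes (the handler name and overhead amounts are illustrative, not prescriptive):

```shell
# Declare a RuntimeClass whose pods carry a fixed sandbox overhead.
# The scheduler and resource quota machinery then account for this
# overhead on top of the containers' own requests.
kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata        # hypothetical sandboxed runtime handler
overhead:
  podFixed:
    memory: "120Mi"  # illustrative per-pod sandbox memory overhead
    cpu: "250m"      # illustrative per-pod sandbox CPU overhead
EOF
```

Pods then opt in by setting `runtimeClassName: kata` in their spec.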
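The Node Topology Manager above is configured on the kubelet. A hedged sketch of the relevant flags as they stand in the 1.16 alpha (flag names and policies may change before the feature graduates):

```shell
# Enable the alpha Topology Manager and pick an alignment policy.
# Policies available in 1.16: none, best-effort, restricted,
# and single-numa-node.
kubelet --feature-gates=TopologyManager=true \
        --topology-manager-policy=best-effort \
        ...   # remaining kubelet flags unchanged
```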
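The pod spreading feature above is expressed through a new alpha `topologySpreadConstraints` field on the pod spec, behind the EvenPodsSpread feature gate. A minimal sketch with illustrative names and labels (the topology key shown is the zone label in use as of 1.16):

```shell
# Spread pods labelled app=demo across zones, allowing at most a
# skew of 1 pod between any two zones.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: failure-domain.beta.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
```

Unlike hard inter-pod anti-affinity, `whenUnsatisfiable: ScheduleAnyway` can be used instead of `DoNotSchedule` to make the constraint a soft preference.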
Other Kubernetes 1.16 Changes, by the numbers:
- Security enhancements: Over 9 pull requests, closing 4 CVEs and improving the Kubernetes security posture across escalating privileges, TLS between services, Cgroup and user improvements, and more.
- Monitoring enhancements: Over 11 pull requests, with upgrades to monitoring components and including the addition of the Overhead field to the PodSpec and RuntimeClass types as part of the Pod Overhead accounting mentioned above.
- Public cloud enhancements: Over 17 pull requests, primarily focusing on better networking and storage integration, with a majority of the PRs targeting Azure.
- Kubeadm enhancements: Over 24 pull requests, ranging from bug fixes to new features, including support for IPv6 dual stack mode.
- Scheduler enhancements: Over 25 scheduling-related pull requests, including PRs for the new Pod Overhead features. Enhancements to pod priority and failure zone scheduling are also included.
- Robustness enhancements: Over 11 pull requests that increase general robustness, with several targeting resource leak scenarios.
- Storage enhancements: Over 23 storage related pull requests, the majority are bug fixes, with some upgrades and enhancements.
- Networking enhancements: A handful of changes, with the biggest supporting the IPv4/IPv6 dual-stack work.
- API server enhancements: 9 pull requests, with several targeting improvements to webhooks and to the startup and shutdown experience.
- For more information, please see the upstream Kubernetes 1.16 release notes.
Get In Touch
If you’re interested in Kubernetes support, consulting, or training, please get in touch!