Why Bose Uses Kubernetes for Its Microservices Architecture
Kubernetes (commonly abbreviated K8s) is an open-source container orchestration system for automating application deployment, scaling, and management.
It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a “Dockershim”; however, the shim has since been deprecated in favor of directly interfacing with containerd or another CRI-compliant runtime.
Many cloud services offer a Kubernetes-based platform or infrastructure as a service on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.
The Rise of Kubernetes
When containers were first introduced in 2008, virtual machines (VMs) were the state-of-the-art option for optimizing a data center’s physical resources. This arrangement worked well enough but had some flaws: virtual machines consumed too many resources because they required both a complete operating system and emulated instructions to reach the physical CPU. Even with technologies like Intel VT-x and AMD-V that reduced the emulation overhead, virtual machines still lagged behind bare metal.
The goal of containers is to maximize resource utilization and reach near bare-metal performance while keeping the advantages of VMs. To do this, all applications share a common kernel and request operating-system resources as necessary. Each container gets isolated access to its share of resources (like CPU, memory, disk, and network), and each container can be prioritized by a manager. In other words, containers can run on bare metal while sharing resources, but without being able to access other containers’ resources.
Orchestrators as a Multicloud Operating System
Container orchestrators manage groups of machines that can be in the same data center or in different physical locations with different providers. In fact, an orchestration tool such as Kubernetes creates an abstraction layer, so an organization doesn’t need to know where or how its computing resources are distributed. An orchestrator turns the capacity of cloud providers into a commodity that can easily be substituted; better yet, organizations can distribute workloads among different providers.
Companies that have an on-premise data center can benefit from container orchestrators as well. If the orchestrator is deployed within a local infrastructure, the nearly unlimited computing resources of the cloud won’t be available, but companies can manage resources in the same way, which allows for an easy migration later if necessary.
Another use case occurs when an operator wants to use the unlimited computing capacity of the cloud in addition to internal resources. A hybrid cloud is designed to utilize a company’s existing hardware while increasing computing capacity without buying new hardware.
Container orchestration tools like Kubernetes have the capacity to function as a multicloud operating system. Companies do not need to know where their workloads run or who provides the underlying machines; they just need to declare what should be running.
Deploying Kubernetes
Kubernetes offers a new way to deploy applications using containers. It creates an abstraction layer that can be manipulated with declarative rather than imperative programming, which makes it much simpler to deploy and upgrade services over time. The manifest below shows the deployment of a replication controller, which controls the creation of pods, the smallest deployable unit in Kubernetes. The file is almost self-explanatory: the image gcr.io/google_containers/elasticsearch:v5.5.1-1 indicates that an Elasticsearch container will be deployed, with two replicas and persistent storage for its data.
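A minimal sketch of what such a ReplicationController manifest might look like (the object names, mount path, and the pre-existing PersistentVolumeClaim are illustrative assumptions):

```yaml
# Sketch of the ReplicationController described above; metadata names,
# the mount path, and the PersistentVolumeClaim are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch
spec:
  replicas: 2                   # two replicas, as described above
  selector:
    app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: gcr.io/google_containers/elasticsearch:v5.5.1-1
        volumeMounts:
        - name: es-data
          mountPath: /data      # persistent storage for index data
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-data    # assumes a PVC created beforehand
```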
K8s can be deployed in very different scenarios depending on the size of the company and its objectives:
- In-house: Organizations can transform their own data center into a K8s cluster. In this case, companies can take full advantage of their own resources.
- Cloud: The setup process is similar to an in-house deployment, but includes virtual machines on the cloud. This allows for the creation of a virtually infinite number of machines, depending on demand.
- Hybrid: An organization’s data center might perform well for most of the day, but sometimes a peak occurs that local computing resources cannot handle. In this case, a hybrid solution works well. When necessary, K8s will create virtual machines on the cloud to better distribute computing resources when on-premise servers are full.
- Managed: Some cloud providers offer their own K8s implementation as a managed service. In this case, there is no need to deploy and configure Kubernetes itself; an organization just needs to use the service. Since deploying Kubernetes can be tricky, this is a good solution for companies that do not have a big IT team capable of handling cluster configuration and maintenance.
- Multicloud: This is the next level of a hybrid cloud solution. Computing resources are deployed among two or more cloud vendors. In this case, companies need to avoid vendor lock-in and minimize risk if something goes wrong.
Kubernetes is not the only container orchestrator available. Other popular tools on the market include Docker Swarm and Apache Mesos. Swarm is an open-source container orchestrator intended to be the “big brother” of Docker and Docker Compose. Swarm uses the same command line as Docker and is not very opinionated: organizations must decide which tools to use for nearly every feature their cluster needs. Apache Mesos is another open-source orchestrator that manages other technologies in addition to containers. Apache Mesos calls itself a “data center operating system,” which is also the name of its commercial product, Mesosphere’s Data Center Operating System (DC/OS). Apache Mesos is much less opinionated than K8s, allowing for the deployment of various types of applications besides containerized ones.
Kubernetes Use Case: Bose
Challenge
A household name in high-quality audio equipment, Bose has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. “We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: “To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O’Mahony.
Solution
From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt Kubernetes for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including Fluentd, CoreDNS, Jaeger, and OpenTracing.
Impact
With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. “We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says O’Mahony.
“At Bose we’re building an IoT platform that has enabled our physical products. If it weren’t for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.”
— JOSH WEST, LEAD CLOUD ENGINEER, BOSE
The team spent time working on choosing tooling to make the experience easier for developers. “Our developers interact with tools provided by our Ops team, and the Ops team run all of their tooling on top of Kubernetes,” says O’Mahony. “We try not to make direct Kubernetes access the only way. In fact, ideally, our developers wouldn’t even need to know that they’re running on Kubernetes.”
The platform made its way into production in 2017, even before the products it was designed for had launched. “Even though the speakers and the products that we were designing this platform for were still quite a ways away from being launched, we did have some connected speakers on the market,” says O’Mahony. “We basically started to point certain features of those speakers and the apps that go with those speakers to this platform.”
Today, just one of Bose’s production clusters holds 1,800 namespaces/discrete services and 340 nodes. With about 100 engineers now onboarded, the platform infrastructure is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. It’s a staggering improvement over some of Bose’s previous deployment processes, which supported far fewer deployments and services.
Other Ways Major Companies Use Kubernetes
Serverless, with Server
Serverless architecture has taken the world by storm since AWS launched Lambda. The principle is simple: just develop the code, and don’t worry about anything else. Servers and scalability are handled by the cloud provider, and code just has to be written as functions that handle specific events, from HTTP requests to queue messages.
Vendor lock-in is the major disadvantage of this solution. It is almost impossible to change cloud providers without refactoring most of the code. There are solutions like the Serverless Framework that seek to standardize function code across clouds. Another option is to use a Kubernetes cluster to create a vendor-free serverless platform. As mentioned above, K8s abstracts away the differences between cloud servers. Currently, two popular frameworks virtualize the cluster as a serverless platform: Kubeless and Fission.
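As a rough illustration, a Kubeless-style function is just another Kubernetes resource; the function name, runtime version, and handler below are illustrative assumptions, and the exact fields depend on the Kubeless release in use:

```yaml
# A minimal Kubeless Function object (assumes Kubeless is installed in the
# cluster); name, runtime, and handler are illustrative.
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
spec:
  runtime: python3.6        # language runtime provided by Kubeless
  handler: handler.hello    # <module>.<function> to invoke
  function: |
    def hello(event, context):
        # 'event' carries the trigger payload (HTTP request, queue message, ...)
        return "Hello from a vendor-free serverless platform"
```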
Optimized Resource Usage with Namespaces
A K8s namespace is also known as a virtual cluster. Namespaces create virtually separated clusters inside the real cluster. Without namespaces, an organization would likely run separate test, staging, and production clusters. These separate clusters tend to waste resources, because a test cluster is not in continuous use and a staging cluster is only used from time to time to validate new work. By using a virtual cluster, or namespace, an operations team can use the same set of physical machines for different environments, depending on a given workload.
Namespaces are closely related to DNS, because services located within the same namespace are accessible through their names. Namespaces offer a good solution for creating similar environments that locate services through network names: instances in a given namespace will find their dependencies without having to take into account which namespace they are located in.
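For example, a Service like the sketch below (names are illustrative) can be applied unchanged to a test, staging, or production namespace. Pods in the same namespace reach it simply as api, while pods elsewhere in the cluster can use the fully qualified name api.staging.svc.cluster.local:

```yaml
# A Service named "api" deployed into the "staging" namespace; the same
# manifest works in any other namespace without modification.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: staging
spec:
  selector:
    app: api            # routes traffic to pods labeled app=api
  ports:
  - port: 80            # port exposed inside the cluster
    targetPort: 8080    # port the application listens on
```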
In addition, namespaces can have resource quotas: each virtual cluster can receive a defined allocation in order to avoid resource competition between namespaces. This is particularly useful for preventing a production environment from having to share computing resources with lower-priority environments. Finally, different permissions can be granted through roles for each namespace in order to limit the number of individuals with access to production environments.
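A minimal sketch of a namespace with a quota attached (the namespace name and the specific limits are illustrative assumptions):

```yaml
# Creates a "staging" namespace and caps the total resources its pods can
# request; the numbers here are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods may request
    requests.memory: 8Gi     # total memory all pods may request
    limits.cpu: "8"          # total CPU limit across the namespace
    limits.memory: 16Gi      # total memory limit across the namespace
```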
Hybrid and Multiclouds
A hybrid cloud utilizes computing resources from a local, conventional data center and from a cloud provider. It is normally used when a company has some servers in an on-premise data center and wants to use the cloud’s unlimited computing resources to expand or substitute company resources. A multicloud, on the other hand, uses multiple cloud providers to handle computing resources. Multiclouds are generally used to avoid vendor lock-in and to reduce the risk of a cloud provider going down during mission-critical operations.
Both solutions are addressed by Kubernetes Federation. Multiple clusters, one for each cloud or on-premise data center, are created and managed by the Federation. The Federation synchronizes computing resources and even allows cross-cluster discovery: virtually any pod can communicate with a pod in another cluster without knowing the infrastructure.
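As a rough sketch, a KubeFed-style federated resource wraps an ordinary template together with a placement list naming the member clusters it should run on. The API group/version, cluster names, and image below are illustrative assumptions that depend on the Federation release in use:

```yaml
# Hypothetical federated Deployment placed on two member clusters; the API
# version, cluster names, and image are illustrative.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  template:                      # an ordinary Deployment spec
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx:1.17
  placement:
    clusters:
    - name: cluster-aws          # cluster running at a cloud provider
    - name: cluster-onprem       # cluster in the local data center
```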
The Federation setup is not simple, and there is a caveat: for obvious reasons, the solution doesn’t work on managed services like Google Kubernetes Engine, Azure Container Service, or AWS EKS.