Managing Docker Containers with OpenShift and Kubernetes



Note: This is part one in an OpenShift series. See part two for a hands-on technical walkthrough.

For the last few years, Docker containers have been all the rage in the DevOps world. After all, what’s not to like? They let you strip away most of the overhead of a full VM and deploy just your code and its dependencies. If you’re new to Docker, check out Zach Gardner’s great introduction.

Containers can save resources, speed deployment, scale well and offer more fault tolerance. But how do you manage them?

In my experience, the Docker Machine and Docker Swarm stack hasn’t lived up to my expectations. It has a limited API, no built-in support for monitoring and logging, and requires much more manual scaling. AWS’s EC2 Container Service scales well, but it locks you into Amazon.

In my opinion, the best current stack for Docker containers pairs Kubernetes with OpenShift. In this post I’ll give a brief introduction to both, with an eye toward what they do well.

Introduction To The Stack

First, let’s get the definitions out of the way.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

At its core, Kubernetes handles the orchestration of containers, bundling orchestration, service discovery, and load balancing into one neat package.

You can use Kubernetes directly through its command-line interface, kubectl. With it, you can see when applications were deployed, what their current status is, where they’re running, and how they’re configured.
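For example, a few common kubectl commands give you that view of the cluster (the application and pod names below are placeholders):

    kubectl get deployments              # what is deployed and how many replicas are ready
    kubectl describe deployment my-api   # rollout history, current status, and configuration
    kubectl get pods -o wide             # which node each pod is running on
    kubectl logs my-api-1234-abcde       # logs from a single pod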


Or, better yet, you can use OpenShift. Red Hat® OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. It provides container-based software deployment and management for apps on-premises, in a public cloud, or hosted. OpenShift adds operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance.

OpenShift is essentially a user interface on top of Kubernetes that adds a lot of other nice features. It is very easy to set up and integrate with Kubernetes and the underlying containers.

How Kubernetes And OpenShift Work

The atomic unit in Kubernetes is the Pod. A Pod can contain one or many containers, plus any resources shared by those containers. Typically a Pod holds just one container, but sometimes it makes sense to run more when they are tightly coupled. Note that in a true microservices architecture, a Pod shouldn’t hold more than one.
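As a rough sketch, a minimal Pod definition looks like this (the name, label, and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-api
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: example/my-api:1.0    # placeholder image
          ports:
            - containerPort: 8080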

Pods are deployed to Nodes. A Node represents a machine, either physical or virtual, depending on the cluster, and a single Node can run multiple Pods.

What They Do Well

Kubernetes automatically load-balances traffic across pods using a round-robin approach, though it can be configured to use an external load balancer if you prefer.
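Under the hood this is usually done with a Service, which spreads traffic across every pod matching its selector. A minimal sketch, assuming the pods carry an app: my-api label:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-api
    spec:
      selector:
        app: my-api          # traffic is balanced across all pods with this label
      ports:
        - port: 80
          targetPort: 8080
      # type: LoadBalancer   # switch to this to hand traffic to an external load balancer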

Scaling is also quite simple: pods are easily replicated or removed across the cluster, and scaling down works the same way. Both can be done through the OpenShift UI or from the command line.
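From the command line, scaling up or down is a one-liner (the deployment names here are placeholders):

    # Plain Kubernetes
    kubectl scale deployment my-api --replicas=5
    # OpenShift CLI against a deployment config
    oc scale dc/my-api --replicas=5
    # Scaling down is the same command with a smaller number
    oc scale dc/my-api --replicas=2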

They are also very good for fault tolerance. If a pod crashes or becomes unresponsive, it is automatically replaced. This matters a great deal for the high-availability expectations of the web.


Pods aren’t limited to application code; they can also hold storage. For instance, you could run a MySQL pod. This gives uniformity to all pieces of your app, both code and storage.
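As a sketch, a MySQL pod might mount persistent storage like this (the image tag and claim name are assumptions, and in a real deployment the password would come from a Secret):

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme              # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data          # assumes this claim already exists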

Build Processes

OpenShift works well with build processes. You can add a build that points to a snapshot in a code repository. If you want to have a more extensive Continuous Integration build, OpenShift integrates with Jenkins pipelines.
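A minimal sketch with the OpenShift CLI (the repository URL and names are placeholders):

    # Create an app and build config from a code repository
    oc new-app https://github.com/example/my-api.git --name=my-api
    # Kick off a new build from the latest commit
    oc start-build my-api
    # Stand up Jenkins for pipeline builds (assumes the default Jenkins template is available)
    oc new-app jenkins-ephemeral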

And when it’s time to update, your code rolls out through rolling deployments: one by one, pods are replaced, with no downtime. That’s the default; rollouts can also be configured to replace several pods at a time, or a percentage of them.
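In an OpenShift DeploymentConfig this behavior is controlled by the rolling strategy; a fragment like the following (the values are illustrative) tunes how many pods are swapped at a time:

    strategy:
      type: Rolling
      rollingParams:
        maxUnavailable: 25%   # how many pods may be down during the rollout
        maxSurge: 1           # how many extra pods may be created above the desired count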

Security

The other thing OpenShift offers is security. Without it, access to one pod can mean access to all of them. OpenShift provides both authentication and authorization.
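In practice that means authenticating to the cluster and granting roles per project; for example (the server address, user, and project names are placeholders):

    # Authenticate against the cluster
    oc login https://openshift.example.com:8443
    # Give a user read-only access to one project
    oc policy add-role-to-user view alice -n my-project
    # Give another user edit rights only where needed
    oc policy add-role-to-user edit bob -n my-project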

Stateless, RESTful containers are the preferred approach; however, OpenShift can manage stateful and legacy apps as well.

Overall, Kubernetes and OpenShift have become my preferred stack for managing containers throughout the development lifecycle. Definitely give this stack a look if you’re considering a tool to help you manage your Docker containers.

Series

This is part one in an OpenShift series. See part two for a hands-on technical walkthrough.

  1. This post: Introduction to Managing Docker Containers with OpenShift and Kubernetes
  2. OpenShift Quick Start – Installing OpenShift locally & adding a Container with an API service to a Pod
  3. Scaling Pods and Managing Cluster with the Command Line Interface
  4. Continuous Build and Deploy with Jenkins 2 Pipelines
  5. Using a STI (Source to Image) Utility to Create and Deploy Spring Boot Java Image