Learn Kubernetes Using Minikube & Docker on macOS

I recently learned how to manage a Kubernetes cluster using a neat tool called Minikube, which runs a single-node cluster inside a VM on your local workstation.

Minikube is a great local development environment and a way to learn the most common commands, building up the “muscle memory” that helps your GyShiDo.

This post combines a simple Hello World guide with additional context for anyone brand new to Kubernetes. 

Intro

Kubernetes is a popular open source system for automating the deployment, scaling and management of containerized applications by grouping them into logical units for ease of operation.

A Kubernetes cluster consists of two types of resources: 

  • The Master coordinates the cluster
  • Nodes are workers that run applications

Kubernetes is based on Google’s internal cluster manager, Borg, which Google used to run its large-scale production workloads.

Today, the open source project is supported by a collection of technology partners who also use it as part of their 24/7 infrastructure. The project also features a robust community, including a GitHub project, a Slack channel, and a Google developer group, all of which you should definitely check out.

Installation

Installing Minikube on macOS (Sierra) is easy. A few simple commands and your cluster is installed and running.

For help with Linux and Windows OS check out the Minikube installation guide (always RTFM).

Pre-requisites

To use Minikube, we need a hypervisor and a container solution, as well as the Kubernetes command-line tool, kubectl.

I’ll be using my go-to choices of VirtualBox and Docker but there are other options if you prefer.

We can install everything we need using a single compound brew command:

$ brew update && brew install kubectl && brew cask install docker minikube virtualbox

Now verify your installations were successful by checking what versions you are running:

docker --version           # docker version X.X.X, build abc123
docker-compose --version   # docker-compose version X.X.X, build abc123 
docker-machine --version   # docker-machine version X.X.X, build abc123 
minikube version           # minikube version vX.X.X 
kubectl version --client   # Client Version: version.Info(...) 

Minikube

Let’s dive in by starting the minikube cluster:

$ minikube start

Starting local Kubernetes v1.6.4 cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubernetes is available at https://192.168.99.100:8443.
Kubectl is now configured to use the cluster.

Our local Kubernetes cluster is now up and running. Notice that kubectl has already been configured to use it.

Now let’s confirm our minikube status:

$ minikube status 
minikube: Running 
localkube: Running

Kubernetes also has a simple GUI. Let’s view the dashboard for your Minikube cluster:

$ minikube dashboard
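This should open the dashboard in your default browser. Depending on your Minikube version, you may also be able to print just the URL instead of launching a browser (handy if you’re working over SSH); the port it reports will vary:

$ minikube dashboard --url
http://192.168.99.100:30000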

Kubernetes

The Kubernetes Basics doc comes with helpful diagrams and an interactive tutorial ‘shell’ that’s great for introducing you to the most commonly used commands. 

I strongly recommend you invest 20-30 minutes completing the interactive tutorials.

Kubectl

The Kubernetes command-line tool, kubectl, is used to deploy and manage applications on Kubernetes. 

When using kubectl, every command is executed within a context that determines which cluster your commands will operate on.
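To see every context kubectl knows about, and which one is currently active (marked with an asterisk), you can list them; on this setup the output should look something like this:

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube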

Next, let’s confirm where the cluster’s master is running:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443

The address 192.168.99.100 is the local Minikube cluster’s external IP, assigned by DHCP.
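You can print that IP at any time with Minikube itself:

$ minikube ip
192.168.99.100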

Now let’s do some basic inspection of the cluster by confirming how many nodes we have running (hint: minikube runs a single node using a VM):

$ kubectl get nodes
NAME     STATUS AGE VERSION
minikube Ready  1m  v1.6.4

Note: If you just started your cluster, your STATUS might be NotReady. Check again in a minute or so and it should update to Ready.

Now let’s determine what pods might be running:

$ kubectl get pods --all-namespaces
NAMESPACE   NAME                        READY STATUS  RESTARTS AGE
kube-system kube-addon-manager-minikube 1/1   Running 1        5m
kube-system kube-dns-196007617-bhrsr    3/3   Running 1        5m
kube-system kubernetes-dashboard-r0rtq  1/1   Running 1        5m
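Those kube-system pods are the cluster’s own components: the add-on manager, DNS, and the dashboard we opened earlier. If you want more detail on any of them, describe the pod by name and namespace, for example:

$ kubectl describe pod kube-dns-196007617-bhrsr -n kube-system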

kubeconfig

Right now the only cluster should be ‘minikube’, but let’s make sure we’re using it by setting our context:

$ kubectl config use-context minikube
Switched to context "minikube".

And, if you want to confirm your current-context:

$ kubectl config current-context
minikube

IMPORTANT: always ensure your current-context is correct before running kubectl commands.

Now that we set and confirmed our context, let’s view the entire kubeconfig:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jmac/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/jmac/.minikube/apiserver.crt
    client-key: /Users/jmac/.minikube/apiserver.key
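This is the same information kubectl stores in its default config file, so an equivalent way to view it is:

$ cat ~/.kube/config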

Deploying Hello World

The Minikube project on GitHub offers a quick-start demo we can deploy that uses a pre-built Docker application called hello-minikube (available from Google’s public container registry), the Kubernetes equivalent of “Hello World!”.

So let’s deploy and run it!

$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment "hello-minikube" created

Success! Now let’s confirm our application deployment:

$ kubectl get deployments
NAME           DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-minikube 1       1       1          1         18s
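If the deployment hasn’t finished coming up yet, you can wait on the rollout; the command blocks until the pods are ready and prints something like this:

$ kubectl rollout status deployment/hello-minikube
deployment "hello-minikube" successfully rolled out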

What about pods?

$ kubectl get pods
NAME                           READY STATUS  RESTARTS AGE
hello-minikube-938614450-zfrhp 1/1   Running 0        55s

Great, now we need to expose our app on an externally accessible IP (external to the Kubernetes cluster, not the internet):

$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed

The type NodePort is used because minikube doesn’t support the type LoadBalancer.

Let’s see what services are running:

$ kubectl get services
NAME           CLUSTER-IP EXTERNAL-IP PORT(S)        AGE
hello-minikube 10.0.0.189 <nodes>     8080:31406/TCP 1m
kubernetes     10.0.0.1   <none>      443/TCP        1h
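The PORT(S) column shows that container port 8080 has been mapped to node port 31406 on the Minikube VM. To inspect the service in more detail, including its type, node port and endpoints, describe it:

$ kubectl describe service hello-minikube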

To obtain the URL for your exposed service, you can use minikube commands:

$ minikube service hello-minikube --url
http://192.168.99.100:31406

You can also curl the service from the CLI by wrapping the same command to show the response:

$ curl $(minikube service hello-minikube --url)
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.99.100:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=192.168.99.100:31406
user-agent=curl/7.51.0
BODY:
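If the response ever looks wrong, the pod’s logs are the first place to check; use the pod name from the earlier kubectl get pods output:

$ kubectl logs hello-minikube-938614450-zfrhp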

Nice! Time to put away the toys – let’s delete the service and the deployment:

$ kubectl delete deployment,service hello-minikube
deployment "hello-minikube" deleted
service "hello-minikube" deleted

…and confirm the deployment and service are removed:

$ kubectl get deployments
No resources found.

$ kubectl get pods
No resources found.

Deploy a Web Service using Docker

Now that we know how to deploy a pre-built app, let’s create our own and deploy it.

$ mkdir hello-nodejs && cd hello-nodejs && touch Dockerfile server.js

Create a Web Service

Create a basic HTTP server using Node.js that always returns an HTTP 200 and a “Hello World!” response:

$ vim server.js

// Minimal HTTP server: log each request and reply with "Hello World!"
var http = require('http');

var handleRequest = function(request, response){
    console.log("rx request for url:" + request.url);
    response.writeHead(200);
    response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(8080);
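Before containerizing it, you can sanity-check the server locally (assuming Node.js is installed on your Mac): run it in the background and curl it:

$ node server.js &
$ curl http://localhost:8080
Hello World!

Kill the background node process when you’re done.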

Create a Container using Docker

Now edit the Dockerfile to define which version of Node you need and how to start the server:

FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js

Before building, point your shell’s Docker environment variables at Minikube’s Docker daemon, so the image is built inside the VM where the cluster can find it:

$ eval $(minikube docker-env)
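To see exactly what that eval sets, you can run the command without eval; on my setup it exports variables along these lines (your IP and paths will differ):

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/jmac/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)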

Simple enough? OK, let’s build the container!

$ docker build -t hello-nodejs:v1 .

Sending build context to Docker daemon 3.072 kB
Step 1 : FROM node:6.9.2
---> faaadb4aaf9b
Step 2 : EXPOSE 8080
---> Using cache
---> e78d6f95b487
Step 3 : COPY server.js .
---> Using cache
---> 30a49bb02305
Step 4 : CMD node server.js
---> Using cache
---> eb22cf1abcf6
Successfully built eb22cf1abcf6
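Because we built against Minikube’s Docker daemon, the image now lives inside the VM, where Kubernetes can use it without pulling from a registry. You can confirm it’s there; you should see the hello-nodejs repository with the v1 tag and the image ID from the build output above:

$ docker images hello-nodejs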

Deploy the App

Boom! Ship it!

$ kubectl run hello-nodejs --image=hello-nodejs:v1 --port=8080
deployment "hello-nodejs" created

Let’s confirm by checking the deployment and pods:

$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-nodejs   1         1         1            1           1m
$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
hello-nodejs-2686040790-0t8q4   1/1       Running   0          1m

Now, let’s expose the new app:

$ kubectl expose deployment hello-nodejs --type=NodePort
service "hello-nodejs" exposed

And confirm services…

$ kubectl get services
NAME         CLUSTER-IP  EXTERNAL-IP  PORT(S)         AGE
hello-nodejs 10.0.0.13   <nodes>      8080:32272/TCP  2m
kubernetes   10.0.0.1    <none>       443/TCP         1d

And finally let’s confirm our nodejs service is functioning…

$ curl $(minikube service hello-nodejs --url)
Hello World!

Huzzah!  We did it! We wrote our own simple web service, containerized it and deployed it to a Kubernetes cluster!

Delete the App

Now that we know how to deploy an app, let’s decommission the app by deleting the deployment and the service:

$ kubectl delete deployment,service hello-nodejs
deployment "hello-nodejs" deleted
service "hello-nodejs" deleted

And let’s confirm by checking deployments and services:

$ kubectl get deployments,services
NAME            CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
svc/kubernetes  10.0.0.1    <none>       443/TCP  1d

Shutdown Minikube

And now that we’ve accomplished all that, let’s shut down minikube:

$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.

Additional Info

This section dives a little deeper into a few concepts I skimmed over in the guide above.

Why Containers?

The “Old Way” to deploy applications was to install them onto a host using the operating system’s package manager. This has the disadvantage of entangling the applications’ libraries with each other and with the host OS, producing dependency combinations that cannot be reliably supported.

The “New Way” is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. Containers are isolated from each other and from the host: they have their own filesystems, they cannot see each other’s processes, and their resource usage is bounded. Because they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Kubectl Techniques

The kubectl command-line tool supports several different approaches to create and manage Kubernetes objects. There are three techniques commonly employed: Imperative commands, Imperative object configuration and Declarative object configuration.

When using imperative commands, a user operates directly on live objects in the cluster, passing the operations to the kubectl command as arguments or flags.

Examples:

$ kubectl run nginx --image nginx
$ kubectl create deployment nginx --image nginx

When using imperative object configuration, the kubectl command specifies the operation, optional flags, and at least one file name. The file specified must contain a full definition of the object in YAML or JSON format.

Examples:

$ kubectl create -f nginx.yaml
$ kubectl delete -f nginx.yaml -f redis.yaml
$ kubectl replace -f nginx.yaml
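For reference, the nginx.yaml used above would contain a complete object definition. A minimal sketch of such a Deployment might look like this (the exact apiVersion depends on your cluster version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80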

When using declarative object configuration, a user operates on object configuration files stored locally; however, the user does not define the operations to be taken on the files. Create, update, and delete operations are detected automatically per object by kubectl.

Examples:

$ kubectl apply -f configs/
$ kubectl apply -R -f configs/

Warning: A Kubernetes object should be managed only using one technique. Mixing and matching techniques for the same object results in undefined behavior.

Free Online Training

Google is offering a free online course with Udacity called Scalable Microservices with Kubernetes that I highly recommend. 

It’s designed to teach you about managing application containers using Kubernetes and features sections with Kelsey Hightower (author of Kubernetes the Hard Way) and Adrian Cockcroft, poster-child for Netflix’s all-in cloud strategy.

The skill level is intermediate and the timeline is approximately 1 month but it is easy to follow and heck, it’s free!

Final Thoughts

I am still learning. I’m attempting to set up Halyard, a tool for configuring Spinnaker, which is a continuous delivery platform that orchestrates deployments to targets like Kubernetes. All of these tools create abstraction layers upon abstraction layers, which can make it very difficult to understand how a tech stack works.

And I am no expert. I basically read the manual, attempted to follow the instructions, failed, re-read the docs or did Google searches, and tried again.

This is how I learn. It’s sometimes a painful process, filled with distractions, setbacks and frustration. But if you can push past that initial frustration when using open source software, you can learn some extremely useful skills for your next project or gig, skills that could make the difference between failure and success.

So, keep at it!

References

All original content and image references are linked in this post. I encourage you to click those links and follow them to learn more about everything covered in this post. 
