Deploy a Kubernetes Cluster using Kubespray

Kubernetes is the next big thing. If you are here, I assume you already know about Kubernetes; if you don't, now is a good time to start.

Kubernetes, also called K8s, is a system for automating the deployment, scaling, and management of containerized applications; in short, a container orchestration tool.

Google had been running Borg, an in-house container orchestration system, for years. In 2014, it released Kubernetes, an open-source system inspired by Borg, and later donated the project to the Cloud Native Computing Foundation under the Linux Foundation.

So let's get started.

There are multiple ways to set up a Kubernetes cluster. One of them is Kubespray, which uses Ansible. The official GitHub instructions for installing via Kubespray are crisp, but there is a lot of reading between the lines. I spent days getting this right. If you are getting started with Kubernetes, you can follow the steps below.

We will go through the following steps to deploy the cluster.

I have created a cluster with 1 master and 2 nodes.

I used another machine, which I call my base machine, to deploy the whole cluster.

So I need 4 VMs in total (1 base machine and 3 for the Kubernetes cluster).

Since I already have an AWS account, I will be using it to spin up 4 Ubuntu machines. You may choose to use Google Cloud or Microsoft Azure.

Infra Requirements

Create the following infra on AWS.

Base machine: Used to clone the kubespray repo and trigger the ansible playbooks from there.

Base Machine

Type: t2.micro (1 vCPU, 1 GB RAM)

OS: Ubuntu 16.04

Number of Instances: 1

Cluster Machines

Create the 3 instances in one shot so that they end up in the same security group; subsequent changes to the security group will then apply to the whole cluster.

Type: t2.small (1 vCPU, 2 GB RAM)

OS: Ubuntu 16.04

Number of Instances: 3

Using t2.micro for the cluster machines will fail: Kubespray runs a preflight check that aborts the installation if the machines do not have enough memory.

Also, when you create your instances on AWS, create a new .pem file for the cluster. A .pem file is a private key used for authentication. Save this file as K8s.pem on your local machine; you (and Ansible) will use it later to SSH into the cluster machines.

Network Configurations

On the AWS console, in the EC2 section, click on the security group corresponding to any instance of the cluster (since they all belong to the same security group).

Click on Inbound rules

Click on Edit, and under Type select "All Traffic" to allow internal communication within the cluster. (For anything beyond a test setup, restrict the source of this rule to the security group itself rather than the whole internet.)


Tools to be installed on the Base Machine

Install the latest Ansible on Debian-based distributions:

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Check Installation

$ ansible --version

Install Jinja 2.9 (or newer)

Run the commands below to install Jinja 2.9 or upgrade an existing installation to version 2.9.

$ sudo apt-get install python-pip

$ pip2 install jinja2 --upgrade

Install python-netaddr

$ sudo apt-get install python-netaddr

Allow IPv4 forwarding

You can check whether IPv4 forwarding is enabled by running:

$ sudo sysctl net.ipv4.ip_forward

If the value is 0, IPv4 forwarding is disabled. Run the following command to enable it:

$ sudo sysctl -w net.ipv4.ip_forward=1
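Note that `sysctl -w` only changes the running kernel; the setting is lost on reboot. A sketch for persisting it (the drop-in file name `99-ipforward.conf` is my own choice, any `*.conf` name under `/etc/sysctl.d/` works):

```shell
# Persist IPv4 forwarding across reboots via a sysctl drop-in file.
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system

# Verify the live value via /proc (readable without root).
cat /proc/sys/net/ipv4/ip_forward
```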

Check Firewall Status

$ sudo ufw status

If the status is active, disable it with:

$ sudo ufw disable

Set configuration parameters for the Kube Cluster

Clone the kubespray github repository

Clone the repo onto this base machine

$ git clone https://github.com/kubernetes-sigs/kubespray.git

Copy the key file into the Base machine

Navigate into the kubespray folder

$ cd kubespray

Now you can either copy the .pem file you used to create the cluster on AWS into this directory from your local machine, or just copy its contents into a new file on the base machine.

Navigate to the location where you downloaded the .pem file from AWS when you created the cluster. I downloaded it to my local machine (which is different from both the base machine and the cluster machines).

View the contents of K8s.pem file on your local machine using the command line.

$ cat K8s.pem

Copy the contents of the file

Connect / ssh onto the Base machine

On Base Machine

$ vim K8s.pem

This creates and opens a new file named K8s.pem. Paste the contents here.

To save, hit the Esc key and then type :wq

Change permissions of this file.

$ chmod 600 K8s.pem
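It is worth confirming the permissions took effect before handing the key to Ansible, since ssh rejects private keys with loose permissions. A quick sketch (the `touch` line is only a placeholder so the snippet runs standalone; your K8s.pem already exists):

```shell
key=K8s.pem
[ -f "$key" ] || touch "$key"   # placeholder for a standalone run; skip if the key exists
chmod 600 "$key"
stat -c 'mode: %a' "$key"       # expect: mode: 600
```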

Modify the inventory file as per your cluster

Copy the sample inventory and create your own duplicate for your cluster:
$ cp -rfp inventory/sample inventory/mycluster

Since I am creating a cluster with 1 master and 2 nodes, I have updated the inventory file accordingly. Kubespray ships an inventory builder that generates the file for you. Run the following two commands to update the inventory file.

Replace the sample IPs below with the private IPs of the newly created instances before running the commands.

$ declare -a IPS=(10.0.0.11 10.0.0.12 10.0.0.13)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Now edit/verify the hosts.ini file to ensure there is one master and two nodes, as shown below. Keep only node1 under the [kube-master] group, and node2 and node3 under the [kube-node] group.

hosts.ini file (the IPs were elided in the original; <private-ip-N> stands for each instance's private IP):

node1 ansible_host=<private-ip-1> ip=<private-ip-1>
node2 ansible_host=<private-ip-2> ip=<private-ip-2>
node3 ansible_host=<private-ip-3> ip=<private-ip-3>

[kube-master]
node1

[etcd]
node1

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
The above is how the file finally looks.

Verify other kube cluster configuration parameters

Review and change parameters under ``inventory/mycluster/group_vars``

$ vim inventory/mycluster/group_vars/all.yml

Change the value of the variable 'bootstrap_os' from 'none' to 'ubuntu' in the file all.yml.

Save and exit the file.
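If you prefer not to open an editor, the same change can be made non-interactively. A sketch using sed (the path assumes the inventory copy created above):

```shell
# Flip bootstrap_os from "none" to "ubuntu" in place, then show the result.
sed -i 's/^bootstrap_os: none/bootstrap_os: ubuntu/' inventory/mycluster/group_vars/all.yml
grep '^bootstrap_os' inventory/mycluster/group_vars/all.yml
```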

Make any necessary changes in the k8s-cluster.yml file.

$ vim inventory/mycluster/group_vars/k8s-cluster.yml

Save and exit the file
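For reference, this is the kind of parameter k8s-cluster.yml exposes. An illustrative excerpt, not a full file; check your checkout for the exact names and defaults, and treat the values below as placeholders:

```yaml
# Illustrative excerpt of inventory/mycluster/group_vars/k8s-cluster.yml
kube_version: v1.12.0              # placeholder; use a version your Kubespray checkout supports
kube_network_plugin: calico        # pod network plugin (calico, flannel, weave, ...)
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
cluster_name: cluster.local
```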

Deploy Kubespray with Ansible Playbook

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --private-key=K8s.pem --flush-cache -s

Check your Deployment

Now SSH into the Master Node and check your installation

Command to list the cluster nodes (nodes are cluster-scoped, so no namespace flag is needed):

$ kubectl get nodes

Command to fetch services in the namespace ‘kube-system’

$ kubectl -n kube-system get services

Woohoo!!! We are done!!!

You now have your Kubernetes cluster up and running.

DevOps Professional | Foodie | Travel | Avid Reader | Auto Enthusiast | Sarcasm is ingrained in me! Blog: