Multi-Cloud Setup of Kubernetes

Deepak Sharma
9 min read · Jun 9, 2021

What is a Multi-Node cluster in Kubernetes?

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. At a minimum, a cluster contains a control plane and one or more compute machines, or nodes. The nodes actually run the applications and workloads.

A multi-node cluster in Kubernetes is a setup with multiple nodes, one of which acts as the master node while the rest are worker nodes.

What is Kubernetes?

Kubernetes, also known as K8s, is an open-source container-orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes provides an open-source API that controls how and where those containers run.

What is Docker?

Docker is a container management service. Its motto is “develop, ship, and run anywhere”: the whole idea of Docker is for developers to easily develop applications, ship them in containers, and then deploy them anywhere.
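For instance, the same image runs identically on a laptop or a cloud VM. A minimal illustration (assuming Docker is installed and its daemon is running; the container name is arbitrary):

# run the official Apache httpd image, publishing container port 80 on host port 8080
$ docker run -d -p 8080:80 --name web httpd

# the default page is now served locally
$ curl http://localhost:8080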

AWS (Amazon Web Services)

AWS is one of the biggest cloud providers and supports a wide range of technologies. It provides services for building, testing, monitoring, deploying, and running an entire business on the cloud. It also supports technologies like augmented reality, virtual reality, quantum computing, robotics, etc.

One of these services is EC2 (Elastic Compute Cloud), which provides virtual machines with support for many major operating systems and resources such as RAM, CPU, networking, etc.

We will configure the Kubernetes cluster over EC2 instances.

GCP(Google Cloud Platform)

Google Cloud consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google’s data centers around the globe. Each data center location is in a region. Regions are available in Asia, Australia, Europe, North America, and South America. Each region is a collection of zones, which are isolated from each other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region (for example, us-central1-a).

This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating resources closer to clients. This distribution also introduces some rules about how resources can be used together.

Microsoft Azure

The Azure cloud platform comprises more than 200 products and cloud services designed to help you bring new solutions to life: to solve today’s challenges and create the future. You can build, run, and manage applications across multiple clouds, on-premises, and at the edge, with the tools and frameworks of your choice.

Ansible

Ansible is an open-source automation platform. It is a simple automation language that can perfectly describe an IT application infrastructure in Ansible Playbooks. It is also an automation engine that runs Ansible Playbooks.

Ansible can manage powerful automation tasks and can adapt to different workflows and environments. At the same time, new users can become productive with it very quickly.

First, let us launch an EC2 instance; we will configure it as our master node.

Below are the resources we need to create on AWS using Ansible:

  1. Create a VPC (Virtual Private Cloud).
  2. Create subnets in that VPC.
  3. Create an internet gateway.
  4. Create a routing table.
  5. Create a security group.
  6. Launch EC2 instances in that subnet of the respective VPC.

VPC Architecture

Creating VPC

- name: VPC for EC2
  ec2_vpc_net:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    name: "{{ vpc_name }}"
    cidr_block: "{{ vpcCidrBlock }}"
    region: "{{ region }}"
    # enable DNS support
    dns_support: yes
    # enable DNS hostnames
    dns_hostnames: yes
    tenancy: default
    state: "{{ state }}"
  register: ec2_vpc_net_result

Creating subnets in the VPC

- name: Subnet for VPC
  ec2_vpc_subnet:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    az: "{{ zone }}"  # az is the availability zone
    state: "{{ state }}"
    cidr: "{{ subNetCidrBlock }}"
    # enable public IPs for instances in this subnet
    map_public: yes
    resource_tags:
      Name: "{{ subnet_name }}"
  register: subnet_result

Creating internet gateway

# create an internet gateway for the VPC
- name: Create EC2 VPC internet gateway
  ec2_vpc_igw:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    tags:
      Name: "{{ igw_name }}"
  register: igw_result

Creating routing table

- name: Creating routing table for EC2 VPC public subnet
  ec2_vpc_route_table:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    tags:
      Name: "{{ route_table_name }}"
    subnets: [ "{{ subnet_result.subnet.id }}" ]
    # create a route to the internet gateway
    routes:
      - dest: "{{ destinationCidrBlock }}"
        gateway_id: "{{ igw_result.gateway_id }}"
  register: public_route_table

Creating security group

- name: Creating security group
  ec2_group:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    name: "{{ security_group_name }}"
    description: "{{ security_group_name }}"
    tags:
      Name: "{{ security_group_name }}"
    rules:
      - proto: all
        cidr_ip: "{{ port22CidrBlock }}"
        rule_desc: allow all traffic
  register: security_group_results

Launching the EC2 instance

- name: "Provisioning OS on AWS using Ansible"
ec2:
key_name: "ansiblekey"
instance_type: "t2.micro"
image: "ami-08e0ca9924195beba"
wait: yes
count: 1
vpc_subnet_id: "{{ subnet_result.subnet.id }}"
assign_public_ip: yes
region: "ap-south-1"
state: present
group_id: "{{ security_group_results.group_id }}"
aws_access_key: "{{aws_access_key}}"
aws_secret_key: "{{aws_secret_key}}"
instance_tags:
Name: "{{ item }}"
loop: "{{ OS_Names }}"
  • Vars file of the EC2 playbook:

---
# vars file for EC2-launch
image: "ami-089c6f2e3866f0f14"
instance_type: "t2.micro"
region: "us-east-2"
key: testingkey
vpc_subnet_id: "subnet-2321516f"
security_group_id: "sg-07a58bacace819405"
OS_Names:
  - "K8S_Master"
aws_access_key: 'xxxxxxxxxxxxxx'
aws_secret_key: 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
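Assuming the tasks and variables above are saved as a playbook named ec2.yml with its vars file (the filenames here are illustrative), the playbook can be run from the Ansible control node:

$ ansible-playbook ec2.yml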

We can confirm that the instance has launched after running the playbook by checking the AWS console.

Now let us configure it as the master node of our Kubernetes cluster.

Steps to be performed on all the nodes:

1. First, install Docker and start its service. Setting up Kubernetes requires Docker to use the systemd cgroup driver; by default, Docker uses the cgroupfs driver instead.

2. Configure Docker to use the systemd driver.

3. Configure the Kubernetes repository for yum.

4. Install kubectl, kubelet, and kubeadm.

Kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

Kubelet: An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers that were not created by Kubernetes.

Kubeadm: A Kubernetes cluster-management tool that performs the actions necessary to bootstrap a Kubernetes cluster. It is also used for upgrading the cluster, joining additional nodes, managing Kubernetes certificates, and configuring external authentication for the cluster.

  • Now we can start the kubelet service.
  • You also need to install iproute-tc to manage the network inside the Kubernetes cluster.

iproute-tc: The Traffic Control (tc) utility manages queueing disciplines, their classes, and attached filters and actions. It is the standard tool for configuring QoS on Linux; in short, it manages the network traffic in the cluster.
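The playbook below copies two local files to each node: daemon.json, which switches Docker’s cgroup driver to systemd, and kubernetes.repo, the yum repository definition. Their contents are not shown in this article; typical versions, following the Kubernetes documentation of the time, look like the following sketches.

daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl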

- name: Install Docker
  package:
    name: "docker"
    state: present

- name: Changing the Docker cgroup driver to systemd
  copy:
    src: daemon.json
    dest: "/etc/docker/daemon.json"

- name: Starting Docker services
  service:
    name: docker
    state: restarted
    enabled: yes

- name: Configuring Kubernetes repository
  copy:
    dest: "/etc/yum.repos.d/kubernetes.repo"
    src: kubernetes.repo

- name: Installing the k8s applications
  command: "yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes"

- name: Starting kubelet services
  service:
    name: "kubelet"
    state: restarted

- name: Installing iproute-tc
  package:
    name: 'iproute-tc'
    state: present

- name: Pulling images
  shell: kubeadm config images pull

- name: Configuring network
  shell: "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"

The above steps are common to the master and worker nodes; the differences appear in the steps below.

Setting up the Master Node

  • Configure the Kubernetes admin file
  • Initialize the control plane with kubeadm init
  • Generate a kubeadm token, which the worker nodes use to connect to the master node
  • Configure flannel with Kubernetes

Flannel is a virtual networking layer designed specifically for containers. It keeps the pods and nodes connected to the master node, and it is configured automatically on every worker node.
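The kube-flannel.yml manifest referenced below is not reproduced in this article; it can be fetched from the flannel project on GitHub (at the time of writing it lived under the coreos organization, later moved to flannel-io), for example:

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml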

- name: Initializing kubeadm services
  command: kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
  ignore_errors: true

- name: Creating .kube directory
  file:
    path: ~/.kube
    state: directory
    mode: 0755

- name: Link admin.conf to ~/.kube/config
  file:
    src: /etc/kubernetes/admin.conf
    dest: ~/.kube/config
    state: link
    mode: 0644

- name: Generating a token
  command: kubeadm token create --print-join-command
  register: token

- name: Set the kubeadm join command globally
  set_fact:
    kubernetes_join_command: >
      {{ token.stdout }}
  when: token.stdout is defined
  delegate_to: "{{ item }}"
  delegate_facts: true
  with_items: "{{ groups['all'] }}"

- name: Transferring network file
  copy:
    src: kube-flannel.yml
    dest: /root/kube-flannel.yml

- name: Creating an overlay network to connect worker nodes
  command: kubectl apply -f /root/kube-flannel.yml

That’s it; we are done setting up the master node.

Now, let us set up the worker nodes and connect them to complete the multi-cloud cluster.

First, let us set up a virtual machine on Microsoft Azure:

  • Here, first we need to create a resource group
  • Then we need to create a virtual network
  • After that, we need to add a subnet
  • Then we create a public IP address
  • We will enable SSH in order to configure the VM with Ansible later on
  • We need to create a virtual network interface card
  • Finally, we create the VM
# This playbook creates an Azure VM with a public IP, opens port 22 for SSH, and adds an SSH public key to the VM.
# Change the variables below to customize your VM deployment.
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: "{{ resource_group_name }}"
    vm_name: testvm
    location: eastus
    ssh_key: "<KEY>"
  tasks:
    - name: Create a resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"

    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        address_prefixes: "10.0.0.0/16"

    - name: Add subnet
      azure_rm_subnet:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        address_prefix: "10.0.1.0/24"
        virtual_network: "{{ vm_name }}"

    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: "{{ resource_group }}"
        allocation_method: Static
        name: "{{ vm_name }}"

    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound

    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        virtual_network: "{{ vm_name }}"
        subnet: "{{ vm_name }}"
        public_ip_name: "{{ vm_name }}"
        security_group: "{{ vm_name }}"

    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: "{{ ssh_key }}"
        network_interfaces: "{{ vm_name }}"
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest

This will launch a CentOS VM.

Now let us launch a VM on GCP:

  • First, we need to create a compute disk
  • Then we need to reserve an external IP address for the instance
  • Finally, we configure and create the instance
- name: Create an instance
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: my-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /home/my_account.json
    zone: "us-central1-a"
    region: "us-central1"
  tasks:
    - name: Create a disk
      gcp_compute_disk:
        name: 'disk-instance'
        size_gb: 50
        source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: disk

    - name: Create an address
      gcp_compute_address:
        name: 'address-instance'
        region: "{{ region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: address

    - name: Create an instance
      gcp_compute_instance:
        state: present
        name: test-vm
        machine_type: n1-standard-1
        disks:
          - auto_delete: true
            boot: true
            source: "{{ disk }}"
        network_interfaces:
          - network: null  # use the default network
            access_configs:
              - name: 'External NAT'
                nat_ip: "{{ address }}"
                type: 'ONE_TO_ONE_NAT'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
      register: instance

    - name: Wait for SSH to come up
      wait_for: host={{ address.address }} port=22 delay=10 timeout=60

    - name: Add host to groupname
      add_host: hostname={{ address.address }} groupname=new_instances

We are done launching the instances; now we need to install Docker and Kubernetes on them, as described in the common steps above.

Setting up as Worker Nodes

Just run the kubeadm join command with the token generated on the master:

- name: Connecting to the master node
  shell: >
    {{ kubernetes_join_command }}
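For reference, the join command that kubeadm generates on the master has the standard form (values below are placeholders, not output from this setup):

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>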

That’s it; we are done setting up the cluster.

To check that everything is working fine:

  • Now, let’s check the status of the cluster by logging in to our EC2 master node.
  • The kubelet service should be active and running ($ systemctl status kubelet).
  • Docker should also be active and running ($ systemctl status docker).

To check the status of the nodes,

$ kubectl get nodes
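To also verify that the system pods (flannel, the DNS pods, etc.) came up correctly, we can list the pods across all namespaces:

$ kubectl get pods --all-namespaces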

We can create an application of our choice and deploy it in the Kubernetes cluster.

#deploy the app
$ kubectl create deployment myapp --image=vimal13/apache-webserver-php

#expose the deployment to the outside world
$ kubectl expose deployment myapp --port=80 --type=NodePort

#we will get a NodePort to connect to the application
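The NodePort assigned to the service can then be looked up, and the application reached at http://<node-public-ip>:<node-port> from a browser:

$ kubectl get svc myapp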

We can also perform the above steps through each cloud’s GUI console in a similar fashion, for a better understanding of the process.

Thanks for reading!
