Multi-Cloud, Multi-Node Kubernetes Cluster ✨

Tirth Patel
4 min readApr 25, 2021

Task description 📄

📌 CREATE A MULTI-CLOUD SETUP of K8S cluster:

🔅 Launch a node in AWS
🔅 Launch a node in Azure
🔅 Launch a node on the local machine
🔅 And one over the network on a local system/cloud -> Master node
🔅 Then set up a multi-node Kubernetes cluster.

In this article, we will learn how to build a multi-cloud setup for a multi-node Kubernetes cluster. We will launch two EC2 instances on AWS: one as the master node and the other as a slave node. Then, we will launch a virtual machine in the Azure cloud and configure it as a Kubernetes slave node. Finally, we will launch a virtual OS on our laptop using Oracle VirtualBox and connect it to the Kubernetes master running in AWS.

Let's get started…

Launching Master node

First, create two EC2 instances and tag them appropriately. Then, install Docker Engine on the master node.

yum install docker -y 

Also, Kubernetes requires the container runtime's cgroup driver to be systemd. So, create a daemon.json file in /etc/docker that sets the cgroup driver to systemd.

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
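A malformed daemon.json prevents the docker service from starting, so it can be worth validating the file as JSON before restarting. A minimal sketch, using /tmp/daemon.json as a stand-in path for illustration:

```shell
# Write the cgroup-driver config to a sample file and validate it.
# On the real node the file would be /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```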

Now, restart the Docker service and enable it.

systemctl restart docker
systemctl enable docker --now

The second step is to configure the Kubernetes yum repo so that we can install kubeadm, kubelet, and kubectl.

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Install the required packages, pull the images needed to run the master node, and enable the kubelet service.

yum install kubeadm kubectl kubelet -y
kubeadm config images pull
systemctl enable kubelet --now

Install iproute-tc and let bridged traffic pass through iptables by configuring /etc/sysctl.d/k8s.conf.

yum install iproute-tc -y

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system

Apply the Flannel network add-on. (Note: kubectl only works once the cluster has been initialized and the kubeconfig copied, as shown in the next steps, so run this command after completing them.)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Almost done! Now initialize the cluster with the --control-plane-endpoint "PUBLICIP:PORT" option, where PORT is 6443.

kubeadm init --control-plane-endpoint "PUBLICIP:6443" --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Now, configure the master node as a client so that kubectl can talk to the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The master node is now configured properly. Let's set up the slave nodes.

Configuring Local System as k8s Slave

Run a virtual OS on top of Oracle VirtualBox and follow the steps below.

Here, I am using Red Hat Enterprise Linux 8 as the virtual OS.

Install Docker Engine, configure the Kubernetes repo, and install the required packages.

yum install docker -y

# Configure the Docker cgroup driver as systemd
vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker and enable it
systemctl restart docker
systemctl enable docker --now

# Configure the Kubernetes repo
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

# Install the required packages
yum install kubeadm kubectl kubelet -y

# Enable the kubelet service
systemctl enable kubelet --now

# Install iproute-tc and configure the iptables settings
yum install iproute-tc -y

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system

Disable swap on the local system.

swapoff -a
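Note that swapoff -a only disables swap until the next reboot; to make the change persistent, you would also comment out the swap entry in /etc/fstab. A sketch of that edit, demonstrated on a sample file (the device names and sample path are illustrative; on a real node you would edit /etc/fstab itself):

```shell
# Sample fstab with a swap entry (stand-in for the real /etc/fstab)
printf '%s\n' \
  '/dev/mapper/rhel-root /    xfs  defaults 0 0' \
  '/dev/mapper/rhel-swap swap swap defaults 0 0' > /tmp/fstab.sample

# Comment out any line that mounts a swap filesystem
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```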

Now, our slave is ready to join. Go to the master node and print the join command.

kubeadm token create --print-join-command 

Go to the virtual OS and run the join command.
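The printed join command will look roughly like the following; the token and CA certificate hash shown here are placeholders generated per cluster, so copy the exact command that kubeadm prints on your master:

```shell
kubeadm join PUBLICIP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```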

Run kubectl get nodes on the master node to verify that the new node has joined.

Now, let's connect the slave node running in AWS.

Repeat the same configuration steps as above and run the join command.

Check the total number of nodes by running kubectl get nodes on the master.

Launching k8s slave in Azure

Make sure to add the appropriate inbound and outbound rules in the VM's network security group. Here, I am allowing all ports.
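The same rule can also be created from the Azure CLI; a sketch, where the resource group and VM names are placeholders for your own:

```shell
# Open all ports on the VM's network security group (lab setup only;
# in production you would open just the ports Kubernetes needs, e.g. 6443)
az vm open-port --resource-group my-k8s-rg --name k8s-slave-vm --port '*'
```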

Now, follow the same steps as above. On the Azure VM, Podman comes pre-installed, so remove Podman, configure the Docker repo, and install docker-ce. Then repeat the slave configuration steps above and run the join command.
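The Podman removal and Docker installation can be sketched as follows. This assumes a RHEL 8 image and uses Docker's upstream CentOS repo, which is commonly used on RHEL as well; treat both as assumptions for your particular VM image:

```shell
# Remove the pre-installed Podman/Buildah stack
dnf remove podman buildah -y

# Add the upstream Docker CE repo and install docker-ce
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce -y
```

After this, repeat the cgroup driver, Kubernetes repo, and kubelet configuration exactly as in the local-VM section.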

Check again on the master node.

We have successfully connected an Azure VM, our local machine, and an Amazon EC2 instance as Kubernetes slave nodes to the Kubernetes master running on an AWS EC2 instance.

🔰 Keep Learning, Keep Sharing 🔰

Stay healthy 😃
