## 1. AWS EC2 Instance Provisioning

- **Instance Type:** t3.medium or larger for masters, t3.large or larger for workers.
- **AMI:** RHEL 8 or 9 (e.g., `RHEL-8.8.0_HVM-20230713-x86_64-47-Hourly2-GP2`)
- **Number of Instances:**
  - 3 master nodes (e.g., `k8s-master-1`, `k8s-master-2`, `k8s-master-3`)
  - 2 worker nodes (e.g., `k8s-worker-1`, `k8s-worker-2`)
- **Storage:** Minimum 30 GB GP3/GP2 for the OS; consider more for worker nodes.
- **Security Group:** Create a security group allowing:
  - SSH (port 22) from your IP.
  - All TCP/UDP traffic within the security group (for inter-node communication).
  - Kubernetes ports:
    - Masters: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 10259 (kube-scheduler), 10257 (kube-controller-manager)
    - Workers: 10250 (kubelet), 30000-32767 (NodePort services)
- **Networking:** All instances in the same VPC and subnet. Assign a public IP for SSH access.

## 2. Initial Server Configuration (All Nodes)

SSH into each instance as `ec2-user` and switch to `root`:

```bash
sudo su -
```

### 2.1. Update System & Install Dependencies

```bash
yum update -y
yum install -y device-mapper-persistent-data lvm2 wget net-tools vim bash-completion
```

### 2.2. Disable SELinux

```bash
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

### 2.3. Disable Swap

```bash
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

### 2.4. Configure Kernel Modules & Sysctl

```bash
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
```

### 2.5. Install Containerd

```bash
# Add the Docker repository (it provides the containerd.io package)
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd
yum install -y containerd.io

# Generate the default config, switch to the systemd cgroup driver, and restart
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl enable containerd --now
systemctl restart containerd
```

### 2.6. Add Kubernetes Repository

```bash
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```

Note: Adjust `v1.28` to your desired Kubernetes version.

### 2.7. Install Kubeadm, Kubelet, Kubectl

```bash
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```

## 3. Master Node Configuration (One Master Node Only - k8s-master-1)

### 3.1. Initialize Kubernetes Control Plane

Run this command ONLY on the first master node (e.g., `k8s-master-1`). Use the private IP of the first master node as `--control-plane-endpoint`. (For production HA, `--control-plane-endpoint` should point at a load balancer in front of all masters rather than a single master's IP, which remains a single point of failure.)

```bash
kubeadm init \
  --upload-certs \
  --control-plane-endpoint "k8s-master-1-private-ip" \
  --pod-network-cidr "10.244.0.0/16" \
  --kubernetes-version "v1.28.0"
```

- Replace `k8s-master-1-private-ip` with the actual private IP of your first master node.
- Adjust `--kubernetes-version` to match your installed version.
- **Important:** Save the output, especially the `kubeadm join` command for workers and the `kubeadm join --control-plane` command for the other masters.

### 3.2. Configure Kubeconfig (on k8s-master-1)

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

### 3.3. Install Pod Network Add-on (on k8s-master-1)

This example uses Flannel. Other options include Calico, Cilium, etc.
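Flannel's stock manifest ships with `10.244.0.0/16` as its default pod network, which is why that value was passed to `kubeadm init` above. A quick sanity check before applying the manifest (a sketch; assumes `curl` is available on the master):

```shell
# Sketch: confirm the CIDR baked into the Flannel manifest matches the
# --pod-network-cidr passed to kubeadm init (10.244.0.0/16 in this guide).
MANIFEST_URL="https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml"
curl -fsSL "$MANIFEST_URL" | grep '"Network"'
# The printed line should contain 10.244.0.0/16; if it differs, download the
# manifest, edit net-conf.json inside it, and apply the edited copy instead.
```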
```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```

### 3.4. Verify Cluster Status (on k8s-master-1)

```bash
kubectl get nodes
kubectl get pods -A
```

Wait until all core pods (`kube-system` namespace) are running.

## 4. Join Other Master Nodes (k8s-master-2, k8s-master-3)

On `k8s-master-2` and `k8s-master-3`, run the `kubeadm join` command for control-plane nodes that was provided in the output of `kubeadm init` on `k8s-master-1`. It will look similar to this:

```bash
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
```

After joining, verify on `k8s-master-1`:

```bash
kubectl get nodes
```

All three master nodes should eventually show as `Ready`.

## 5. Join Worker Nodes (k8s-worker-1, k8s-worker-2)

On each worker node (`k8s-worker-1`, `k8s-worker-2`), run the `kubeadm join` command for worker nodes that was provided in the output of `kubeadm init` on `k8s-master-1`. It will look similar to this:

```bash
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

After joining, verify on any master node:

```bash
kubectl get nodes
```

All worker nodes should eventually show as `Ready`.

## 6. Optional: Install Metrics Server (on any Master)

Required for `kubectl top` commands to work.

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

Wait a few minutes, then test:

```bash
kubectl top nodes
kubectl top pods -A
```

## 7. Troubleshooting Common Issues

- **Pods stuck in `Pending`:** Check `kubectl describe pod <pod-name> -n <namespace>` for scheduler errors, resource constraints, or network issues.
- **Pods stuck in `ContainerCreating`:** Check the containerd logs (`journalctl -u containerd`), image pull issues, or a cgroup driver mismatch.
- **Nodes not `Ready`:** Check the kubelet logs (`journalctl -u kubelet`). Ensure swap is off, SELinux is permissive, and the required kernel modules are loaded.
- **Flannel issues:** Ensure `--pod-network-cidr` matches Flannel's configuration. Check the Flannel pod logs.
- **Firewall:** Ensure all necessary ports are open in the AWS security groups and in any host-level firewalls (e.g., `firewalld`).
- **Token/hash expiry:** `kubeadm join` tokens expire after 24 hours. Generate a new one if needed:

  ```bash
  kubeadm token create --print-join-command
  ```

## 8. Post-Setup Steps

- Install an ingress controller (e.g., NGINX Ingress Controller) for external access to services.
- Set up persistent storage (e.g., the AWS EBS CSI driver).
- Implement monitoring (e.g., Prometheus and Grafana).
- Configure logging (e.g., the ELK stack).
- Regularly update Kubernetes components.
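As a final check, the join and verification steps above can be smoke-tested with a small script run from any master (a sketch; the expected count of 5 is an assumption based on the 3-master/2-worker layout used in this guide):

```shell
#!/bin/sh
# Sketch: verify that every node has joined the cluster and reports Ready.
# EXPECTED=5 is an assumption: 3 masters + 2 workers as provisioned above.
EXPECTED=5
READY=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 == "Ready"' | wc -l)
if [ "$READY" -eq "$EXPECTED" ]; then
    echo "OK: all $EXPECTED nodes Ready"
else
    echo "WARN: only $READY/$EXPECTED nodes Ready" >&2
    kubectl get nodes
fi
```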