Troubleshooting Pods Stuck in Terminating State in Kubernetes

  • Post category: Containers / Kubernetes
  • Post last modified: August 2, 2024

Introduction:

In a Kubernetes cluster, it’s not uncommon to encounter pods stuck in a terminating state. This issue can disrupt the normal operation of your application and cause frustration. Fortunately, with a few troubleshooting steps and SSH commands, you can identify the root cause of the problem and resolve it effectively. In this article, we will explore various techniques to troubleshoot and rectify pods stuck in the terminating state in Kubernetes.

Understanding Pod Termination in Kubernetes:

Before we dive into troubleshooting, let’s briefly understand how pod termination works in Kubernetes. When you delete a pod, Kubernetes initiates the termination process by sending a termination signal (SIGTERM) to the pod’s containers, which then begin their shutdown routines. If the containers have not exited by the end of the grace period (30 seconds by default), the kubelet sends SIGKILL. Once all containers have terminated, Kubernetes proceeds with the final cleanup by removing the pod object from the API server.

Identifying Pods Stuck in Terminating State:

To troubleshoot pods stuck in the terminating state, you first need to identify the affected pods. Run the following command to list all pods in the cluster:

kubectl get pods --all-namespaces

Look for pods with the “Terminating” status that remain stuck for an extended period.
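The listing above can be narrowed down to just the stuck pods. A minimal sketch follows; the heredoc simulates typical kubectl output (the pod names in it are made up for illustration), so the filter can be shown end to end:

```shell
# Narrow `kubectl get pods --all-namespaces` down to terminating pods.
# On a live cluster, pipe kubectl straight into awk:
#
#   kubectl get pods --all-namespaces | awk '$4 == "Terminating"'
#
cat <<'EOF' | awk '$4 == "Terminating" {print $1 "/" $2}'
NAMESPACE     NAME          READY   STATUS        RESTARTS   AGE
default       web-7f9c      1/1     Running       0          4d
default       worker-5b2d   0/1     Terminating   0          3h
kube-system   coredns-abc   1/1     Running       0          9d
EOF
```

Column 4 is STATUS in the default kubectl output; the filter prints namespace/name pairs for anything still terminating.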

Troubleshooting Steps:

Now, let’s explore the troubleshooting steps to resolve the issue.

1. Verify Pod Status: Check the events associated with the problematic pod using the following command:

kubectl describe pod <pod-name> -n <namespace>

Look for any error messages or events that indicate the reason for the pod’s termination delay.
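The Events section at the end of the describe output is usually where the blocker is named (e.g. a FailedKillPod warning or a volume detach error). It can be isolated with sed; the heredoc below is sample output standing in for a live `kubectl describe`:

```shell
# Isolate the Events section of a pod description. On a live cluster:
#
#   kubectl describe pod <pod-name> -n <namespace> | sed -n '/^Events:/,$p'
#
cat <<'EOF' | sed -n '/^Events:/,$p'
Name:     worker-5b2d
Status:   Terminating
Events:
  Warning  FailedKillPod  2m  kubelet  error killing pod: container still running
EOF
```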

2. Check for Pending Deletion:
Sometimes, Kubernetes might fail to delete a pod due to issues with the underlying components. Check for any pending deletions by running the following command:

kubectl get pod <pod-name> -n <namespace> -o json | jq '.metadata.finalizers'

If the output lists any finalizers, the pod cannot be fully deleted until every finalizer is removed by the controller that set it. If that controller is no longer running, you can attempt a force deletion:

kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
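Note that force deletion alone does not always clear the pod: the API server keeps the object until its finalizer list is empty. A common remedy, sketched below, is to patch the finalizers away (use with care, since finalizers usually guard cleanup logic owned by a controller). The sample JSON, including the example.com/protect finalizer, is hypothetical and stands in for real `kubectl get pod ... -o json` output:

```shell
# Clearing a pod's finalizers when the owning controller is gone:
#
#   kubectl patch pod <pod-name> -n <namespace> \
#     --type=merge -p '{"metadata":{"finalizers":null}}'
#
# Below, sed extracts the finalizer list from sample pod JSON, the same
# check the jq command above performs on a live cluster:
pod_json='{"metadata":{"name":"worker-5b2d","finalizers":["example.com/protect"]}}'
echo "$pod_json" | sed -n 's/.*"finalizers":\(\[[^]]*\]\).*/\1/p'
```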

3. Verify Node Connectivity:
Ensure that the node hosting the pod is accessible. SSH into the node using the following command:

ssh <username>@<node-ip>

Replace <username> with your SSH username and <node-ip> with the IP address of the node hosting the pod. Once connected, verify that the kubelet and container runtime are running, and check for processes consuming excessive CPU or memory; if the kubelet on a node is down, pods on that node can never finish terminating.
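A quick health pass on the node might look like the following. This is a sketch that assumes a Linux node with a systemd-managed kubelet; service names and available tools vary by distribution:

```shell
# Basic node triage after SSHing in. A dead kubelet is a common reason a
# pod never leaves Terminating.
systemctl status kubelet --no-pager || true   # kubelet service state (systemd nodes)
cat /proc/loadavg                             # load averages
df -h /                                       # disk pressure
ps aux --sort=-%cpu | head -n 5               # biggest CPU consumers
```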

4. Drain the Node:
If the node appears to have issues, you can drain it to move all the pods to other nodes. Use the following command:

kubectl drain <node-name> --ignore-daemonsets

Replace <node-name> with the name of the problematic node. This command safely evicts the pods from the node, allowing them to be rescheduled elsewhere. Once the node is healthy again, re-enable scheduling on it with kubectl uncordon <node-name>.

5. Manually Delete the Pod:
If all else fails, you can terminate the pod’s containers directly on the node. SSH into the node, identify the container ID of the stuck pod, and then forcefully remove the container. On nodes that use the Docker runtime, run:

docker ps | grep <pod-name>
docker rm -f <container-id>

Replace <pod-name> with the name of the problematic pod and <container-id> with the ID of the container associated with the pod.
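Most current clusters run containerd rather than Docker (dockershim was removed in Kubernetes v1.24), so the docker CLI will be absent and crictl is the equivalent node-side tool. Note that `crictl ps` lists container names rather than full pod names, so grep for the container or app name. The sample output and IDs below are made up for illustration:

```shell
# containerd-based nodes: use crictl instead of docker.
#
#   crictl ps | grep <container-name>   # find the container ID
#   crictl stop <container-id>
#   crictl rm <container-id>
#
# The lookup step is simulated below against sample `crictl ps` output:
cat <<'EOF' | grep worker | awk '{print $1}'
CONTAINER      IMAGE     CREATED       STATE     NAME      ATTEMPT   POD ID
1a2b3c4d5e6f   app:1.0   3 hours ago   Running   worker    0         9f8e7d6c
EOF
```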

Troubleshooting pods stuck in the terminating state can be a complex task, but with the help of SSH commands and the steps outlined in this article, you can effectively identify and resolve the issue. Remember to follow best practices and consult the Kubernetes documentation for further guidance.

Ashutosh Dixit

I am currently working as a Senior Technical Support Engineer with VMware Premier Services for Telco. Before this, I worked as a Technical Lead with Microsoft Enterprise Platform Support for Production and Premier Support. I am an expert in High-Availability, Deployments, and VMware Core technology along with Tanzu and Horizon.
