How to remove a stuck namespace

With the help of the Kubernetes API

Update: Scroll down to see some feedback from the community for alternative ways to solve the issue!

Yet another article

Yes, this will be blog article number 200,000,000 on this subject. Today at work I got stuck on an extremely persistent namespace. The little guy just didn't want to go away.

It was just stuck in the Terminating state. Wait as long as you want, this pesky namespace is not going anywhere. And yes, I tried waiting!

First I put all my CLI hacks into action:

k delete ns cdi --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "cdi" force deleted

No chance of getting the namespace to disappear!

But what is the reason behind this phenomenon? Why does a namespace sometimes get stuck?

Finalizers

What are finalizers? Here is an extract from the official Kubernetes documentation:

Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned.

When you try to delete the namespace resource, the API server handling the delete request notices the values in the finalizers field and does the following:

  • Modifies the object to add a metadata.deletionTimestamp field with the time you started the deletion.

  • Prevents the object from being removed until its metadata.finalizers field is empty.

  • Returns a 202 status code (HTTP "Accepted").
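
Both effects are easy to observe on a stuck namespace. Here is a quick sketch, assuming the namespace is called cdi like mine:

# When was the deletion requested?
k get ns cdi -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'

# Which finalizers are still set on the object?
k get ns cdi -o jsonpath='{.metadata.finalizers}{"\n"}'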

My next move was to patch the resource by manually setting the metadata.finalizers field to an empty list:

kubectl patch ns/cdi -p '{"metadata":{"finalizers":[]}}' --type=merge
namespace/cdi patched

Welp! This did not help either. The namespace was still stuck in the Terminating state. The reason: for a Namespace object, the kubernetes finalizer does not live in metadata.finalizers at all, but in spec.finalizers, and that field can only be changed through the dedicated finalize subresource.
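
You can verify this on the stuck namespace with a quick sketch (again for my cdi namespace):

# metadata.finalizers was emptied by the patch, but the actual
# namespace finalizer lives in spec.finalizers and is still set:
k get ns cdi -o jsonpath='{.spec.finalizers}{"\n"}'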

Then I thought about things I learned in my CKA course! I mean, we all pay a good amount of money for it, so there should be some return on the investment, right? There is a way to communicate directly with the Kubernetes API server by creating a local proxy server.

Using the finalize endpoint

The steps are very simple. First, get the JSON definition of the resource with the following command:

k get ns cdi -o json > cid.json

Open the JSON file, remove the kubernetes item from the spec.finalizers array, and save the file.
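
Before the edit, the spec section of the file contains the namespace finalizer; after the edit the array is simply empty:

Before:
"spec": {
    "finalizers": ["kubernetes"]
}

After:
"spec": {
    "finalizers": []
}

If you prefer not to edit by hand, a tool like jq can make the same change in one go (a sketch; assumes jq is installed):

k get ns cdi -o json | jq '.spec.finalizers = []' > cid.json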

Now we start a proxy server (an application-level gateway) between our machine and the Kubernetes API server. The proxy will be available on http://127.0.0.1:8001.

Type this command in a new terminal window:

k proxy
Starting to serve on 127.0.0.1:8001

Now we can call the Kubernetes API's http://127.0.0.1:8001/api/v1/namespaces/{NAME_OF_STUCKED_NAMESPACE}/finalize endpoint for the namespace with our modified JSON. Replace the placeholder {NAME_OF_STUCKED_NAMESPACE} with the name of your namespace.

curl -k -H "Content-Type: application/json" -X PUT --data-binary @cid.json http://127.0.0.1:8001/api/v1/namespaces/cdi/finalize

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdi",
    "uid": "6c957c91-1f1a-463f-99fe-04a121c48ec6",
    "resourceVersion": "37800934",
    "creationTimestamp": "2022-07-26T13:04:08Z",
    "deletionTimestamp": "2022-08-11T11:32:26Z",
    "labels": {
      "cdi.kubevirt.io": ""
    },
    "managedFields": [
      {
        "manager": "kubectl-create",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2022-07-26T13:04:08Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:metadata":{"f:labels":{".":{},"f:cdi.kubevirt.io":{}}},"f:status":{"f:phase":{}}}
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2022-08-11T11:32:32Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"NamespaceContentRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionContentFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceFinalizersRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}
      }
    ]
  },
  "spec": {

  },
  "status": {
    "phase": "Terminating",
    "conditions": [
      {
        "type": "NamespaceDeletionDiscoveryFailure",
        "status": "True",
        "lastTransitionTime": "2022-08-11T11:32:31Z",
        "reason": "DiscoveryFailed",
        "message": "Discovery failed for some groups, 2 failing: unable to retrieve the complete list of server APIs: subresources.kubevirt.io/v1: the server is currently unable to handle the request, subresources.kubevirt.io/v1alpha3: the server is currently unable to handle the request"
      },
      {
        "type": "NamespaceDeletionGroupVersionParsingFailure",
        "status": "False",
        "lastTransitionTime": "2022-08-11T11:32:32Z",
        "reason": "ParsedGroupVersions",
        "message": "All legacy kube types successfully parsed"
      },
      {
        "type": "NamespaceDeletionContentFailure",
        "status": "False",
        "lastTransitionTime": "2022-08-11T11:32:32Z",
        "reason": "ContentDeleted",
        "message": "All content successfully deleted, may be waiting on finalization"
      },
      {
        "type": "NamespaceContentRemaining",
        "status": "False",
        "lastTransitionTime": "2022-08-11T11:32:32Z",
        "reason": "ContentRemoved",
        "message": "All content successfully removed"
      },
      {
        "type": "NamespaceFinalizersRemaining",
        "status": "False",
        "lastTransitionTime": "2022-08-11T11:32:32Z",
        "reason": "ContentHasNoFinalizers",
        "message": "All content-preserving finalizers finished"
      }
    ]
  }
}

And boom, the namespace is gone! For good! The conditions in the response also reveal the root cause: discovery was failing for the subresources.kubevirt.io API groups, so the namespace controller could never finish its cleanup.
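
To double-check, ask for the namespace one more time; the API server should now report that it no longer exists:

k get ns cdi
Error from server (NotFound): namespaces "cdi" not found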

Took some time, but what a relief!

Community contributions

Here is an alternative one-liner from Suman Chakraborty. It fetches the namespace JSON, empties the finalizers array with sed, and pushes the result straight to the finalize endpoint via kubectl replace --raw:

kubectl get namespace "stucked-namespace" -o json \
  | tr -d "\n" \
  | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
  | kubectl replace --raw /api/v1/namespaces/stucked-namespace/finalize -f -

And here is a very good article written by Andrew Block.

Here is the link to the article -> cloud.redhat.com/blog/the-hidden-dangers-of..

Another fantastic contribution comes from Pieter Lange.

Here is the link to the kill-kube-ns repo.

Another great tool is from Darren Shepherd.