e2e: create and delete namespaces using labels by parikshithb · Pull Request #2050 · RamenDR/ramen · GitHub

e2e: create and delete namespaces using labels #2050


Open · wants to merge 7 commits into main
Conversation

parikshithb
Member
@parikshithb parikshithb commented May 22, 2025

This update ensures that only namespaces created by ramen e2e can be
deleted safely, using a label.

  • Add the ramen-e2e label (app.kubernetes.io/managed-by=ramen-e2e)
    during namespace creation, and check that the label exists before
    deletion to prevent accidental removal of unmanaged namespaces
  • Add CreateAppNamespaces/DeleteAppNamespaces functions to create and
    delete namespaces on both drclusters
  • Consolidate volsync annotation logic into an AddVolsyncAnnotation function
  • Add WaitForAppNamespacesDelete for consistent namespace cleanup waiting
    on both drclusters
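The labeling scheme above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the constant names follow the reviewer's later suggestions (managedByLabel, ramenE2e), and the helper names (e2eNamespaceLabels, safeToDelete) are hypothetical.

```go
package main

// Label key/value marking namespaces created by the e2e framework.
const (
	managedByLabel = "app.kubernetes.io/managed-by"
	ramenE2e       = "ramen-e2e"
)

// e2eNamespaceLabels returns the labels applied when ramen-e2e creates a
// namespace.
func e2eNamespaceLabels() map[string]string {
	return map[string]string{managedByLabel: ramenE2e}
}

// safeToDelete reports whether a namespace's labels mark it as managed by
// ramen-e2e, and therefore safe for the test framework to delete.
func safeToDelete(labels map[string]string) bool {
	return labels[managedByLabel] == ramenE2e
}
```

Deletion then becomes a no-op for any namespace that does not carry the label, which is what protects user-created namespaces.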

Testing:

  1. Namespaces created during tests on both dr clusters:
$ kubectl get namespace --context=dr2 | grep e2e
e2e-appset-deploy-cephfs-busybox      Active   18s
e2e-appset-deploy-rbd-busybox         Active   18s
e2e-disapp-deploy-cephfs-busybox      Active   18s
e2e-disapp-deploy-rbd-busybox         Active   18s
e2e-subscr-deploy-cephfs-busybox      Active   18s
e2e-subscr-deploy-rbd-busybox         Active   18s

$ kubectl get namespace --context=dr1 | grep e2e
e2e-appset-deploy-cephfs-busybox      Active   21s
e2e-appset-deploy-rbd-busybox         Active   21s
e2e-disapp-deploy-cephfs-busybox      Active   21s
e2e-disapp-deploy-rbd-busybox         Active   21s
e2e-subscr-deploy-cephfs-busybox      Active   21s
e2e-subscr-deploy-rbd-busybox         Active   21s
  2. ramen-e2e label added:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    volsync.backube/privileged-movers: "true"
  creationTimestamp: "2025-05-22T08:32:45Z"
  labels:
    app.kubernetes.io/managed-by: ramen-e2e
    kubernetes.io/metadata.name: e2e-appset-deploy-rbd-busybox
  name: e2e-appset-deploy-rbd-busybox
  resourceVersion: "148444"
  uid: 67b328ac-f9a7-4203-96ba-f3a965ca5f39
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
  3. Namespaces deleted during undeploy:
$ kubectl get namespace --context=dr2 | grep e2e

$ kubectl get namespace --context=dr1 | grep e2e

Based on #2035

Fixes #1958

Replace --timeout=5m with --wait=false in the kubectl
delete command to ensure that DeleteDiscoveredApps
returns immediately after initiating deletion instead
of waiting for resources to be fully deleted. This change
makes the function's behavior consistent with other delete
operations that separately handle deletion initiation and
waiting for completion, improving control flow.

Signed-off-by: Parikshith <parikshithb@gmail.com>
This commit adds infra to wait until resources are fully deleted
before tests continue, preventing races where subsequent tests
might be affected by lingering resources.

- Add WaitForResourceDelete generic function to wait for
  resources to be fully deleted
- Implement wait funcs for different resource types:
  - ApplicationSet
  - ConfigMap
  - Placement
  - ManagedClusterSetBinding

Signed-off-by: Parikshith <parikshithb@gmail.com>
This change updates the undeploy function for the
ApplicationSet by waiting for all resources to be deleted,
with a deadline carried as part of the context.

Signed-off-by: Parikshith <parikshithb@gmail.com>
This infra separation allows more control over the namespace
deletion process by explicitly waiting for namespace deletion
after initiating it. Updated the undeploy functions of Disapp
and Subscription, and the EnsureChannelDeleted function, by
adding a separate wait for the namespace with a shared deadline
from the context.

Signed-off-by: Parikshith <parikshithb@gmail.com>
Since we are performing common delete operations on both clusters,
the current comment needs to be updated to reflect that.

Signed-off-by: Parikshith <parikshithb@gmail.com>
This change replaces waitDRPCDeleted with WaitForDRPCDelete,
applying the shared deadline to DRPC deletion and to related
resource deletions such as placement and mcsb.

Signed-off-by: Parikshith <parikshithb@gmail.com>
This update ensures that only namespaces created by ramen e2e can be
deleted safely, using the "ramen-e2e" label.

- Add the ramen-e2e label (app.kubernetes.io/managed-by=ramen-e2e)
  during namespace creation, and check that the label exists before
  deletion to prevent accidental removal of unmanaged namespaces
- Add CreateAppNamespaces/DeleteAppNamespaces functions to create and
  delete namespaces on both drclusters
- Consolidate volsync annotation logic into an AddVolsyncAnnotation function
- Add WaitForAppNamespacesDelete for consistent namespace cleanup waiting
  on both drclusters

Signed-off-by: Parikshith <parikshithb@gmail.com>
@parikshithb parikshithb changed the title App ns e2e: create and delete namespaces using labels May 22, 2025
@parikshithb parikshithb removed the request for review from raghavendra-talur May 22, 2025 11:29
const (
// Namespace annotation for volsync to grant elevated permissions for mover pods
// More info: https://volsync.readthedocs.io/en/stable/usage/permissionmodel.html#controlling-mover-permissions
volsyncPrivilegedMovers = "volsync.backube/privileged-movers"
Let's rename this to "volsyncPrivilegedMoversAnnotation" to make the code clearer.

volsyncPrivilegedMovers = "volsync.backube/privileged-movers"

// Label to identify namespaces created by ramen e2e
ramenE2EManagedLabel = "app.kubernetes.io/managed-by"
This is not a ramen label; let's call it managedByLabel.


// Label to identify namespaces created by ramen e2e
ramenE2EManagedLabel = "app.kubernetes.io/managed-by"
ramenE2EManagedValue = "ramen-e2e"
This can be just "ramenE2e" - it can be used in many places.


func CreateNamespace(ctx types.Context, cluster types.Cluster, namespace string) error {
log := ctx.Logger()

Unneeded change

ns := &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: namespace,
Labels: map[string]string{
ramenE2EManagedLabel: ramenE2EManagedValue,
With the new names this will be much more clear:

managedByLabel: ramenE2e

@@ -32,56 +41,86 @@ func CreateNamespace(ctx types.Context, cluster types.Cluster, namespace string)
return err
}

log.Debugf("Namespace %q already exist in cluster %q", namespace, cluster.Name)
log.Debugf("Namespace %q already exists in cluster %q", namespace, cluster.Name)
If the namespace exists, we want to check whether it is using our label (so we will delete it later).

If not, we are running a test in a user-created namespace, which is problematic, but we cannot avoid this (e.g. a leftover namespace created by an older version of ramen-e2e or ramenctl). Let's log a warning about this.

log.Debugf("Namespace %q in cluster %q is not managed by ramen-e2e, skipping deletion",
namespace, cluster.Name)

return nil
}
Let's extract a function for checking whether the namespace is managed by ramen:

isManagedByRamenE2e(types.Context, types.Cluster, client.Obj) (bool, error)

It should return true/false based on the label, or fail if the object does not exist or the check failed. The caller will use k8serrors.IsNotFound(err), so we don't need to check internally.
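A sketch of the suggested helper, under simplifying assumptions: the real signature would take types.Context, types.Cluster, and a client.Object, while get here is a stand-in closure for fetching the object's labels from the cluster.

```go
package main

// isManagedByRamenE2e reports whether the named namespace carries the
// ramen-e2e managed-by label. Errors from get (including not-found) are
// returned unchanged so the caller can apply k8serrors.IsNotFound itself.
func isManagedByRamenE2e(get func(name string) (map[string]string, error), name string) (bool, error) {
	labels, err := get(name)
	if err != nil {
		return false, err
	}
	return labels["app.kubernetes.io/managed-by"] == "ramen-e2e", nil
}
```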

// CreateAppNamespaces creates a namespace on both drclusters with ramen-e2e label.
// The label ensures safe deletion by allowing only ramen-e2e managed namespaces to be removed.
// This prevents accidental deletion of namespaces not created by the e2e test framework.
func CreateAppNamespaces(ctx types.Context, namespace string) error {
If we can, let's split these changes so we have 2 commits:

  • First, add the infra for creating a namespace with our label and deleting only the namespaces we created
  • Second, use the infra to create, delete, and wait for namespaces in all deployers

Successfully merging this pull request may close these issues.

Test clean does not delete application namespaces on managed clusters