e2e: Wait for resource deletion during undeploy and unprotect by parikshithb · Pull Request #2035 · RamenDR/ramen · GitHub

e2e: Wait for resource deletion during undeploy and unprotect #2035


Open
parikshithb wants to merge 6 commits into main from wait_delete

Conversation

Member
@parikshithb parikshithb commented May 13, 2025

Improves how resources are cleaned up and waited for. It implements a more consistent separation between initiating resource deletion and waiting for (verifying) complete deletion of the resource within a deadline.

Changes:

  • Deadline-based resource deletion
    Implement a generic waitForResourceDelete function that accepts a deadline parameter (a minimal sketch follows this list)
    Replace individual timeouts with a shared deadline approach for all deletion operations
    Add resource-specific wait functions for ApplicationSets, ConfigMaps, Placements, DRPCs, Namespaces, and ManagedClusterSetBindings (mcsb)

  • Separation of deletion and waiting
    Refactor namespace deletion to separate the deletion request from the wait operation
    Change kubectl commands to use --wait=false to return immediately and handle waiting ourselves

  • Update deployers and actions to use new wait functions with deadline
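
To make the approach concrete, here is a minimal sketch (not the PR's exact code) of what a deadline-based wait for deletion could look like, assuming a controller-runtime client; the poll interval, names, and error message are illustrative only:

package util

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const retryInterval = time.Second

// waitForResourceDelete polls until obj is gone from the cluster or the
// deadline passes (sketch only; the real helper may differ).
func waitForResourceDelete(
	ctx context.Context,
	c client.Client,
	obj client.Object,
	deadline time.Time,
) error {
	key := client.ObjectKeyFromObject(obj)

	for {
		err := c.Get(ctx, key, obj)
		if errors.IsNotFound(err) {
			return nil // resource fully deleted
		}
		if err != nil {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timeout waiting for %T %q to be deleted", obj, key)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryInterval):
		}
	}
}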

Example wait logs for the different resources:

  • appset undeploy
2025-05-21T15:13:45.282+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:99	Waiting until ApplicationSet "argocd/appset-deploy-rbd-busybox" is deleted in cluster "hub"
2025-05-21T15:13:45.284+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:107	ApplicationSet "argocd/appset-deploy-rbd-busybox" deleted in cluster "hub"
2025-05-21T15:13:45.284+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:99	Waiting until ConfigMap "argocd/appset-deploy-rbd-busybox" is deleted in cluster "hub"
2025-05-21T15:13:45.287+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:107	ConfigMap "argocd/appset-deploy-rbd-busybox" deleted in cluster "hub"
2025-05-21T15:13:45.287+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:99	Waiting until Placement "argocd/appset-deploy-rbd-busybox" is deleted in cluster "hub"
2025-05-21T15:13:45.289+0530	DEBUG	appset-deploy-rbd-busybox	util/wait.go:107	Placement "argocd/appset-deploy-rbd-busybox" deleted in cluster "hub"
  • disapp undeploy:
2025-05-21T15:11:45.721+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:99	Waiting until Namespace "e2e-disapp-deploy-cephfs-busybox" is deleted in cluster "dr1"
2025-05-21T15:11:51.734+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:107	Namespace "e2e-disapp-deploy-cephfs-busybox" deleted in cluster "dr1"
2025-05-21T15:11:51.734+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:99	Waiting until Namespace "e2e-disapp-deploy-cephfs-busybox" is deleted in cluster "dr2"
2025-05-21T15:11:51.735+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:107	Namespace "e2e-disapp-deploy-cephfs-busybox" deleted in cluster "dr2"

  • subscription undeploy:
2025-05-21T15:13:16.455+0530	DEBUG	subscr-deploy-cephfs-busybox	util/wait.go:99	Waiting until Namespace "e2e-subscr-deploy-cephfs-busybox" is deleted in cluster "hub"
2025-05-21T15:13:22.472+0530	DEBUG	subscr-deploy-cephfs-busybox	util/wait.go:107	Namespace "e2e-subscr-deploy-cephfs-busybox" deleted in cluster "hub"
  • disapp unprotect:
2025-05-21T15:10:42.483+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:99	Waiting until DRPlacementControl "ramen-ops/disapp-deploy-cephfs-busybox" is deleted in cluster "hub"
2025-05-21T15:11:42.682+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:107	DRPlacementControl "ramen-ops/disapp-deploy-cephfs-busybox" deleted in cluster "hub"
2025-05-21T15:11:42.682+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:99	Waiting until Placement "ramen-ops/disapp-deploy-cephfs-busybox" is deleted in cluster "hub"
2025-05-21T15:11:42.683+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:107	Placement "ramen-ops/disapp-deploy-cephfs-busybox" deleted in cluster "hub"
2025-05-21T15:11:42.683+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:99	Waiting until ManagedClusterSetBinding "ramen-ops/default" is deleted in cluster "hub"
2025-05-21T15:11:42.683+0530	DEBUG	disapp-deploy-cephfs-busybox	util/wait.go:107	ManagedClusterSetBinding "ramen-ops/default" deleted in cluster "hub"
  • channel:
2025-05-21T15:13:45.291+0530	INFO	util/channel.go:89	Deleted channel "e2e-gitops/https-github-com-ramendr-ocm-ramen-samples-git" in cluster "hub"
2025-05-21T15:13:45.295+0530	DEBUG	util/wait.go:99	Waiting until Namespace "e2e-gitops" is deleted in cluster "hub"

Fixes #2019

@parikshithb parikshithb removed the request for review from raghavendra-talur May 13, 2025 12:03
@parikshithb parikshithb changed the title e2e: Wait for resource during undeploy and unprotect e2e: Wait for resource deletion during undeploy and unprotect May 13, 2025
Member
@nirs nirs left a comment

Awesome!

Member
@nirs nirs commented May 13, 2025

@parikshithb we need to consume this in ramenctl. It will probably reveal hidden bugs in ramen when running ramenctl test clean while storage is broken.

Member
@nirs nirs commented May 13, 2025

Looking at the tests, undeploy takes some time as we expect, except for appset tests - undeploy seems too quick to be true. Please check that we delete and wait for all resources.

--- PASS: TestDR (6.39s)
    --- PASS: TestDR/subscr-deploy-rbd-busybox (506.01s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Deploy (11.55s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Enable (95.78s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Failover (210.62s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Relocate (120.60s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Disable (60.17s)
        --- PASS: TestDR/subscr-deploy-rbd-busybox/Undeploy (7.29s)
    --- PASS: TestDR/disapp-deploy-rbd-busybox (559.66s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Deploy (3.05s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Enable (96.75s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Failover (208.40s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Relocate (148.33s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Disable (90.34s)
        --- PASS: TestDR/disapp-deploy-rbd-busybox/Undeploy (12.80s)
    --- PASS: TestDR/subscr-deploy-cephfs-busybox (686.72s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Deploy (11.67s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Enable (95.70s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Failover (271.56s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Relocate (271.46s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Disable (30.10s)
        --- PASS: TestDR/subscr-deploy-cephfs-busybox/Undeploy (6.23s)
    --- PASS: TestDR/disapp-deploy-cephfs-busybox (715.81s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Deploy (3.13s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Enable (154.38s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Failover (240.53s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Relocate (274.96s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Disable (30.18s)
        --- PASS: TestDR/disapp-deploy-cephfs-busybox/Undeploy (12.62s)
    --- PASS: TestDR/appset-deploy-rbd-busybox (730.45s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Deploy (0.35s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Enable (96.96s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Failover (271.07s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Relocate (301.71s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Disable (60.18s)
        --- PASS: TestDR/appset-deploy-rbd-busybox/Undeploy (0.18s)
    --- PASS: TestDR/appset-deploy-cephfs-busybox (851.34s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Deploy (0.36s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Enable (132.04s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Failover (326.06s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Relocate (332.59s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Disable (60.23s)
        --- PASS: TestDR/appset-deploy-cephfs-busybox/Undeploy (0.07s)

@parikshithb parikshithb force-pushed the wait_delete branch 3 times, most recently from b54130c to 33c6bf7 on May 15, 2025 14:45
@parikshithb parikshithb requested a review from nirs May 15, 2025 14:47
Member
@nirs nirs left a comment

Looks good!

nirs added a commit that referenced this pull request May 20, 2025
Previously we used util.Timeout when waiting for resources. In a flow with multiple waits, we could wait 10 minutes for each change before failing, which is far too long.

This change adds a deadline to the underlying context.Context returned
by TestContext.Context(). We set the deadline to 10 minutes after the
current time when starting one of the e2e operations (deploy, undeploy,
enable, disable, failover, relocate). Since we wait using
util.Sleep(), the call will fail with context.DeadlineExceeded on
timeout.

Advantages of using context based deadline:

- Limits the timeout for flows that include multiple waits, giving more
  predictable test time.
- Simplifies waiting for multiple objects with the same deadline, added in
  #2035
- Allows ramenctl to control the deadline when running multiple
  operations in parallel
- Enforces the deadline in external calls using a context
- Simplifies wait loops

It would be nice to add WithTimeout() to the types.Context interface, but
it does not work since we cannot overload this interface to return a
TestContext in the TestContext interface, and having an interface
returning a Context from TestContext is unwanted. For now we implement
WithTimeout() on the concrete type, so we cannot use it in the actual
tests. Test code that needs shorter timeouts can use .Context() directly.

This change requires a change in ramenctl, setting a deadline on the
context passed to e2e.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
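
The idea can be sketched roughly as follows (not ramen's actual code; helper names beyond Sleep are illustrative): one context deadline is shared by all waits in an operation, and the polling sleep returns context.DeadlineExceeded once that deadline passes.

package util

import (
	"context"
	"time"
)

// Sleep waits for d, or returns ctx.Err() (context.DeadlineExceeded on
// timeout) if the context ends first.
func Sleep(ctx context.Context, d time.Duration) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(d):
		return nil
	}
}

// waitUntil polls cond every second until it succeeds or the shared
// deadline on ctx expires; cond is a placeholder for a real check.
func waitUntil(ctx context.Context, cond func(context.Context) (bool, error)) error {
	for {
		done, err := cond(ctx)
		if err != nil || done {
			return err
		}
		if err := Sleep(ctx, time.Second); err != nil {
			return err
		}
	}
}

A caller would derive the shared deadline once per operation, for example with ctx, cancel := context.WithTimeout(parent, 10*time.Minute), and pass that ctx to every wait in the flow.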
Member
@nirs nirs commented May 20, 2025

@parikshithb #2043 is merged, please rebase.

Replace --timeout=5m with --wait=false in the kubectl
delete command to ensure that DeleteDiscoveredApps
returns immediately after initiating deletion instead
of waiting for resources to be fully deleted. This change
makes the function's behavior consistent with other delete
operations that separately handle deletion initiation and
waiting for completion, improving control flow.

Signed-off-by: Parikshith <parikshithb@gmail.com>
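
For illustration, a hypothetical shape of this change (the real DeleteDiscoveredApps may build the command differently): kubectl returns as soon as deletion is requested, and waiting happens later via the wait helpers.

package deployers

import (
	"context"
	"fmt"
	"os/exec"
)

// deleteDiscoveredApp requests deletion of the workload resources without
// waiting for them to disappear (hypothetical helper; flags shown are
// standard kubectl delete flags).
func deleteDiscoveredApp(ctx context.Context, cluster, namespace, dir string) error {
	cmd := exec.CommandContext(ctx, "kubectl", "delete",
		"--kustomize", dir,
		"--namespace", namespace,
		"--context", cluster,
		"--wait=false", // was --timeout=5m; waiting is now a separate step
		"--ignore-not-found")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl delete failed: %s: %w", out, err)
	}

	return nil
}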
This commit adds infrastructure to wait until resources are fully deleted
before tests continue, preventing races where subsequent tests might be
affected by lingering resources.

- Add WaitForResourceDelete generic function to wait for
  resources to be fully deleted
- Implement wait funcs for different resource types:
  - ApplicationSet
  - ConfigMap
  - Placement
  - ManagedClusterSetBinding

Signed-off-by: Parikshith <parikshithb@gmail.com>
This change updates the undeploy function for the ApplicationSet by
adding a wait for all resources to be deleted, with a deadline taken
from the context.
Signed-off-by: Parikshith <parikshithb@gmail.com>
This separation gives more control over the namespace deletion process,
allowing us to explicitly wait for namespace deletion after initiating it.
Updated the undeploy functions of Disapp and Subscription and the
EnsureChannelDeleted function by adding a separate wait for the namespace
with a shared deadline from the context.

Signed-off-by: Parikshith <parikshithb@gmail.com>
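
A rough sketch of the split for namespaces (helper names illustrative, not the PR's exact code), reusing the generic wait sketched earlier: the delete call returns as soon as deletion is requested, and the wait is a separate call sharing the operation's deadline.

package util

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// DeleteNamespace only initiates deletion; it does not wait.
func DeleteNamespace(ctx context.Context, c client.Client, name string) error {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	if err := c.Delete(ctx, ns); err != nil && !errors.IsNotFound(err) {
		return err
	}

	return nil
}

// WaitForNamespaceDelete waits until the namespace is gone, delegating to
// the generic waitForResourceDelete sketched above with the shared deadline.
func WaitForNamespaceDelete(ctx context.Context, c client.Client, name string, deadline time.Time) error {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}

	return waitForResourceDelete(ctx, c, ns, deadline)
}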
Since we are doing common delete ops on both clusters, the current
comment needs to be updated to reflect that.

Signed-off-by: Parikshith <parikshithb@gmail.com>
This change replaces waitDRPCDeleted with WaitForDRPCDelete, using the
shared deadline for DRPC deletion and for related resources such as the
placement and mcsb.

Signed-off-by: Parikshith <parikshithb@gmail.com>
@parikshithb
Member Author

Tested this update using ramenctl:

diff --git a/go.mod b/go.mod
index 4203c2d..d78f329 100644
--- a/go.mod
+++ b/go.mod
@@ -87,3 +87,5 @@ require (
        sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
        sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
 )
+
+replace github.com/ramendr/ramen/e2e v0.0.0-20250424122329-3f0bedcd598d => ../ramen/e2e
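
The replace directive points ramenctl's github.com/ramendr/ramen/e2e dependency at the local ramen checkout, so ramenctl builds and runs against this PR's e2e changes.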

Broke the rbd-mirror daemon by scaling it down on cluster dr2 while running ramenctl test run; protection failed for the rbd apps as expected.

./ramenctl test run -o run
⭐ Using report "run"
⭐ Using config "config.yaml"

🔎 Validate config ...
   ✅ Config validated

🔎 Setup environment ...
   ✅ Environment setup

🔎 Run tests ...
   ✅ Application "appset-deploy-rbd" deployed
   ✅ Application "appset-deploy-cephfs" deployed
   ✅ Application "disapp-deploy-cephfs" deployed
   ✅ Application "disapp-deploy-rbd" deployed
   ✅ Application "subscr-deploy-cephfs" deployed
   ✅ Application "subscr-deploy-rbd" deployed
   ✅ Application "disapp-deploy-cephfs" protected
   ✅ Application "subscr-deploy-cephfs" protected
   ✅ Application "appset-deploy-cephfs" protected
   ✅ Application "disapp-deploy-cephfs" failed over
   ✅ Application "subscr-deploy-cephfs" failed over
   ✅ Application "appset-deploy-cephfs" failed over
   ✅ Application "disapp-deploy-cephfs" relocated
   ✅ Application "disapp-deploy-cephfs" unprotected
   ✅ Application "disapp-deploy-cephfs" undeployed
   ❌ Failed to protect application "appset-deploy-rbd"
   ❌ Failed to protect application "disapp-deploy-rbd"
   ❌ Failed to protect application "subscr-deploy-rbd"
   ✅ Application "subscr-deploy-cephfs" relocated
   ✅ Application "subscr-deploy-cephfs" unprotected
   ✅ Application "subscr-deploy-cephfs" undeployed
   ✅ Application "appset-deploy-cephfs" relocated
   ✅ Application "appset-deploy-cephfs" unprotected
   ✅ Application "appset-deploy-cephfs" undeployed

🔎 Gather data ...
   ✅ Gathered data from cluster "hub"
   ✅ Gathered data from cluster "dr1"
   ✅ Gathered data from cluster "dr2"

❌ failed (3 passed, 3 failed, 0 skipped, 0 canceled)

Ran ramenctl test clean to validate that unprotect and undeploy clean up with the wait.

 ./ramenctl test clean -o clean
⭐ Using report "clean"
⭐ Using config "config.yaml"

🔎 Validate config ...
   ✅ Config validated

🔎 Clean tests ...
   ✅ Application "subscr-deploy-cephfs" cleaned up
   ✅ Application "appset-deploy-cephfs" cleaned up
   ✅ Application "disapp-deploy-cephfs" cleaned up
   ✅ Application "disapp-deploy-rbd" cleaned up
   ✅ Application "appset-deploy-rbd" cleaned up
   ✅ Application "subscr-deploy-rbd" cleaned up

🔎 Clean environment ...
   ✅ Environment cleaned

✅ passed (6 passed, 0 failed, 0 skipped, 0 canceled)

appset cleanup deletes and waits until the drpc, appset, placement, and mcsb are deleted:

2025-05-21T18:15:49.692+0530	DEBUG	appset-deploy-rbd	deployers/crud.go:413	Deleted applicationset "argocd/appset-deploy-rbd" in cluster "hub"
2025-05-21T18:15:49.703+0530	DEBUG	appset-deploy-rbd	deployers/crud.go:302	Deleted configMap "argocd/appset-deploy-rbd" in cluster "hub"
2025-05-21T18:15:49.887+0530	DEBUG	appset-deploy-rbd	dractions/crud.go:91	Deleted drpc "argocd/appset-deploy-rbd" in cluster "hub"
2025-05-21T18:15:49.887+0530	DEBUG	appset-deploy-rbd	util/wait.go:99	Waiting until DRPlacementControl "argocd/appset-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:50.885+0530	DEBUG	appset-deploy-rbd	deployers/crud.go:154	Deleted placement "argocd/appset-deploy-rbd" in cluster "hub"
2025-05-21T18:15:50.885+0530	DEBUG	appset-deploy-rbd	util/wait.go:99	Waiting until ApplicationSet "argocd/appset-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:50.887+0530	DEBUG	appset-deploy-rbd	util/wait.go:107	ApplicationSet "argocd/appset-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:15:50.887+0530	DEBUG	appset-deploy-rbd	util/wait.go:99	Waiting until ConfigMap "argocd/appset-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:50.888+0530	DEBUG	appset-deploy-rbd	util/wait.go:107	ConfigMap "argocd/appset-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:15:50.888+0530	DEBUG	appset-deploy-rbd	util/wait.go:99	Waiting until Placement "argocd/appset-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:16:28.579+0530	DEBUG	appset-deploy-rbd	util/wait.go:107	DRPlacementControl "argocd/appset-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:16:28.580+0530	INFO	appset-deploy-rbd	dractions/actions.go:134	Workload unprotected
2025-05-21T18:16:28.579+0530	DEBUG	appset-deploy-rbd	util/wait.go:107	Placement "argocd/appset-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:16:28.580+0530	INFO	appset-deploy-rbd	deployers/appset.go:92	Workload undeployed

disapp cleanup deletes and waits for the drpc, placement, mcsb, and app namespaces to be deleted:

2025-05-21T18:15:49.699+0530	DEBUG	disapp-deploy-rbd	dractions/crud.go:91	Deleted drpc "ramen-ops/disapp-deploy-rbd" in cluster "hub"
2025-05-21T18:15:50.685+0530	DEBUG	disapp-deploy-rbd	deployers/crud.go:154	Deleted placement "ramen-ops/disapp-deploy-rbd" in cluster "hub"
2025-05-21T18:15:50.687+0530	DEBUG	disapp-deploy-rbd	deployers/crud.go:78	ManagedClusterSetBinding "ramen-ops/default" not found in cluster "hub"
2025-05-21T18:15:50.687+0530	DEBUG	disapp-deploy-rbd	deployers/crud.go:81	Deleted ManagedClusterSetBinding "ramen-ops/default" in cluster "hub"
2025-05-21T18:15:50.687+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until DRPlacementControl "ramen-ops/disapp-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:50.687+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until DRPlacementControl "ramen-ops/disapp-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:51.491+0530	DEBUG	disapp-deploy-rbd	deployers/crud.go:444	Deleted discovered app "test-disapp-deploy-rbd/busybox" in cluster "dr1"
2025-05-21T18:15:52.673+0530	DEBUG	disapp-deploy-rbd	deployers/crud.go:444	Deleted discovered app "test-disapp-deploy-rbd/busybox" in cluster "dr2"
2025-05-21T18:15:52.681+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until Namespace "test-disapp-deploy-rbd" is deleted in cluster "dr1"
2025-05-21T18:16:00.716+0530	DEBUG	disapp-deploy-rbd	util/wait.go:107	DRPlacementControl "ramen-ops/disapp-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:16:00.716+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until Placement "ramen-ops/disapp-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:16:00.717+0530	DEBUG	disapp-deploy-rbd	util/wait.go:107	Placement "ramen-ops/disapp-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:16:00.717+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until ManagedClusterSetBinding "ramen-ops/default" is deleted in cluster "hub"
2025-05-21T18:16:00.717+0530	DEBUG	disapp-deploy-rbd	util/wait.go:107	ManagedClusterSetBinding "ramen-ops/default" deleted in cluster "hub"
2025-05-21T18:16:00.717+0530	INFO	disapp-deploy-rbd	dractions/disapp.go:122	Workload unprotected
2025-05-21T18:16:03.705+0530	DEBUG	disapp-deploy-rbd	util/wait.go:107	Namespace "test-disapp-deploy-rbd" deleted in cluster "dr1"
2025-05-21T18:16:03.705+0530	DEBUG	disapp-deploy-rbd	util/wait.go:99	Waiting until Namespace "test-disapp-deploy-rbd" is deleted in cluster "dr2"
2025-05-21T18:16:03.706+0530	DEBUG	disapp-deploy-rbd	util/wait.go:107	Namespace "test-disapp-deploy-rbd" deleted in cluster "dr2"
2025-05-21T18:16:03.706+0530	INFO	disapp-deploy-rbd	deployers/disapp.go:112	Workload undeployed

subscr cleanup deletes and waits for the drpc and the subscription namespace to be deleted:

2025-05-21T18:15:49.697+0530	DEBUG	subscr-deploy-rbd	util/wait.go:99	Waiting until DRPlacementControl "test-subscr-deploy-rbd/subscr-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:15:50.287+0530	DEBUG	subscr-deploy-rbd	util/wait.go:99	Waiting until Namespace "test-subscr-deploy-rbd" is deleted in cluster "hub"
2025-05-21T18:16:34.212+0530	DEBUG	subscr-deploy-rbd	util/wait.go:107	DRPlacementControl "test-subscr-deploy-rbd/subscr-deploy-rbd" deleted in cluster "hub"
2025-05-21T18:16:41.410+0530	DEBUG	subscr-deploy-rbd	util/wait.go:107	Namespace "test-subscr-deploy-rbd" deleted in cluster "hub"

Complete clean logs:
test-clean.log
