[ACIX-883] Auto-retry e2e tests in CI by Ishirui · Pull Request #37862 · DataDog/datadog-agent · GitHub

[ACIX-883] Auto-retry e2e tests in CI #37862


Open · Ishirui wants to merge 9 commits into main from pierrelouis.veyrenc/ACIX-883-retry-e2e-tests

Conversation

@Ishirui Ishirui (Contributor) commented Jun 11, 2025

What does this PR do?

Following up on ACIX-859 and #36422, this tries a new method for automatically rerunning failing e2e tests. Unlike the previous method, it does not re-run tests that are already known to be flaky and whose failure is accepted.
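
For illustration only, here is a minimal sketch of that retry flow (the function and variable names are made up and do not correspond to the actual code in `tasks/`): run the full suite once, then re-run only the failures that are not already accepted as flaky, up to a retry budget.

```python
# Hypothetical sketch of the retry flow (names are illustrative, not the real tasks/ code).
def run_with_retries(run_tests, get_failures, known_flaky: set[str], max_retries: int = 0) -> bool:
    result = run_tests(None)  # first pass: run the full suite
    for _ in range(max_retries):
        to_retry = [t for t in get_failures(result) if t not in known_flaky]
        if not to_retry:
            break  # nothing left to retry: only known-flaky (accepted) failures, or none at all
        result = run_tests(to_retry)  # re-run only the genuinely failing tests
    # The job is green if every remaining failure is a known, accepted flake
    return all(t in known_flaky for t in get_failures(result))
```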

Motivation

Increase CI reliability and reduce the time lost to flaky tests that are not yet marked as flaky.

Describe how you validated your changes

TBD

Possible Drawbacks / Trade-offs

Increased pipeline runtime, and reduced visibility into which tests are failing or flaky.

Additional Notes

@Ishirui Ishirui self-assigned this Jun 11, 2025
@github-actions github-actions bot added the team/agent-devx and short review (PR is simple enough to be reviewed quickly) labels Jun 11, 2025
@Ishirui Ishirui added the qa/no-code-change (No code change in Agent code requiring validation) label Jun 11, 2025
cit-pr-commenter bot commented Jun 11, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 28cfa4ea-0d2a-4c9f-b239-20a3827be827

Baseline: bcf595d
Comparison: 43d0f2e
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|
| docker_containers_cpu | % cpu utilization | +4.78 | [+1.72, +7.85] | 1 | Logs |
| uds_dogstatsd_to_api_cpu | % cpu utilization | +1.99 | [+1.09, +2.89] | 1 | Logs |
| quality_gate_logs | % cpu utilization | +0.76 | [-2.06, +3.58] | 1 | Logs, bounds checks dashboard |
| ddot_logs | memory utilization | +0.28 | [+0.19, +0.38] | 1 | Logs |
| otlp_ingest_logs | memory utilization | +0.25 | [+0.12, +0.38] | 1 | Logs |
| docker_containers_memory | memory utilization | +0.17 | [+0.09, +0.24] | 1 | Logs |
| file_tree | memory utilization | +0.15 | [+0.00, +0.30] | 1 | Logs |
| ddot_metrics | memory utilization | +0.07 | [-0.05, +0.19] | 1 | Logs |
| file_to_blackhole_1000ms_latency | egress throughput | +0.07 | [-0.52, +0.66] | 1 | Logs |
| file_to_blackhole_0ms_latency_http1 | egress throughput | +0.03 | [-0.54, +0.60] | 1 | Logs |
| file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.56, +0.58] | 1 | Logs |
| file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.65, +0.66] | 1 | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
| uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.27, +0.27] | 1 | Logs |
| file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.01 | [-0.24, +0.22] | 1 | Logs |
| file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.65, +0.61] | 1 | Logs |
| file_to_blackhole_500ms_latency | egress throughput | -0.07 | [-0.66, +0.52] | 1 | Logs |
| otlp_ingest_metrics | memory utilization | -0.09 | [-0.25, +0.07] | 1 | Logs |
| file_to_blackhole_0ms_latency_http2 | egress throughput | -0.13 | [-0.70, +0.45] | 1 | Logs |
| tcp_syslog_to_blackhole | ingress throughput | -0.15 | [-0.20, -0.09] | 1 | Logs |
| quality_gate_idle_all_features | memory utilization | -0.22 | [-0.31, -0.13] | 1 | Logs, bounds checks dashboard |
| uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.28 | [-0.32, -0.23] | 1 | Logs |
| quality_gate_idle | memory utilization | -0.43 | [-0.51, -0.36] | 1 | Logs, bounds checks dashboard |

Bounds Checks: ✅ Passed

| perf experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|
| docker_containers_cpu | simple_check_run | 10/10 | |
| docker_containers_memory | memory_usage | 10/10 | |
| docker_containers_memory | simple_check_run | 10/10 | |
| file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a small illustrative check follows this list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
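
As a reading aid, the decision rule can be expressed as a small check; this is illustrative only, not the Regression Detector's actual code, with the 5.00% tolerance and 90% confidence interval taken from this report.

```python
def is_regression(delta_mean_pct: float, ci: tuple[float, float], erratic: bool,
                  tolerance_pct: float = 5.0) -> bool:
    """Mirror of the three criteria above; not the detector's real implementation."""
    big_enough = abs(delta_mean_pct) >= tolerance_pct        # criterion 1
    ci_excludes_zero = not (ci[0] <= 0.0 <= ci[1])            # criterion 2 (90% CI)
    return big_enough and ci_excludes_zero and not erratic    # criterion 3

# Example with docker_containers_cpu from the table above: not flagged, since |+4.78| < 5.00
print(is_regression(4.78, (1.72, 7.85), erratic=False))  # False
```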

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.

agent-platform-auto-pr bot (Contributor) commented Jun 11, 2025

Static quality checks

❌ Please find below the results from static quality gates
Comparison made with ancestor bcf595d

Error

| Quality gate | Delta | On disk size (MiB) | Delta | On wire size (MiB) |
|---|---|---|---|---|
| docker_agent_windows1809_core | DataNotFound | DataNotFound | DataNotFound | DataNotFound |
Gate failure full details
Quality gate: docker_agent_windows1809_core
Error type: StackTrace
Error message:

Traceback (most recent call last):
  File "/go/src/github.com/DataDog/datadog-agent/tasks/quality_gates.py", line 156, in parse_and_trigger_gates
    gate_mod.entrypoint(**gate_inputs)
  File "/go/src/github.com/DataDog/datadog-agent/tasks/static_quality_gates/static_quality_gate_docker_agent_windows1809_core.py", line 5, in entrypoint
    generic_docker_agent_quality_gate(
  File "/go/src/github.com/DataDog/datadog-agent/tasks/static_quality_gates/lib/docker_agent_lib.py", line 116, in generic_docker_agent_quality_gate
    image_on_wire_size, image_on_disk_size = get_image_url_size(ctx, metric_handler, gate_name, url)
                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/go/src/github.com/DataDog/datadog-agent/tasks/static_quality_gates/lib/docker_agent_lib.py", line 44, in get_image_url_size
    image_on_disk_size = calculate_image_on_disk_size(ctx, url)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/go/src/github.com/DataDog/datadog-agent/tasks/static_quality_gates/lib/docker_agent_lib.py", line 10, in calculate_image_on_disk_size
    ctx.run(f"crane pull {url} output.tar")
  File "/.pyenv/versions/3.12.6/lib/python3.12/site-packages/invoke/context.py", line 104, in run
    return self._run(runner, command, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.pyenv/versions/3.12.6/lib/python3.12/site-packages/invoke/context.py", line 113, in _run
    return runner.run(command, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.pyenv/versions/3.12.6/lib/python3.12/site-packages/invoke/runners.py", line 395, in run
    return self._run_body(command, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.pyenv/versions/3.12.6/lib/python3.12/site-packages/invoke/runners.py", line 451, in _run_body
    return self.make_promise() if self._asynchronous else self._finish()
                                                          ^^^^^^^^^^^^^^
  File "/.pyenv/versions/3.12.6/lib/python3.12/site-packages/invoke/runners.py", line 518, in _finish
    raise UnexpectedExit(result)
invoke.exceptions.UnexpectedExit: Encountered a bad command exit code!

Command: 'crane pull registry.ddbuild.io/ci/datadog-agent/agent:v68250201-43d0f2ec-7-win1809-servercore-amd64 output.tar'

Exit code: 1

Stdout: already printed

Stderr: already printed


Static quality gates prevent the PR from merging! You can check the static quality gates confluence page for guidance. We also have a toolbox page listing tools that are useful for debugging a size increase.

Successful checks

Info

| Quality gate | Delta | On disk size (MiB) | Delta | On wire size (MiB) |
|---|---|---|---|---|
| agent_deb_amd64 | 0 | 697.05 < 697.37 | -0.02 | 176.09 < 177.03 |
| agent_deb_amd64_fips | 0 | 695.35 < 695.59 | -0.01 | 175.57 < 176.51 |
| agent_heroku_amd64 | 0 | 358.68 < 359.67 | -0 | 96.53 < 97.47 |
| agent_msi | 0 | 958.87 < 959.86 | 0 | 146.31 < 147.27 |
| agent_rpm_amd64 | 0 | 697.04 < 697.36 | -0.01 | 177.7 < 178.56 |
| agent_rpm_amd64_fips | 0 | 695.34 < 695.58 | -0.03 | 177.54 < 178.43 |
| agent_rpm_arm64 | 0 | 687.06 < 687.37 | -0.01 | 161.11 < 161.99 |
| agent_rpm_arm64_fips | 0 | 685.47 < 685.72 | -0.05 | 160.23 < 161.11 |
| agent_suse_amd64 | 0 | 697.04 < 697.36 | -0.01 | 177.7 < 178.56 |
| agent_suse_amd64_fips | 0 | 695.34 < 695.58 | -0.03 | 177.54 < 178.43 |
| agent_suse_arm64 | 0 | 687.06 < 687.37 | -0.01 | 161.11 < 161.99 |
| agent_suse_arm64_fips | 0 | 685.47 < 685.72 | -0.05 | 160.23 < 161.11 |
| docker_agent_amd64 | +0 | 780.85 < 781.16 | -0 | 268.81 < 269.63 |
| docker_agent_arm64 | 0 | 794.32 < 794.62 | -0 | 256.16 < 257.0 |
| docker_agent_jmx_amd64 | +0 | 972.04 < 972.35 | +0 | 337.78 < 338.6 |
| docker_agent_jmx_arm64 | 0 | 974.11 < 974.41 | -0 | 321.1 < 321.97 |
| docker_agent_windows1809 | -0 | 1180.52 < 1488.0 | -0 | 416.18 < 488.95 |
| docker_agent_windows1809_core_jmx | +0 | 6032.18 < 6361.0 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows1809_jmx | +0 | 1302.15 < 1609.5 | +0.05 | 458.49 < 531.32 |
| docker_agent_windows2022 | +0.01 | 1199.66 < 1507.0 | -0.03 | 428.92 < 501.7 |
| docker_agent_windows2022_core | -0.02 | 5883.77 < 6311.0 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows2022_core_jmx | +0.24 | 6005.42 < 6313.0 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows2022_jmx | +0.04 | 1321.4 < 1628.16 | -0.02 | 471.2 < 543.98 |
| docker_cluster_agent_amd64 | +0 | 212.85 < 213.79 | -0 | 72.39 < 73.33 |
| docker_cluster_agent_arm64 | -0 | 228.76 < 229.64 | -0 | 68.67 < 69.6 |
| docker_cws_instrumentation_amd64 | 0 | 7.08 < 7.12 | +0 | 2.95 < 3.29 |
| docker_cws_instrumentation_arm64 | 0 | 6.69 < 6.92 | +0 | 2.7 < 3.07 |
| docker_dogstatsd_amd64 | 0 | 39.22 < 39.57 | -0 | 15.12 < 15.76 |
| docker_dogstatsd_arm64 | 0 | 37.88 < 38.2 | +0 | 14.53 < 14.83 |
| dogstatsd_deb_amd64 | 0 | 30.45 < 31.4 | +0 | 8.0 < 8.95 |
| dogstatsd_deb_arm64 | 0 | 29.02 < 29.97 | +0 | 6.94 < 7.89 |
| dogstatsd_rpm_amd64 | 0 | 30.45 < 31.4 | +0 | 8.01 < 8.96 |
| dogstatsd_suse_amd64 | 0 | 30.45 < 31.4 | +0 | 8.01 < 8.96 |
| iot_agent_deb_amd64 | 0 | 50.44 < 51.38 | -0 | 12.85 < 13.79 |
| iot_agent_deb_arm64 | 0 | 47.91 < 48.85 | -0 | 11.14 < 12.09 |
| iot_agent_deb_armhf | 0 | 47.48 < 48.42 | -0.01 | 11.2 < 12.16 |
| iot_agent_rpm_amd64 | 0 | 50.44 < 51.38 | -0 | 12.86 < 13.81 |
| iot_agent_rpm_arm64 | 0 | 47.91 < 48.85 | +0 | 11.16 < 12.11 |
| iot_agent_suse_amd64 | 0 | 50.44 < 51.38 | -0 | 12.86 < 13.81 |

@Ishirui Ishirui force-pushed the pierrelouis.veyrenc/ACIX-883-retry-e2e-tests branch from bcee97a to ad4f5d4 Compare June 12, 2025 09:41
@github-actions github-actions bot added the medium review (PR review might take time) label and removed the short review (PR is simple enough to be reviewed quickly) label Jun 12, 2025
@Ishirui Ishirui force-pushed the pierrelouis.veyrenc/ACIX-883-retry-e2e-tests branch from ad4f5d4 to 685fb37 Compare June 12, 2025 09:45
@Ishirui Ishirui marked this pull request as ready for review June 12, 2025 13:29
@Ishirui Ishirui requested review from a team as code owners June 12, 2025 13:29
@Ishirui Ishirui (Contributor, Author) left a comment

Todo:

  • Default retries to 0
  • If retry logic enabled, pass "nth retry" and "max retries" to TearDownSuite method to prevent tearing down the infra on test failure if there are going to be further retries

Try to make it so that if we don't pass any --max-retries flag, the behavior does not change at all

  • In order to pass these env vars, add it here so it gets loaded as a global Go var (see the sketch below)
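
As a rough sketch of the Python side of that last point, the invoke task could export the retry state to the `go test` process so the suite can read it into a global. The env var names `E2E_MAX_RETRIES` and `E2E_RETRY_ATTEMPT` are invented here for illustration; the real names and wiring may differ.

```python
import os

def build_test_env(max_retries: int, attempt: int) -> dict[str, str]:
    """Hypothetical helper: expose retry information to the Go test process via env vars
    so TearDownSuite can skip destroying the infra when another retry is still coming."""
    env = os.environ.copy()
    if max_retries > 0:  # keep behavior unchanged when --max-retries is not passed
        env["E2E_MAX_RETRIES"] = str(max_retries)   # invented name
        env["E2E_RETRY_ATTEMPT"] = str(attempt)     # invented name
    return env
```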

Ishirui added 7 commits June 19, 2025 15:47
The regex we construct in order to retry a specific set of failed tests was not working as expected without the non-capturing group. Specifically, it was selecting far too many tests, including some that were not even requested in the first place (due to similar names)!
This should not be strictly necessary in CI, as a later step already does some cleanup, but it might be useful when running locally with `--max-retries`.
@Ishirui Ishirui force-pushed the pierrelouis.veyrenc/ACIX-883-retry-e2e-tests branch from 685fb37 to 9994407 Compare June 19, 2025 13:50
@Ishirui Ishirui requested review from a team as code owners June 19, 2025 13:50
agent-platform-auto-pr bot (Contributor) commented Jun 19, 2025

Gitlab CI Configuration Changes

⚠️ Diff too large to display on Github.

Changes Summary

| Removed | Modified | Added | Renamed |
|---|---|---|---|
| 0 | 114 | 0 | 0 |

ℹ️ Diff available in the job log.

@Ishirui Ishirui force-pushed the pierrelouis.veyrenc/ACIX-883-retry-e2e-tests branch from 9994407 to f24099f Compare June 19, 2025 14:09
@@ -429,6 +465,28 @@ def write_result_to_log_files(logs_per_test, log_folder, test_depth=1):
f.write("".join(logs))


def _create_test_selection_regex(test_names: list[str]) -> str:
Ishirui (Contributor, Author) commented:

Not sure if this was the best place to put this; maybe I should move it to a directory under libs/ or something?
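
For context, here is a sketch of what such a helper could look like (the real body is in the diff above and may differ). The non-capturing group mentioned in the commit message is what keeps `^` and `$` applying to the whole alternation, so selecting `TestFoo` does not also select `TestFooBar`.

```python
import re

def _create_test_selection_regex(test_names: list[str]) -> str:
    """Sketch only: build a `go test -run`-style regex matching exactly the given test names."""
    escaped = (re.escape(name) for name in test_names)
    # Without the non-capturing group, "^TestFoo|TestBar$" would also match unrelated
    # tests with similar names (e.g. TestFooBar).
    return "^(?:" + "|".join(escaped) + ")$"

# _create_test_selection_regex(["TestAgent", "TestOTel"]) -> "^(?:TestAgent|TestOTel)$"
```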

@Ishirui Ishirui requested a review from KevinFairise2 June 19, 2025 14:57
@@ -196,7 +196,7 @@ new-e2e-unit-tests:
- pulumi login "s3://dd-pulumi-state?region=us-east-1&awssdk=v2&profile=$AWS_PROFILE"
script:
- dda inv -- -e gitlab.generate-ci-visibility-links --output=$EXTERNAL_LINKS_PATH
- dda inv -- -e new-e2e-tests.run --targets ./pkg/utils --junit-tar junit-${CI_JOB_ID}.tgz ${EXTRA_PARAMS}
- dda inv -- -e new-e2e-tests.run --targets ./pkg/utils --max-retries 3 --junit-tar junit-${CI_JOB_ID}.tgz ${EXTRA_PARAMS}
Member commented:

Let's not put it on the unit tests; that job should only be unit-testing the functions we have to create and destroy stacks, so if it is flaky there is something strange.


# In some cases the retry logic might leave resources undestroyed - make sure they are cleaned up
if max_retries > 0:
    _clean_stacks(ctx, skip_destroy=not keep_stacks)
Member commented:

If we do that, it will break the feature that allows teams to keep the stack when there are failures, see: https://github.com/DataDog/datadog-agent/blob/main/test/new-e2e/tests/otel/otel-agent/loadbalancing_test.go#L37

Member commented:

Do you think we can find a way to make sure that this call ignores the stacks we explicitly marked to be kept?
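
Purely as an illustration of the idea being discussed, a filter along these lines could skip stacks that were explicitly marked to be kept. Every name here is hypothetical (the marker, the stack listing, and the per-stack cleanup helper are all made up), and the discussion below notes that the real fix needs more than this.

```python
def _clean_stacks_except_kept(ctx, stacks, skip_destroy: bool) -> None:
    """Hypothetical sketch: skip any stack a test explicitly marked as 'keep on failure'."""
    for stack in stacks:
        if stack.get("keep_on_failure"):   # made-up marker
            continue                       # leave stacks that teams asked to keep
        _destroy_stack(ctx, stack, skip_destroy=skip_destroy)  # made-up helper
```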

Ishirui (Contributor, Author) commented:

Ack, I feared something like this would be the case. I think this means I need to have all the logic for determining whether or not to clean up in Go, in TearDownSuite. That'll be a bit more work, especially since I am not sure how to access all the data I need 😅. I'll have to take a further look.

@Ishirui Ishirui (Contributor, Author) commented Jun 20, 2025

Note after meeting with @KevinFairise2: there are actually a million different edge cases and a lot of complex logic to handle here, so implementing this will require significantly more work. In particular, we should not do a blind clean of all pulumi-managed stacks, but instead implement a mechanism to run only the TearDownSuite method.

@github-actions github-actions bot added the long review (PR is complex, plan time to review it) label and removed the medium review (PR review might take time) label Jun 20, 2025
Labels
changelog/no-changelog · long review (PR is complex, plan time to review it) · qa/no-code-change (No code change in Agent code requiring validation) · team/agent-devx
Projects: None yet
2 participants