[ACIX-883] Auto-retry e2e tests in CI #37862
Conversation
Regression Detector

Regression Detector Results
Metrics dashboard
Baseline: bcf595d
Optimization Goals: ✅ No significant changes detected

perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links
---|---|---|---|---|---|---
➖ | docker_containers_cpu | % cpu utilization | +4.78 | [+1.72, +7.85] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +1.99 | [+1.09, +2.89] | 1 | Logs |
➖ | quality_gate_logs | % cpu utilization | +0.76 | [-2.06, +3.58] | 1 | Logs bounds checks dashboard |
➖ | ddot_logs | memory utilization | +0.28 | [+0.19, +0.38] | 1 | Logs |
➖ | otlp_ingest_logs | memory utilization | +0.25 | [+0.12, +0.38] | 1 | Logs |
➖ | docker_containers_memory | memory utilization | +0.17 | [+0.09, +0.24] | 1 | Logs |
➖ | file_tree | memory utilization | +0.15 | [+0.00, +0.30] | 1 | Logs |
➖ | ddot_metrics | memory utilization | +0.07 | [-0.05, +0.19] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.07 | [-0.52, +0.66] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.03 | [-0.54, +0.60] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.56, +0.58] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.65, +0.66] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.27, +0.27] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.01 | [-0.24, +0.22] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.65, +0.61] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.07 | [-0.66, +0.52] | 1 | Logs |
➖ | otlp_ingest_metrics | memory utilization | -0.09 | [-0.25, +0.07] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.13 | [-0.70, +0.45] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | -0.15 | [-0.20, -0.09] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | -0.22 | [-0.31, -0.13] | 1 | Logs bounds checks dashboard |
➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.28 | [-0.32, -0.23] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | -0.43 | [-0.51, -0.36] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | docker_containers_cpu | simple_check_run | 10/10 | |
✅ | docker_containers_memory | memory_usage | 10/10 | |
✅ | docker_containers_memory | simple_check_run | 10/10 | |
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Static quality checks ❌
Please find below the results from static quality gates: Error
Gate failure full details
Static quality gates prevent the PR from merging! You can check the static quality gates confluence page for guidance. We also have a toolbox page available listing tools useful for debugging the size increase.
Successful checks: Info
bcee97a to ad4f5d4 (Compare)
ad4f5d4 to 685fb37 (Compare)
Todo:
- Default retries to 0.
- If retry logic is enabled, pass "nth retry" and "max retries" to the `TearDownSuite` method, to prevent tearing down the infra on test failure when further retries are coming.
- Try to make it so that if we don't pass any `--max-retries` flag, the behavior does not change at all.
- In order to pass these env vars, add them here so they get loaded as a global Go var (see the sketch below).
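As a rough illustration of that last point (not the PR's actual code; the variable and helper names here are made up), the invoke task could export the retry counters to the test process via environment variables that the Go side then reads into a global:

```python
# Hypothetical sketch: export the retry metadata when launching one test attempt,
# so that TearDownSuite can decide whether to keep the infra alive for a retry.
def _run_test_attempt(ctx, gotest_cmd: str, attempt: int, max_retries: int):
    env = {
        "E2E_MAX_RETRIES": str(max_retries),  # illustrative name only
        "E2E_RETRY_NUMBER": str(attempt),     # illustrative name only
    }
    # warn=True so a failing attempt does not abort the task; the caller decides
    # whether to retry based on which tests failed.
    return ctx.run(gotest_cmd, env=env, warn=True)
```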
The regex we are constructing in order to retry a specific set of failed tests was not working as expected without the non-capturing group. Specifically, it was selecting far too many tests, including some that were never requested in the first place (due to similar names)!
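For illustration only (this is a sketch of the idea, not the PR's implementation): without a group, the `^`/`$` anchors only bind to the first and last alternatives, so joining test names with `|` also matches unrelated tests with similar names; a non-capturing group keeps the anchors applied to every alternative.

```python
import re


def _example_test_selection_regex(test_names: list[str]) -> str:
    """Sketch of a `go test -run` selector limited to exactly the given tests."""
    # "^TestFoo|TestBar$" parses as "(^TestFoo)|(TestBar$)", so it also matches
    # e.g. "TestFooBar" or "MyTestBar". Wrapping the alternation in a
    # non-capturing group applies the anchors to each alternative.
    escaped = (re.escape(name) for name in test_names)
    return "^(?:" + "|".join(escaped) + ")$"


# _example_test_selection_regex(["TestFoo", "TestBar"]) -> "^(?:TestFoo|TestBar)$"
```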
This should not be strictly necessary in CI, as a later step already does some cleanup, but it might be useful when running locally with `--max-retries`.
685fb37 to 9994407 (Compare)
Gitlab CI Configuration Changes
Changes Summary
ℹ️ Diff available in the job log.
9994407 to f24099f (Compare)
@@ -429,6 +465,28 @@ def write_result_to_log_files(logs_per_test, log_folder, test_depth=1):
        f.write("".join(logs))


def _create_test_selection_regex(test_names: list[str]) -> str:
Not sure if this was the best place to put this; maybe I should move it to a directory under libs/ or something?
@@ -196,7 +196,7 @@ new-e2e-unit-tests:
    - pulumi login "s3://dd-pulumi-state?region=us-east-1&awssdk=v2&profile=$AWS_PROFILE"
  script:
    - dda inv -- -e gitlab.generate-ci-visibility-links --output=$EXTERNAL_LINKS_PATH
-    - dda inv -- -e new-e2e-tests.run --targets ./pkg/utils --junit-tar junit-${CI_JOB_ID}.tgz ${EXTRA_PARAMS}
+    - dda inv -- -e new-e2e-tests.run --targets ./pkg/utils --max-retries 3 --junit-tar junit-${CI_JOB_ID}.tgz ${EXTRA_PARAMS}
Let's not put it on the unit tests: they should only be unit-testing the functions we have to create and destroy stacks, and if those are flaky there is something strange.
# In some cases the retry logic might leave resources undestroyed - make sure they are cleaned up
if max_retries > 0:
    _clean_stacks(ctx, skip_destroy=not keep_stacks)
If we do that, it will break the feature that allows teams to keep the stack when there are failures, used in: https://github.com/DataDog/datadog-agent/blob/main/test/new-e2e/tests/otel/otel-agent/loadbalancing_test.go#L37
Do you think we can find a way to make sure that this call ignores the stacks we explicitly marked to be kept?
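(Editor's illustration only, not code from this PR: assuming the stacks that must survive a failure can be identified, e.g. by name, the cleanup could be given an exclusion list along these lines.)

```python
def _clean_stacks_except_kept(ctx, stack_names, kept_stack_names, destroy_stack):
    """Hypothetical sketch: skip stacks that were explicitly marked to be kept."""
    for name in stack_names:
        if name in kept_stack_names:
            continue  # leave the stack up so teams can investigate the failure
        destroy_stack(ctx, name)  # destroy_stack stands in for whatever _clean_stacks uses today
```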
Ack, I feared something like this would be the case. I think this means I need to have all the logic for determining whether to clean up or not in Go, in `TearDownSuite`. That'll be a bit more work, especially since I am not sure how to access all the data I need 😅. I'll have to take a further look.
Note after meeting with @KevinFairise2: there are actually a million different edge cases and complex logic to handle here; implementing this will require a lot more work. In particular, we should not do a blind clean of all pulumi-managed stacks, but instead implement a mechanism to run only the `TearDownSuite` method.
What does this PR do?
Following up on ACIX-859 and #36422, this tries a new method for automatically rerunning failing e2e tests. Unlike the previous method, this one does not re-run tests that are already known to be flaky and whose failure is accepted.
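At a high level the approach can be pictured as the following sketch (not the actual task code; names are illustrative): run the suite once, collect the failures that are not already accepted as flaky, and rerun only those by name until they pass or the retry budget is spent.

```python
def run_with_retries(run_suite, max_retries: int) -> set[str]:
    """Hypothetical sketch of the retry strategy; run_suite(test_filter) runs the
    e2e suite (optionally restricted to a -run regex) and returns the names of
    failing tests that are not already accepted as known-flaky."""
    failing = run_suite(test_filter=None)
    for _ in range(max_retries):
        if not failing:
            break
        # Rerun only the genuinely failing tests, selected by an anchored,
        # non-capturing-group regex as discussed in the review above.
        selector = "^(?:" + "|".join(sorted(failing)) + ")$"
        failing = run_suite(test_filter=selector)
    return failing  # non-empty means real failures remain after all retries
```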
Motivation
Increase CI reliability and diminish the impact (lost time) of flaky (but not yet marked as flaky) tests
Describe how you validated your changes
TBD
Possible Drawbacks / Trade-offs
Increased pipeline runtime + lost visibility on which tests are failing/flaky
Additional Notes