Unittester failures summary #16833
Conversation
Thanks for the PR! LGTM - the CI failure is unrelated. Can we start enabling these flags on the various CI runs as well?
Looks good! Just the one question
Made the summary look the same as the summary of the Python runner results - for now it's only for CompareValues(). Will add more.
I may need to add some test cases to verify that all failures make it into the summary.
There is the … I've also modified the return types of most of the … I think that the filename should be defined somewhere else, not in the …
We will still have to scroll in the case of long logs like this.
I was going to say yes. I think if you summarize everything, the message becomes a bit too long. The reason we want this is just to see which files have failing test cases. It's much nicer to look at the files, then run the tests locally to find out exactly what failed. If you want a summary that also includes the failures, then I suggest having two flags.
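The two-flag idea suggested above could be sketched like this. This is a hypothetical sketch: only `--summarize-failures` appears in this PR, and the second flag name (`--summarize-failures-full`) is an assumption for illustration.

```python
import argparse

def build_parser():
    # Hypothetical argument parser sketching the two-flag idea from the
    # review: one flag prints only the failing test files at the end of
    # the run, the other additionally repeats each failure's full output.
    parser = argparse.ArgumentParser(description="unit test runner (sketch)")
    parser.add_argument("--summarize-failures", action="store_true",
                        help="print the list of failing test files at the end")
    parser.add_argument("--summarize-failures-full", action="store_true",
                        help="also repeat each failure's full output in the summary")
    return parser
```

With this split, CI runs that only need a quick overview would pass the first flag, while debugging runs could opt into the full output.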
I think the current approach is fine - showing the error again is helpful as well. This is useful if we want to e.g. check whether an error is expected. Sometimes errors are also hard to reproduce locally. The HTTPFS example is a bit excessive because there are many errors, and the errors are in …
…w the unittester to print the failed test names + failed line as a run summary
…to the test file changed returning types for some of the logger methods
Can we pick this up again? After digging through some more logs this is still very much a needed change. I've left some more comments:
Thanks for the fixes! Looks good. Some more comments below. I also noticed that we are not reporting timeout failures in run_tests_one_by_one.py, see e.g. the "Release Assertions with Clang" run. Can we report those as well?
[2573/4670]: test/sql/join/asof/test_asof_join_missing.test_slow
Still running...
Still running...
Still running...
(TIMED OUT)
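Reporting such timeouts from the script could look roughly like the following. This is a hedged sketch, not the PR's actual code: the function name, the return values, and the default timeout are all assumptions.

```python
import subprocess

def run_one_test(unittest_binary, test_name, timeout_seconds=3600):
    """Run a single test and classify the outcome.

    Returns one of "passed", "failed", or "timed out", so that timed-out
    tests can be recorded in the end-of-run summary instead of being
    silently dropped. The binary path and timeout value are illustrative.
    """
    try:
        completed = subprocess.run(
            [unittest_binary, test_name],
            capture_output=True,
            timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        return "timed out"
    return "passed" if completed.returncode == 0 else "failed"
```

A caller would then append any `"timed out"` results to the same failure list that feeds the summary.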
Force-pushed from 2d3d0f7 to 46df878
Thanks!
Currently when we have a failure in the CI (running unit tests), the actual test failure is hidden in mountains of successful runs.
We should print the output of all failing tests again at the end of running the unittester so we can more easily find the failing tests. This can make it easier to look through the CI.
This PR adds a Failures Summary that is printed at the end of the logs by default.
Passing the --summarize-failures flag to ./build/release/test/unittest enables the summary output at the end of a test run. There is also a Python script that runs the unit tests one by one (scripts/run_tests_one_by_one.py), which prints the summary by default. The Python script easily captures the whole failure-details message, while the unittest binary captures only the failing case with the problematic line number.

It should look like this for unittest:

And for scripts/run_tests_one_by_one.py:
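The summary collection described above can be sketched roughly as follows. This is a minimal Python sketch, not the PR's implementation (which lives in the C++ unittester and in scripts/run_tests_one_by_one.py); the class name, field names, and output layout here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class FailureSummary:
    """Collects failing tests during a run and renders them at the end.

    Each entry holds the test file, the problematic line number, and the
    failure message, mirroring the "failing case + line number" level of
    detail the unittest binary can capture.
    """
    failures: list = field(default_factory=list)

    def record(self, test_file, line_number, message):
        self.failures.append((test_file, line_number, message))

    def render(self):
        if not self.failures:
            return "All tests passed."
        lines = ["", "====", "Failures Summary", "===="]
        for test_file, line_number, message in self.failures:
            lines.append(f"{test_file}:{line_number}")
            lines.append(f"    {message}")
        return "\n".join(lines)
```

The runner would call `record()` each time a test fails (or times out) and print `render()` once, right at the end of the log, so the failing files are visible without scrolling through the successful runs.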