Non-unique collection indexes #3721
Conversation
Force-pushed from 23974d4 to 435717e
@tpw56j After this PR is merged I would kindly soft-freeze the …
Force-pushed from 435717e to 05d1978
Unfortunately, the idea of universal indexes turned out to be unsuccessful. Instead of speeding things up, it caused a significant slowdown.
Force-pushed from 4e0c2f3 to 4612678
Force-pushed from c64979f to d5345b4
@SchoolGuy This PR is almost ready; all that remains is to clean up the code.
The best results were:
0001_e207a9f921ba4c4979162e40a88af2bcce24a1e7_20240621_084042_uncommited-changes.json - main branch
Since the results were mixed, I think we need to do more testing, including tests with more than 10 objects, where indexes can perform better.
@tpw56j Once 3.3.5 is out I will focus on this PR. Until then feel free to experiment with it as much as desired.
Force-pushed from d5345b4 to ef27317
Force-pushed from 4650b34 to 2a61bbc
@SchoolGuy With the testing parameters test_rounds = 5, objs_default_count = 10, profiles_count = 100, systems_count = 1000, the estimated time to complete the tests on my hardware was 65 days. That is why I am now trying to select parameters so that the tests can be completed within a week. On bare Rockchip RK3399 (arm64) hardware with a small number of objects, the results were as follows:
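As a rough illustration of how these parameters interact, the sketch below shows a minimal benchmark loop driven by the values quoted above. The `populate` and `run_operation` helpers are placeholders invented for this example, not Cobbler's actual performance-test code; the point is only that the setup for `profiles_count = 100` and `systems_count = 1000` is repeated every round, which is why the total runtime grows so quickly.

```python
# Minimal sketch of a benchmark loop using the parameters mentioned above.
# populate() and run_operation() are placeholders, not Cobbler's real test code.
import time

test_rounds = 5
objs_default_count = 10   # default count used for the remaining object types
profiles_count = 100
systems_count = 1000


def populate(profiles: int, systems: int) -> None:
    """Placeholder: create the given number of profile and system objects."""


def run_operation(name: str) -> None:
    """Placeholder: execute one timed collection operation (add/edit/rename/...)."""


timings = {}
for _ in range(test_rounds):
    # The expensive part: every round rebuilds the full object set.
    populate(profiles_count, systems_count)
    for op in ("add_system", "edit_system", "rename_system", "remove_system"):
        start = time.perf_counter()
        run_operation(op)
        timings.setdefault(op, []).append(time.perf_counter() - start)
```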
@tpw56j Thanks for those numbers. That seems like quite a lot of time. Currently, I am waiting for an OS to be put onto the worker that we are getting sponsored. After that, I can deploy the runner application and start moving jobs onto the runner. It might take a week or two until that is ready.
Force-pushed from 084829b to 35fa208
Force-pushed from fd84f5f to 6cf3775
Force-pushed from ebf2111 to 5cd6908
@SchoolGuy I sped up the performance tests in this PR. Previously, most of the time was spent on preparing/deleting a set of objects for the tests with … As a result, if previously with …
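The comment above is truncated, but one common way to achieve the described speed-up is to build the expensive object set once per test module instead of once per test, so that only the measured operations are timed. The fixture and helper names below are assumptions for illustration, not the test utilities actually used in this PR.

```python
# Illustrative sketch only: reuse one prepared object set across a whole test
# module so the timed tests no longer pay the setup/teardown cost each time.
import pytest


def create_object_set(profiles: int, systems: int) -> list:
    """Placeholder for the expensive preparation step."""
    return [f"system{i}" for i in range(systems)]


def delete_object_set(objects: list) -> None:
    """Placeholder for the expensive cleanup step."""
    objects.clear()


@pytest.fixture(scope="module")
def object_set():
    objects = create_object_set(profiles=100, systems=1000)
    yield objects                 # every test in the module reuses this set
    delete_object_set(objects)    # teardown runs only once per module


def test_rename_system(object_set):
    assert object_set             # a real test would time a rename here
```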
Force-pushed from 5cd6908 to 2c25139
@SchoolGuy I compared the main branch plus the fixes for #3816 and #3817 with this PR on bare metal (Nanopi R4S) and got quite satisfactory results from my point of view. Therefore, I think this PR is completely ready for review. As expected, the performance drop in this PR occurred on operations where indexes cannot be used and only introduce additional overhead: deserialization, copying distros, and deleting and renaming systems. The best results were achieved when editing an attribute of an item that is inherited by its descendants. At the same time, thanks to the non-unique indexes, the total time for running all tests sequentially decreased from 1714 minutes to 92 minutes. Test parameters:
Worst results:
Best results:
Full results:
P.S. Unfortunately, I did not split this PR into commits separate from the implementation of the indexes, but I did do that for the backport to 3.3 in #3818.
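The trade-off described above can be illustrated with a minimal non-unique index: a mapping from an attribute value to the set of item names carrying that value. Lookups (for example, finding all systems of one profile, or all items inheriting a given attribute value) become dictionary hits instead of full collection scans, while every add, delete, rename, or deserialization has to keep the mapping up to date. The class below is a simplified sketch for illustration and does not reproduce the actual index implementation in this PR.

```python
# Simplified sketch of a non-unique index: attribute value -> set of item names.
# Not the PR's actual code; it only illustrates where the speed-up and the
# overhead come from.
from collections import defaultdict


class NonUniqueIndex:
    def __init__(self) -> None:
        self._index: dict = defaultdict(set)

    def add(self, value: str, name: str) -> None:
        # Extra work on every add/deserialize.
        self._index[value].add(name)

    def remove(self, value: str, name: str) -> None:
        # Extra bookkeeping on every delete.
        self._index[value].discard(name)
        if not self._index[value]:
            del self._index[value]

    def update(self, old: str, new: str, name: str) -> None:
        # A rename or attribute edit must touch the index twice.
        self.remove(old, name)
        self.add(new, name)

    def lookup(self, value: str) -> set:
        # O(1) dictionary hit instead of scanning the whole collection.
        return set(self._index.get(value, ()))


index = NonUniqueIndex()
index.add("webserver", "system1")
index.add("webserver", "system2")
assert index.lookup("webserver") == {"system1", "system2"}
index.update("webserver", "dbserver", "system2")
assert index.lookup("webserver") == {"system1"}
```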
I do believe that when this PR is paired with #3814, many of the performance regressions in the add_* operations can be circumvented, as the felt reaction time of the API should be much better. While I am not happy about the regressions in the copy_* areas, we will have to pay some price to gain the performance, and it is still much, much better than without caching. As such, I find the changes valid. Thank you again for this very large contribution!
I will try to take care of your … for pyright. If possible, I will of course split the commit to break out the change for syncing the systems.
Force-pushed from 2c25139 to 74f9369
Force-pushed from 74f9369 to c09aaed
Force-pushed from c09aaed to 2a151ae
The failing tests have (mostly) nothing to do with the changes in this PR but are rather ongoing issues with the shared GitHub Runners. I have still not been able to find time to set up our own runners, so sadly we need to allow the CI to fail in these areas for now.
Linked Items
Fixes #3720, #3817, #3816
Description
This PR adds the ability to create non-unique collection indexes and to set their properties in the settings.
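As a hypothetical illustration of how per-collection index properties could be expressed in the settings, the fragment below uses invented key names; it is not taken from Cobbler's real settings schema and only shows the general shape of such a configuration.

```python
# Hypothetical example only: the key names below are invented and do not match
# Cobbler's real settings.yaml keys.
example_index_settings = {
    "systems": {
        "profile": {"indexed": True, "unique": False},      # non-unique: many systems share a profile
        "mac_address": {"indexed": True, "unique": True},
    },
    "profiles": {
        "distro": {"indexed": True, "unique": False},
    },
}
```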
Behaviour changes
Old:
New:
Category
This is related to a:
Tests