Simplify layout get view at #1443
Conversation
Mhh, I simplified it already. And I tested it on all 2^32^2 combinations; it's always the same.
And as I understand the comment there, the binary search was replaced because it was too slow.
Let's see some benchmarks and numbers before we argue about performance; otherwise we're just guessing. Edit: some more questions/points.
Force-pushed from 1b05e5c to 2f2ab77
Force-pushed from 2f2ab77 to d561d14
Now I think I know what happened. The more complicated STL version which recently replaced it was almost 8x slower.
@Technius, yes, the benchmarking code is already in there... in the first commit. It's copy-pasted from the original benchmarking (hence the leftover plotting support). I've force-pushed to remove the bits you saw left over in the last commit. Thanks. All options have been tested to make sure they give the same results. The size_t issue had affected the original algorithm, but that was fixed before the new measurements.
Can you test it with -O3? The STL functions should've been inlined.
I've run a benchmark: there is no difference between the new STL-iterator-based linear search version and the old index-based version, so there is no STL overhead. The binary search version has a stable O(log n) run time that stays very small for all vector sizes (~60 to ~120 nanoseconds). The linear search times are smaller for a difference below 100 to 150 pages, but become very high for a difference of 10000 pages (~8000 ns). I think the best solution is to use the binary search: the difference between the algorithms is very small between 0 and 150 pages, and above that the linear search performs badly, though still acceptably (~8 µs for a 10000-page diff search). I assume the earlier benchmarks weren't created correctly (i.e. without optimizations turned on).
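For illustration, here is a minimal sketch of the two strategies being compared; `pageOffsets` and the function names are hypothetical stand-ins, not the actual Xournal++ code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Linear scan: O(n). Fast when the target is only a few pages away,
// but degrades badly when the jump spans thousands of pages.
std::size_t findPageLinear(const std::vector<int>& pageOffsets, int y) {
    std::size_t i = 0;
    while (i + 1 < pageOffsets.size() && pageOffsets[i + 1] <= y) {
        ++i;
    }
    return i;
}

// Binary search: O(log n). A stable cost regardless of distance.
// Assumes pageOffsets is sorted ascending and y >= pageOffsets.front().
std::size_t findPageBinary(const std::vector<int>& pageOffsets, int y) {
    auto it = std::upper_bound(pageOffsets.begin(), pageOffsets.end(), y);
    return static_cast<std::size_t>(it - pageOffsets.begin()) - 1;
}
```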
I think I see the difference between your benchmark and my measurements. My results were from measurements accumulated within a running Xournal++. The difference is that in Xournal++, the vast majority of the time the user is moving at most one row and/or column away. My results are consistently like this (for a 500-page PDF). Here are some worst cases for binary search. The binary search is nice and clean: it's the one to go with. Edit: yes, these were measured from a release build (i.e. with -O3).
Mmh, the new STL algorithm is mostly the same as the old index-based one. With optimizations enabled it takes roughly the same time, or is up to 10 nanoseconds faster. I think both old benchmarks were created with no optimizations enabled and/or in DEBUG mode, where iterators, data structures, and functions are built with a big boilerplate of debug and validation code, which also changes the inlining behavior. The times of the two algorithms should be nearly the same, because they are the same algorithm. And yes, stick with binary_search: it has a very stable and small run time even in the worst-case scenario, while the linear search has an extremely bad worst-case run time.
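As a guard against that pitfall, a timing harness along these lines, built with `-O3 -DNDEBUG`, measures the optimized code rather than the debug iterator checks (the data and sizes here are made up for illustration):

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> offsets(10000);
    std::iota(offsets.begin(), offsets.end(), 0);  // dummy sorted "page offsets"

    volatile long sink = 0;        // keeps -O3 from deleting the searches
    constexpr int iters = 1000000;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        auto it = std::lower_bound(offsets.begin(), offsets.end(), i % 10000);
        sink = sink + *it;
    }
    auto end = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    std::printf("%.1f ns per lookup\n", static_cast<double>(ns) / iters);
}
```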
Since this is the last behavioral change going into 1.0.14, please make sure to test it thoroughly. |
Force-pushed from 7bba1a3 to 87573e3
The fast algorithm was too difficult to understand/debug and prone to error during future refactoring... so uncomment the concise STL-based binary search that was already there for testing and use it instead.
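For reference, the lookup this leaves behind boils down to one bounded search per axis; a hypothetical sketch (the struct and member names are assumptions, not the real Layout code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct LayoutSketch {
    std::vector<int> rowYStart;  // y where each row begins, ascending
    std::vector<int> colXStart;  // x where each column begins, ascending

    // Index of the row containing pixel y: the last entry <= y.
    std::size_t rowAt(int y) const {
        auto it = std::upper_bound(rowYStart.begin(), rowYStart.end(), y);
        return it == rowYStart.begin() ? 0 : (it - rowYStart.begin()) - 1;
    }

    // Index of the column containing pixel x, same scheme.
    std::size_t colAt(int x) const {
        auto it = std::upper_bound(colXStart.begin(), colXStart.end(), x);
        return it == colXStart.begin() ? 0 : (it - colXStart.begin()) - 1;
    }
};
```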