Simplify layout get view at by JJones780 · Pull Request #1443 · xournalpp/xournalpp

Simplify layout get view at #1443


Merged · 3 commits · Aug 31, 2019

Conversation

JJones780 (Collaborator):

The fast algorithm was too difficult to understand/debug and prone to error on future code refactoring... so uncomment the concise stl-based binary search which was already there for testing and use it instead.
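The "concise STL-based binary search" the PR refers to can be sketched roughly like this, assuming the layout keeps a sorted vector of cumulative row-boundary pixel offsets. The identifiers and data layout here are illustrative assumptions, not the actual Xournal++ code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Given sorted cumulative pixel offsets of row boundaries, find the
// index of the row containing pixel coordinate y. Hypothetical sketch;
// names do not match the real Layout::getViewAt implementation.
std::size_t rowAt(const std::vector<int>& rowEnds, int y) {
    // std::lower_bound: first element >= y marks the row containing y.
    auto it = std::lower_bound(rowEnds.begin(), rowEnds.end(), y);
    if (it == rowEnds.end()) {
        return rowEnds.size() - 1;  // clamp to the last row
    }
    return static_cast<std::size_t>(it - rowEnds.begin());
}
```

Because the offsets are sorted, `std::lower_bound` replaces a hand-rolled scan with an O(log n) search and no manual index bookkeeping, which is the readability argument made above.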

@Febbe (Collaborator) commented Aug 26, 2019:

Mhh, I simplified it already. And I tested it on all 2^32^2 combinations; it's always the same.

@Febbe (Collaborator) commented Aug 26, 2019:

And as I understand the comment there, the binary search was replaced because it was too slow.

@Technius (Member) commented Aug 27, 2019:

Let's see some benchmarks and numbers before we argue about performance, otherwise we're just guessing.

Edit: some more questions/points.

  • There's some debugging code there: looks like benchmarking code?
  • Does this change existing behavior?

@JJones780 force-pushed the simplifyLayoutGetViewAt branch from 1b05e5c to 2f2ab77 on August 27, 2019 08:21
@JJones780 force-pushed the simplifyLayoutGetViewAt branch from 2f2ab77 to d561d14 on August 27, 2019 08:24
@JJones780 (Collaborator, Author):

Now I think I know what happened.
Yes, the commented-out binary search was 3× slower. The "overhead" mentioned in the comment referred to STL overhead.

The more complicated STL version which recently replaced it was almost 8× slower.

@JJones780 (Collaborator, Author) commented Aug 27, 2019:

@Technius, yes, the benchmarking code is already in there, in the first commit. It's copy-pasted from the original benchmarking (hence the leftover plotting support). I've force-pushed to remove the bits you saw left over in the last commit. Thanks.
Here's one of the graphs from the original speed tests.
Binary is the only STL solution appearing there as this was from March.
xd4.pdf

All options have been tested to make sure they give the same results. A size_t issue had affected the original algorithm, but that was fixed before taking the new measurements.

@Febbe (Collaborator) commented Aug 28, 2019:

Can you test it with -O3? The STL functions should have been inlined.

@Febbe (Collaborator) commented Aug 29, 2019:

I've run a benchmark; there is no difference between the new STL-iterator-style linear search and the old index-based version, so there is no STL overhead. The binary search has a stable O(log n) run time and stays small for all vector sizes (~60 to ~120 nanoseconds). The linear search times are smaller for distances below 100 to 150 pages, but are very high for a distance of 10000 pages (~8000 ns).
search_bench.zip

I think the best solution is to use the binary search: the difference between the algorithms is very small between 0 and 150 pages, and above that the linear search performs badly, though still acceptably (~8 µs for a 10000-page distance search).

I assume the earlier benchmarks weren't created correctly (i.e. without optimizations turned on).
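The two strategies under comparison can be reduced to a minimal sketch (hypothetical names, not the project's code): a linear scan whose cost grows with the distance scanned, versus `std::lower_bound`, which is O(log n) regardless of where the target lies:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Both functions return the index of the first element >= target in
// sorted data. Illustrative sketch of the benchmarked strategies only.

// Linear scan: cheap when the answer is near the start, expensive when
// it is thousands of elements away (the ~8000 ns worst case above).
std::size_t linearSearch(const std::vector<int>& v, int target) {
    std::size_t i = 0;
    while (i < v.size() && v[i] < target) ++i;
    return i;
}

// Binary search: stable O(log n) cost for any target position.
std::size_t binarySearch(const std::vector<int>& v, int target) {
    return static_cast<std::size_t>(
        std::lower_bound(v.begin(), v.end(), target) - v.begin());
}
```

To reproduce a comparison like the one attached, time each function with `std::chrono::steady_clock` and build with optimizations enabled (-O2/-O3); as noted in this thread, debug-mode STL iterators add validation overhead that skews the numbers.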

Febbe previously requested changes Aug 29, 2019
@JJones780 (Collaborator, Author) commented Aug 29, 2019:

I think I see the difference between your benchmark and my measurements. My results were from measurements accumulated within a running Xournal++. The difference is that in Xournal++, the vast majority of the time the user is moving at most one row and/or column away.

My results are consistently like this (for a 500-page PDF), in ms where lower is better:

  2.40208e+06  new STL iterator
  1e+06        old index based (from last week)
  1.19468e+06  binary search

Here are some worst cases for binary search:

  index  agreeR       agreeC  new          old          binary
  6259   555=555=555  0=0=0   4.48116e+06  1.00205e+06  1.63577e+06
  6448   301=301=301  0=0=0   4.72137e+06  1.00002e+06  2.08262e+06

The binary search is nice and clean - it's the one to go with.

Edit: yes, these were measured from a release build (i.e. with -O3).

@Febbe (Collaborator) commented Aug 29, 2019:

Mmh, the new STL algorithm is mostly the same as the old index-based one. With optimizations enabled it takes roughly the same time, or is up to 10 nanoseconds faster. I think both old benchmarks were created with optimizations disabled and/or in DEBUG mode; in that case iterators, data structures, and functions are built with a big boilerplate of debug and validation code, which also changes the inlining behavior.

The measured times of the two algorithms should be nearly the same, because they are essentially the same algorithm.

And yes, stick with binary_search, because it has a very stable and small run time even in the worst-case scenario. The linear search has an extremely bad worst-case run time.

@Febbe Febbe added this to the v1.0.14 milestone Aug 29, 2019
@Technius (Member):

Since this is the last behavioral change going into 1.0.14, please make sure to test it thoroughly.

@JJones780 force-pushed the simplifyLayoutGetViewAt branch from 7bba1a3 to 87573e3 on August 31, 2019 08:19
@JJones780 dismissed Febbe's stale review on August 31, 2019 08:31: changes made

@JJones780 merged commit 7a728c6 into xournalpp:master Aug 31, 2019
3 participants