ReleaseAtLeastNPages always madvise pages with smallest addresses, causing pointless decommit for large spans · Issue #1574 · gperftools/gperftools

Open
JackieCui00 opened this issue Dec 2, 2024 · 4 comments

Comments

@JackieCui00

In PageHeap::ReleaseAtLeastNPages, we decommit spans starting from large_normal_.begin(), i.e. the span with the smallest length (and, on ties, the smallest start address), and then move them to large_returned_.

However, in PageHeap::AllocLarge, a span in large_returned_ wins whenever it is a strictly better fit than the best span in large_normal_, or an equal-length fit at a lower start address. As a result, the span we select is likely the very one we just decommitted in the previous PageHeap::ReleaseAtLeastNPages call.

This behavior can lead to repeated decommit/recommit cycles and performance degradation.

So is there any specific reason for decommitting spans from large_normal_.begin() instead of large_normal_.rbegin() in PageHeap::ReleaseAtLeastNPages?

code from PageHeap::ReleaseAtLeastNPages :

gperftools/src/page_heap.cc

Lines 612 to 622 in ab49985

```cpp
  if (release_index_ == kMaxPages) {
    if (large_normal_.empty()) {
      continue;
    }
    s = (large_normal_.begin())->span;
  } else {
    SpanList* slist = &free_[release_index_];
    if (DLL_IsEmpty(&slist->normal)) {
      continue;
    }
    s = slist->normal.prev;
```

code from PageHeap::AllocLarge:

gperftools/src/page_heap.cc

Lines 289 to 298 in ab49985

```cpp
  // Try to find better fit from RETURNED spans.
  place = large_returned_.upper_bound(SpanPtrWithLength(&bound));
  if (place != large_returned_.end()) {
    Span *c = place->span;
    ASSERT(c->location == Span::ON_RETURNED_FREELIST);
    if (best_normal == nullptr
        || c->length < best->length
        || (c->length == best->length && c->start < best->start))
      best = place->span;
  }
```

@alk
Contributor
alk commented Dec 4, 2024

Thanks for raising this. Sure, we could try rbegin(). But then we'd release the largest chunks first, which has its own imperfections.

Perhaps the logic of preferring a slightly smaller span (or a same-sized one at a lower address) when finding spans is the logic to fix. I welcome you to experiment and propose your change(s).

@alk
Contributor
alk commented Dec 4, 2024

I.e. the cost of touching returned memory is not just extra memory usage but also real minor page faults. Those are in the ~microsecond range per page, which is a high cost.

@JackieCui00
Author

@alk Thanks for your quick response.

> Thanks for raising this. Sure, we could try to do rbegin. But then we'd return the largest chunks first. It has its own imperfections.
>
> Perhaps the logic of preferring slightly smaller (or same sized by lower address) when finding spans is the logic to be fixed. I welcome you to experiment and propose your change(s).

I understand your concerns. I'll think about improving the strategy in PageHeap::AllocLarge, and do some tests and comparisons.

@JackieCui00
Author

Sorry, I accidentally pressed 'Close with comment'.
