Allocator bulk deallocation flush threshold by lnkuiper · Pull Request #13796 · duckdb/duckdb · GitHub

Allocator bulk deallocation flush threshold #13796


Merged

Conversation

@lnkuiper (Contributor) commented Sep 6, 2024

When we are close to the memory limit and request a large allocation, e.g., for a hash table, we evict and deallocate many small blocks. It may take some time before all of this memory is actually returned to the OS because allocators like jemalloc will keep the memory around for a while. This will cause our RSS to go over the memory limit once we do the large allocation.

This PR adds a threshold parameter: if a bulk deallocation is larger than the threshold, we flush outstanding unused allocations back to the OS before performing the large allocation.
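The mechanism can be sketched as follows. This is a minimal, self-contained illustration of the idea, not DuckDB's actual code: the names `FLUSH_THRESHOLD`, `FlushOutstandingAllocations`, and `BulkDeallocate` are hypothetical, and the flush function is a stand-in for wherever an allocator purge (e.g., a jemalloc arena purge, or glibc's `malloc_trim(0)`) would go.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

// Hypothetical knob mirroring the PR's threshold parameter (value illustrative).
static constexpr size_t FLUSH_THRESHOLD = 128ULL * 1024 * 1024; // 128 MiB

// Stand-in for returning the allocator's retained-but-unused memory to the OS.
// With jemalloc this would be an arena purge; with glibc malloc, malloc_trim(0)
// plays a similar role.
static void FlushOutstandingAllocations() {
	std::puts("flushing unused allocator memory back to the OS");
}

// Free a batch of evicted blocks. If the batch exceeds the threshold, flush
// so that RSS actually drops before the upcoming large allocation.
static void BulkDeallocate(std::vector<std::pair<void *, size_t>> &blocks) {
	size_t total = 0;
	for (auto &block : blocks) {
		total += block.second;
		std::free(block.first);
	}
	blocks.clear();
	if (total >= FLUSH_THRESHOLD) {
		FlushOutstandingAllocations();
	}
}

int main() {
	// Simulate evicting many small blocks to make room for a big allocation.
	std::vector<std::pair<void *, size_t>> evicted;
	for (int i = 0; i < 1024; i++) {
		constexpr size_t block_size = 256 * 1024; // 256 KiB per block
		evicted.emplace_back(std::malloc(block_size), block_size);
	}
	BulkDeallocate(evicted); // 256 MiB total >= threshold, so we flush
	void *big = std::malloc(512ULL * 1024 * 1024); // the large allocation
	std::free(big);
	return 0;
}
```

Gating the flush on a threshold keeps the common path cheap: flushing on every deallocation would defeat the purpose of the allocator's caching, while flushing only before unusually large bulk deallocations pays the purge cost exactly when exceeding the memory limit is most likely.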

@Mytherin Mytherin merged commit fa5c2fe into duckdb:main Sep 8, 2024
39 checks passed
@Mytherin (Collaborator) commented Sep 8, 2024

Thanks!

github-actions bot pushed a commit to duckdb/duckdb-r that referenced this pull request Sep 11, 2024
Merge pull request duckdb/duckdb#13796 from lnkuiper/allocator_bulk_deallocation_flush_threshold
github-actions bot added a commit to duckdb/duckdb-r that referenced this pull request Sep 11, 2024
Merge pull request duckdb/duckdb#13796 from lnkuiper/allocator_bulk_deallocation_flush_threshold (#377)

Co-authored-by: krlmlr <krlmlr@users.noreply.github.com>