Investigate memory consumption when compressing large files #35
Closed
@mhx

Description

I have to admit I've been doing most of my compression tasks on machines with 64GB of memory, so optimizing for low memory consumption hasn't really been a priority yet. There are some knobs you might be able to turn, though. I'm not sure large files per se are an issue, but a large number of files definitely is. You might be able to tweak --memory-limit a bit, which determines how many uncompressed blocks can be queued. If you lower this limit, the compressor pool may run out of blocks more quickly, resulting in overall slower compression. Reducing the number of workers (-N) might also help a bit.
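
For illustration, a lower-memory invocation of the kind described above might look like the following sketch; the paths are placeholders and the values just show the knobs being turned down, not recommended settings:

```
# Cap the queue of uncompressed blocks at ~1 MiB (-L1m, the short form of
# --memory-limit) and use a single compression worker (-N1); both reduce
# peak memory at the cost of slower compression.
mkdwarfs -i /path/to/input -o output.dwarfs -L1m -N1
```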

A small update on this (apologies that this is on an unrelated issue). I did some experimentation and found that lowering the memory limit and the number of workers works in some instances but not in others. Large files seem to be the biggest holdup, in particular one case where I tried to add a 3.1 GB file that seemingly had no way of compressing via dwarfs with my 16 GB of memory (even with very low options like -L1m -N1).

What I did find instead was that using the -l0 option and then recompressing the image works in these cases without issue. Creating the initial image with -S24 results in very well recompressed files in these instances: the 3.1 GB file compressed down to 2.3 GB, whereas the default block size for -l0 resulted in a 2.6 GB file (roughly what mksquashfs -comp zstd -b 1M -Xcompression-level 22 also gave me).
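
As a hedged sketch of that two-step approach (paths are placeholders, and the second command assumes mkdwarfs' --recompress mode with its default compression settings):

```
# Step 1: build the image with minimal compression (-l0) and 2^24-byte
# (16 MiB) blocks, keeping memory usage low while the input is segmented.
mkdwarfs -i /path/to/input -o image.dwarfs -l0 -S24

# Step 2: recompress the existing image; blocks are recompressed one at a
# time, so peak memory stays bounded by block size and worker count.
mkdwarfs -i image.dwarfs -o image-final.dwarfs --recompress
```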

Originally posted by @Phantop in #33 (comment)

Metadata

Labels

bug: Something isn't working
