Consider mounting using `compress-force` instead of `compress` · Issue #1 · Trevo525/btrfdeck
This repository has been archived by the owner on Aug 19, 2022. It is now read-only.

Consider mounting using compress-force instead of compress #1

Closed
taotien opened this issue Mar 16, 2022 · 4 comments
Comments

@taotien
Contributor
taotien commented Mar 16, 2022

Hello! It's been recommended that when using zstd, mounting with `compress-force` is slightly better than `compress`.

The reason is that zstd's compressibility check is better than btrfs's internal one, allowing for higher compression ratios without sacrificing anything: zstd won't attempt to compress data if it determines on its own that compression would produce a larger result.
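For reference, here's a sketch of what the suggested option could look like in `/etc/fstab` (the UUID, mount point, and zstd level are illustrative, not taken from this repo):

```shell
# Illustrative /etc/fstab entry, not from btrfdeck itself.
# compress-force=zstd:3 makes btrfs hand every extent to zstd,
# relying on zstd's own check to skip incompressible data.
UUID=xxxx-xxxx  /home/deck/SteamLibrary  btrfs  defaults,compress-force=zstd:3  0  0

# Or remount an existing filesystem to try it out (affects new writes only):
# mount -o remount,compress-force=zstd:3 /home/deck/SteamLibrary
```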

@taotien taotien changed the title Conside mount opts using compress-force instead of compress Consider mounting using compress-force instead of compress Mar 16, 2022
@Trevo525
Owner
Trevo525 commented Mar 17, 2022

Do you have any sources that you based this on? From what I read about `compress-force`, it means that even if "compression" makes the file bigger, it will still compress it, like you said. I would love to make the change if you can provide some sources proving to me that zstd won't actually do that. :-)

@taotien
Contributor Author
taotien commented Mar 17, 2022

Yes, the btrfs wiki does state that. I can't find the original source of where I got this info (and any sources I can find right now also lack citations), though I can offer some evidence and tools to help with testing. Sadly I'm in the Q2 shipping bracket, so I can't help with that just yet.

There's a utility called compsize, recommended on the btrfs wiki, that shows the compression ratio and the algorithm used for compression. It also shows what isn't compressed, which suggests that some files are stored uncompressed even with `-force` enabled. Here's what that looks like for my Steam library, mounted with `compress-force=zstd:3`:

```
Processed 51383 files, 1084399 regular extents (1084869 refs), 12836 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       78%      144G         183G         183G
none       100%       71G          71G          71G
zstd        64%       72G         112G         112G
```

From what I can gather from the btrfs manpage, the reason they recommend against `-force` is that it spawns a thread of the compressor rather than using the heuristics btrfs caches internally. Unfortunately there are some contradictory/poorly written passages that confuse me, but I think what they mean in some parts is: "compression libraries could have a way to test compressibility, but that's not guaranteed, so we don't recommend relying on it".

My reasoning for using it is that I don't care about write performance, especially for games, so I'd much rather get as much storage as I can (and potentially avoid read bottlenecks). There are too many variables on my desktop to do proper testing (RAID0, background processes, etc.), but I haven't had any issues so far.

Can't wait to get my hands on the Deck, I'll definitely be testing it out for myself! I'll also look around for more credible sources when I have the chance.

@ktully
ktully commented Apr 11, 2022

The btrfs wiki suggests that `compress` aborts compression if the first portion comes out larger, whereas `compress-force` attempts to compress all portions (potentially wasting CPU on writes), but will still discard compressed results that end up larger overall:

> unless the filesystem is mounted with -o compress-force. In that case compression will always be attempted on the file only to be later discarded.

This ties in with the inode code. FORCE_COMPRESS (set from compress-force in super.c) always wins, as it's the first check in deciding whether to attempt compression, whereas plain `compress` falls through to a heuristic check later in that function. But in all compression cases there is a post-compression check that prefers the uncompressed data if it is smaller.

That's generic BTRFS behaviour from the code and docs. But forcing on zstd seems particularly safe, since it appears that zstd also has code internally to abort compression on pages that are becoming larger.

Disclaimer: the above is my impression from a few minutes skimming the code on github - but I've linked to the key code logic that supports what we're saying. I haven't got the repo checked out, and I'm not a filesystem expert, but I have written embedded C code professionally.

@Trevo525
Owner

Thank you both for your contributions! I have made the change and will close this issue and link to the commit.

If, in the future, someone has input to say that force is actually wrong, feel free to re-open this issue and I will reconsider.
