When and why to deprecate filesystems
Removing support for old hardware is difficult enough, but there does often come a point where it becomes possible. If a particular device has been unavailable for years and nobody can point to one in operation, it may be time to remove the support from the kernel. Another strong sign is a complete lack of maintainer interest; that led to the recent decision to remove support for the nds32 architecture, for example. Filesystems can be harder, though; they are independent of the hardware and can thus live far longer than any particular device type. Users can hold onto a familiar filesystem type for a long time after most of the world has moved on.
Reiserfs is certainly a case in point; this filesystem was first covered in LWN in 1999; it found its way into the 2.4.1 kernel in 2001 despite a fair amount of opposition based on the allegedly stable nature of 2.4 releases. There were a number of reasons for the inclusion of Reiserfs; chief among them, perhaps, was that it was the first Linux filesystem to support journaling. This filesystem attracted a fair amount of interest in its early days and some distributions adopted it as the default choice, but its own developers quickly moved on to other things; by 2004, Hans Reiser was arguing against enhancing Reiserfs, saying that his new Reiser4 filesystem should be adopted instead. In the end, Reiser4 was never merged, but Reiserfs lives on in the kernel.
Recently, Matthew Wilcox observed that maintenance of Reiserfs appears to have stopped. So he naturally wondered if there were still any active Reiserfs users, or whether perhaps the filesystem could be removed. Keeping it around has costs associated with it, after all, and it is getting in the way of some enhancements he would like to make.
As noted above, there is not normally a natural point where kernel developers can conclude that there is no value in keeping a filesystem implementation in the kernel. Reiserfs still works for any users still running it, and they would be within their rights to see its removal as causing just the sort of regression that the kernel community so loudly disallows. So the best that can usually be done is to place a prominent deprecation warning in the kernel itself and wait a few years; if opposition to the removal does not materialize during that time, it is probably safe to do so. A patch adding that deprecation has been posted and seems likely to be merged for the 5.18 kernel release.
During the discussion, Byron Stanoszek did surface to confess his ongoing use of Reiserfs and desire to see it supported for a bit longer. Jan Kara responded by noting the limited nature of the support Reiserfs gets now:
Frankly the reality of reiserfs is that it gets practically no development time and little testing. Now this would not be a big problem on its own because what used to work should keep working but the rest of the common filesystem infrastructure keeps moving (e.g. with Matthew's page cache changes, new mount API, ...) and so it can happen that with a lack of testing & development reiserfs will break without us noticing. So I would not consider reiserfs a particularly safe choice these days and rather consider migration to some other filesystem.
Kara did offer to extend the deprecation period for Reiserfs, though, if it were really necessary.
As it happens, Stanoszek raised an issue that plays into the timing of the deprecation of Reiserfs, and highlights one of the issues associated with deprecation in general. Even the most dedicated users of Reiserfs will eventually find themselves wanting to move on because that filesystem has a year-2038 problem. In January of that year, the timestamps used within Reiserfs will overflow, leading to overall confusion. Since these timestamps are buried deeply within the on-disk filesystem format, they can't be fixed with a simple code tweak. Evacuating any data from Reiserfs filesystems before then seems like a prudent thing to do.
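The deadline comes straight from the arithmetic: a signed 32-bit counter of seconds since the Unix epoch tops out at 2^31 - 1, early on January 19, 2038. A quick sketch (Python used here purely for illustration):

```python
from datetime import datetime, timezone

# A signed 32-bit seconds-since-1970 field can hold at most 2**31 - 1.
MAX_S32 = 2**31 - 1

# The last representable moment; one second later the value wraps
# negative, i.e. to a date before 1970.
print(datetime.fromtimestamp(MAX_S32, tz=timezone.utc).isoformat())
# 2038-01-19T03:14:07+00:00
```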
This might not seem like an urgent problem, since the deadline is over 15 years away. But there are reasons to take action now, which is why Dave Chinner asserted that the time had come to deprecate all filesystems that are not 2038-ready. He later explained in more detail:
With that in mind, this is why we've already deprecated non-y2038k compliant functionality in XFS so that enterprise kernels can mark it deprecated in their next major (N + 1) release which will be supported for 10 years. They can then remove that support in the N+2 major release after that (which is probably at least 5 years down the track) so that the support window for non-compliant functionality does not run past y2038k.
The point is that deprecating a filesystem in the mainline kernel does not change the fact that it was included in recent long-term-support kernels. The 5.15 kernel will, if past patterns hold, be supported until 2028; it will have Reiserfs in it that whole time. As Wilcox pointed out, that serves as a sort of lifeline for users who cannot move away from the filesystem immediately, which may be a good thing. But it also poses an ongoing problem for developers charged with maintaining those kernels.
That is because supporting deprecated code in a long-term-stable kernel will be increasingly difficult. Ted Ts'o noted that maintainers of stable kernels will not be able to rely on upstream for fixes anymore, and any fixes that are made may not propagate to all kernels due to the lack of a central location for them. So doing high-quality maintenance of Reiserfs for six years, after it has been removed from the mainline, will be a challenge. Any enterprise kernels based on that stable release may include Reiserfs for longer than that. Enterprise distributors have to think in terms of supporting current kernel features for as long as 15 years; if they want to avoid the additional challenge of supporting year-2038-incapable filesystems after that date, the deprecation process needs to start now.
The implication is that we are likely to see other filesystem deprecations before too long. The NFSv3 protocol, for example, uses 32-bit time values and will break in 2038. The ext3 filesystem has similar problems; in this case, the data can be read as an ext4 filesystem, but the on-disk format simply cannot represent times correctly. So, like the XFSv4 on-disk format, which is also not year-2038 capable, ext3 needs to be migrated away from.
These deprecations will likely cause some unhappiness among users who have stuck with a working solution for years; switching to a new filesystem type can make people nervous. But the alternative is worse. Year 2038 provides a nice (and legitimate) excuse for developers to remove some old and unloved filesystems a bit more quickly than they might otherwise be able to. Once that passes, deciding when old filesystems should go will, once again, be a more difficult problem.
Index entries for this article: Kernel: Filesystems/Deprecation policy
Posted Mar 7, 2022 17:32 UTC (Mon)
by post-factum (subscriber, #53836)
Posted Mar 7, 2022 18:36 UTC (Mon)
by linuxrocks123 (guest, #34648)
I don't use it on my newer MythTV systems since I later discovered that ext4-without-journal works fine for MythTV. ext4-without-journal is my go-to filesystem for everything now, although I really wish ext4 would get COW support.
Posted Mar 7, 2022 19:15 UTC (Mon)
by calumapplepie (guest, #143655)
Ext4 is so reliable that even if the first 600 MB of your workhorse laptop's disk are overwritten with, say, the Debian Testing installation ISO because you wrote of=/dev/sda instead of of=/dev/sdb, you can recover almost all your data and have a fully functional system again with a bit of effort, some abuse of fsck, and a bit of quality time with debsums. I don't think I can say the same about BTRFS, though I can't say I've tested that theory.
Posted Mar 7, 2022 21:01 UTC (Mon)
by Paf (subscriber, #91811)
No, it means checking the extent reference count, no need to know who's using it. It's not expensive. The other reasons you gave are real downsides, though.
But I do agree that EXT4 is not going to suddenly become a COW file system. For one thing, a COW file system is a different design point that is very far from globally superior to an extent based file system. It is different, with advantages and disadvantages.
It could *conceivably* gain some limited COW capabilities like I believe XFS has, where it can be done optionally for specific files.
Posted Mar 8, 2022 6:16 UTC (Tue)
by linuxrocks123 (guest, #34648)
Posted Mar 16, 2022 3:14 UTC (Wed)
by nevyn (guest, #33129)
Posted Mar 20, 2022 6:50 UTC (Sun)
by oldtomas (guest, #72579)
Yes, but. If you look at the garbage collector community you soon realize that /maintaining/ those refcounts is expensive. It means one write access for each change in refcount.
That's why when GC performance starts to matter, most drop the first (naive) refcount implementation.
For file systems things won't be that different.
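As an illustration of that cost, here is a deliberately naive model (all names invented for this sketch) in which every change to a block's reference count is charged as one metadata write, the way a first-cut refcounting design would behave:

```python
# Toy model: a refcounting (COW-style) space manager where each
# refcount change implies a write to the on-disk count structure.

class RefcountedSpace:
    def __init__(self):
        self.refcount = {}   # block number -> reference count
        self.writes = 0      # metadata writes performed so far

    def add_ref(self, block):
        self.refcount[block] = self.refcount.get(block, 0) + 1
        self.writes += 1     # the on-disk count must be updated

    def drop_ref(self, block):
        self.refcount[block] -= 1
        self.writes += 1     # ...and updated again on every decrement

space = RefcountedSpace()
blocks = range(1000)         # one file referencing 1000 blocks
for b in blocks:
    space.add_ref(b)
for b in blocks:             # deleting the file touches every count
    space.drop_ref(b)
print(space.writes)          # 2000 metadata updates over the file's life
```

Real filesystems batch and journal these updates, and often keep counts per extent rather than per block, but the write amplification the comment points at survives those optimizations in reduced form.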
Posted Mar 7, 2022 22:08 UTC (Mon)
by amarao (subscriber, #87073)
Posted Mar 8, 2022 6:03 UTC (Tue)
by calumapplepie (guest, #143655)
Also, I object to the implication that any dark magic was required. Magic, yes: but I didn't need anything too dark and evil. Of course, it would've been nice to know that BEFORE I sacrificed those lambs.
Posted Mar 8, 2022 12:29 UTC (Tue)
by Wol (subscriber, #4433)
So I now have an 8TB backup drive. I've still to automate backups with systemd timers, but as I run ext4 over lvm (over raid over dm-integrity for my live system) over spinning rust, I just take lvm snapshots as my first stage backup, followed by rsync --in-place to lvm snapshots of my backups.
Just buy the biggest, cheapest (and yes SMR is fine for this) drive you can get your hands on, and set up rsync backups.
Cheers,
Posted Mar 8, 2022 13:01 UTC (Tue)
by geert (subscriber, #98403)
Posted Mar 8, 2022 15:34 UTC (Tue)
by Wol (subscriber, #4433)
Yes I ought to think about having multiple backups, but one backup is a major advance on none.
Cheers,
Posted Mar 8, 2022 16:04 UTC (Tue)
by nix (subscriber, #2304)
Personally I'd be lost without historical backups as well -- every so often I find I need something "as it was before Sep 15 last year" or something like that, and with proper backups you can get that sort of thing in minutes. This is not even hard. All the tools exist: all you need is a bit of scripting and a cronjob.
Posted Mar 8, 2022 16:54 UTC (Tue)
by Wol (subscriber, #4433)
As for historical, that's why I use lvm. Take a snapshot, do the inplace rsync, and I get a full backup for the price of an incremental.
I don't plan to clean out my incrementals, unless I'm running short of disk space.
Cheers,
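For those without LVM, the same full-backup-at-incremental-cost effect can be approximated with hard links, which is essentially what rsync's --link-dest option automates. A simplified sketch (flat directories only, with "unchanged" judged by size and mtime):

```python
import os
import shutil

def snapshot(src, prev, dest):
    """Copy the flat directory src to dest; files unchanged since the
    prev snapshot become hard links into it rather than fresh copies."""
    os.makedirs(dest)
    for name in os.listdir(src):
        s = os.path.join(src, name)
        d = os.path.join(dest, name)
        p = os.path.join(prev, name) if prev else None
        if p and os.path.exists(p) \
                and os.path.getmtime(p) == os.path.getmtime(s) \
                and os.path.getsize(p) == os.path.getsize(s):
            os.link(p, d)        # unchanged file: costs no new data blocks
        else:
            shutil.copy2(s, d)   # new or changed file: a real copy
```

Every dest directory then looks like a full backup while sharing storage with its predecessors, which is the same effect as the snapshot-plus-in-place-rsync scheme.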
Posted Mar 8, 2022 17:11 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
If you're limiting yourself to critical files, or if you're willing to spend more, you could back up to the cloud. It's definitely offsite, and it has its own layers of protection. If you're serious, you'd want some solution that would let you encrypt your files locally, so anyone who gets a copy of your cloud files won't be able to read all your secrets. Of course you probably want that for any off-site backup.
Posted Mar 8, 2022 17:21 UTC (Tue)
by geert (subscriber, #98403)
Posted Mar 8, 2022 17:36 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
I keep an off-site backup in a locked desk drawer at work, but I still keep that encrypted. Transparent encryption is easy enough that it's silly not to use it on any USB hard drive whose contents you care at all about keeping secret.
Posted Mar 8, 2022 18:16 UTC (Tue)
by smurf (subscriber, #17840)
Posted Mar 8, 2022 23:10 UTC (Tue)
by Wol (subscriber, #4433)
How does btrfs snapshot magic everything onto a different hard drive?
And to the best of my knowledge, btrfs raid-5 is still in experimental eat-your-data status ...
I use lvm on my live raid-5 array to take live snapshots, and lvm/rsync on my backup drive to take full/incremental backups.
Cheers,
Posted Mar 9, 2022 6:08 UTC (Wed)
by calumapplepie (guest, #143655)
Posted Mar 10, 2022 2:31 UTC (Thu)
by bartoc (guest, #124262)
Tbh ZFS' raid5 also leaves something to be desired, since it avoids the write hole by doing things that can have pretty bad performance consequences down the line (esp. with how unwilling ZFS is to muck around with stuff on disk, in general). I'm hoping bcachefs' approach pays off (it does raid5 by initially writing the data in raid10 (or at least mirrored), committing that, then later rewriting that mirror to raid5 and atomically updating the filesystem metadata once it's done).
Posted Mar 14, 2022 15:10 UTC (Mon)
by hkario (subscriber, #94864)
also, HDDs lying about bad sectors is not only a RAID-5 issue, it's just as likely to impact RAID-1 setups
Posted Mar 9, 2022 21:27 UTC (Wed)
by nix (subscriber, #2304)
You give a drive to a friend you see sometimes and swap it. (I do this with several friends and relatives living at different distances from me.)
You really don't want to use lvm snapshots to do long-term historical backups: IIRC, writes slow down proportional to the number of snapshots in place. For comparison and to see what you can do with an actual automated backup system, I have 9103 backup snapshots currently accessible in my onsite backup (mostly once-every-three-hourly backups of /home). Most of the backups took under a minute to run (and, obviously, happened with no human intervention at all). LVM would grind completely to a halt long before you reached *that* sort of scale.
> I don't plan to clean out my incrementals, unless I'm running short of disk space.
Quite so -- and since bup uses rsync-style deduplication and also compresses the backups, that's going to be a long time.
Posted Mar 8, 2022 7:42 UTC (Tue)
by ibukanov (guest, #3942)
Posted Mar 8, 2022 22:20 UTC (Tue)
by clump (subscriber, #27801)
Posted Mar 8, 2022 4:07 UTC (Tue)
by kschendel (subscriber, #20465)
As I recall, JFS's achilles heel (or, one of them) was that it was horrible at allocating blocks to growing files when there was a lot of concurrent writing / allocating. I worked up a patch set that made the allocation a lot better under those conditions, but we ran into sporadic rare corruption errors. We were trying to figure out whether they were hardware or my patch set, when the company was bought out ...
Posted Mar 18, 2022 19:50 UTC (Fri)
by cypherpunks2 (guest, #152408)
Posted Mar 7, 2022 18:48 UTC (Mon)
by error27 (subscriber, #8346)
The advantage is if the UML kernel crashes, that's just an application. It doesn't take down your whole system. The disadvantage is that it would be slower but if it's a USB disk, then it's not going to be fast in the first place. We would only do this for less used and less trusted filesystems.
It wouldn't be super hard to create a set of tools which would remove the complexity for the user. Everyone who hears the idea is like, "Yes. That sounds very easy to do from a technical perspective which makes it too boring to do." So no one has implemented the idea. But it's a good idea.
Posted Mar 7, 2022 19:13 UTC (Mon)
by willy (subscriber, #9762)
Posted Mar 7, 2022 19:39 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
Posted Mar 7, 2022 20:03 UTC (Mon)
by dezgeg (subscriber, #92243)
Sadly, last commit seems to be from 2020: https://github.com/lkl/linux
Posted Mar 8, 2022 11:24 UTC (Tue)
by eru (subscriber, #2753)
Posted Mar 8, 2022 13:05 UTC (Tue)
by geert (subscriber, #98403)
Posted Mar 12, 2022 19:27 UTC (Sat)
by rwmj (subscriber, #5474)
Posted Mar 7, 2022 21:11 UTC (Mon)
by arnd (subscriber, #8866)
Side note: reiserfs doesn't actually have a y2038 problem at all, it uses unsigned 32-bit timestamps with a range from 1970 to 2106. https://kernelnewbies.org/y2038/vfs lists the limits, I think only ext2/ext3, coda, exofs, hostfs, and ufs1 still have the y2038 problem, and a few others have been changed to interpret the seconds as unsigned.
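The two interpretations of the same all-ones 32-bit field are easy to compare (an illustrative snippet, not from the discussion):

```python
from datetime import datetime, timezone

raw = 0xFFFFFFFF  # an all-ones 32-bit timestamp field on disk

# Read as signed, the value is -1: one second before the epoch.
print(datetime.fromtimestamp(raw - 2**32, tz=timezone.utc))
# 1969-12-31 23:59:59+00:00

# Read as unsigned, as reiserfs does, it reaches well into 2106.
print(datetime.fromtimestamp(raw, tz=timezone.utc))
# 2106-02-07 06:28:15+00:00
```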
Posted Mar 8, 2022 2:02 UTC (Tue)
by NYKevin (subscriber, #129325)
Posted Mar 8, 2022 8:03 UTC (Tue)
by pm215 (subscriber, #98099)
[*] I think of all these as basically the same thing, which may or may not be an incorrect mental model, but which I suspect is not an uncommon one...
Posted Mar 8, 2022 8:22 UTC (Tue)
by TomH (subscriber, #56149)
You can increase the size with "tune2fs -I 256 <device>", but a fsck needs to be done first, so the filesystem will need to be unmounted, and you probably want to read what the manual page says before trying it.
Posted Mar 8, 2022 9:02 UTC (Tue)
by arnd (subscriber, #8866)
Posted Mar 8, 2022 17:55 UTC (Tue)
by smurf (subscriber, #17840)
SCNR …
Posted Mar 8, 2022 19:37 UTC (Tue)
by pm215 (subscriber, #98099)
Posted Mar 8, 2022 8:48 UTC (Tue)
by arnd (subscriber, #8866)
Posted Mar 10, 2022 17:42 UTC (Thu)
by jccleaver (subscriber, #127418)
This really seems like something to bubble back up, at least in preparation for the actual post-2038 strategy. I can't think of a better solution for how to properly handle something that is perfectly readable but which has no way to encode properly-readable vital (i.e. security) metadata about an operation.
Just leave this feature flag disabled for now, and builders can enable it as desired; with a full flip in the years to come.
Posted Mar 7, 2022 22:41 UTC (Mon)
by jhoblitt (subscriber, #77733)
Posted Mar 8, 2022 11:27 UTC (Tue)
by jlayton (subscriber, #31672)
This is probably a good discussion for the upcoming NFS Bakeathon.
Posted Mar 12, 2022 19:29 UTC (Sat)
by rwmj (subscriber, #5474)
Posted Mar 12, 2022 20:25 UTC (Sat)
by jhoblitt (subscriber, #77733)
Posted Mar 8, 2022 1:46 UTC (Tue)
by pabs (subscriber, #43278)
Posted Mar 8, 2022 2:10 UTC (Tue)
by NYKevin (subscriber, #129325)
This only has the slightest chance of working if both the source and the destination FS's are effectively equivalent (e.g. both ext4 with the same set of mount options). Even then, what happens if you try to copy a file (not directory) with a link count greater than one? Does it just break the link, and make a brand-new file on the destination? Or does the kernel somehow know that you were also planning to copy another link to the same file, and therefore it needs to recreate the hard link on the other side when you get around to copying that second link? rsync has enough information to figure that out, but the kernel probably does not.
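The extra information rsync has is the (device, inode) pair of every file it has already sent: a later path with a link count above one and a previously seen inode is recreated as a link instead of copied again. A sketch of that bookkeeping (function name invented for illustration):

```python
import os
import shutil

def copy_preserving_links(paths, destdir):
    """Copy files into destdir, recreating hard links among them."""
    os.makedirs(destdir, exist_ok=True)
    seen = {}                             # (st_dev, st_ino) -> first copy
    for path in paths:
        st = os.stat(path)
        dest = os.path.join(destdir, os.path.basename(path))
        key = (st.st_dev, st.st_ino)
        if st.st_nlink > 1 and key in seen:
            os.link(seen[key], dest)      # same inode: re-link, don't copy
        else:
            shutil.copy2(path, dest)      # first sighting: a real copy
            seen[key] = dest
```

A kernel copy call handed one file at a time has no such table spanning the whole operation, which is exactly the problem described above.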
Posted Mar 8, 2022 2:18 UTC (Tue)
by pabs (subscriber, #43278)
Posted Mar 8, 2022 22:42 UTC (Tue)
by NYKevin (subscriber, #129325)
The problem is that, in practice, either it errors out every time a feature can't be copied and wasn't explicitly excluded, or else it silently swallows the error and loses your data. That might not sound so bad, until you realize that you're going to lose a lot of rather basic information in some cases; FAT in particular is extremely lossy, and will lose symlinks (and everything else that is not a "regular file" or directory), (probably) simple permissions, and almost certainly all forms of "extended" attributes such as ACLs. I also don't believe it supports hard links, and some particularly old versions don't even support subdirectories.
> A recursive option could let the kernel know that yes you want that link.
This opens up an enormous can of worms. What happens if the process receives a signal while it's inside a recursive kernelspace copy operation? Does the whole operation fail with EINTR, and you have to start over from scratch? What happens if file #2,758 can't be copied, do you just return EIO and make the application deal with it? More generally, how do you handle exceptional circumstances while the copy is taking place?
Posted Mar 10, 2022 2:23 UTC (Thu)
by bartoc (guest, #124262)
Posted Mar 8, 2022 16:38 UTC (Tue)
by stevie-oh (subscriber, #130795)
Mac OS' HFS had them first, there they are called "resource forks".
In fact, the entire reason NTFS has Alternate Data Streams is that Microsoft wanted to make it possible to use a Windows server as a file server for a group of Macs.
https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(ADS)
Posted Mar 9, 2022 7:44 UTC (Wed)
by eru (subscriber, #2753)
Posted Mar 10, 2022 2:26 UTC (Thu)
by bartoc (guest, #124262)
NTFS supports actual extended attributes too, so you have some risk of collision if you try and merge ADS and ntfs xattr.
Posted Mar 18, 2022 19:55 UTC (Fri)
by cypherpunks2 (guest, #152408)
Posted Mar 19, 2022 23:34 UTC (Sat)
by smurf (subscriber, #17840)
Posted Mar 8, 2022 2:36 UTC (Tue)
by dtlin (subscriber, #36537)
Posted Mar 10, 2022 2:27 UTC (Thu)
by bartoc (guest, #124262)
Posted Mar 10, 2022 16:38 UTC (Thu)
by smurf (subscriber, #17840)
Posted Mar 17, 2022 18:19 UTC (Thu)
by mrugiero (guest, #153040)
Posted Mar 18, 2022 8:21 UTC (Fri)
by smurf (subscriber, #17840)
Suppose I find an intractable ext4 bug, no biggie, I ask Theodore Ts'o. A problem with reiserfs, well, I can conceivably ask Mr. Reiser sometime next year, but I kind of doubt he's even been able to look at that code during the time he was imprisoned.
Also, the world has moved on, give me a reason why anybody would want to start using reiserfs today, instead of btrfs or ext4 or xfs or …
Posted Mar 18, 2022 14:52 UTC (Fri)
by mrugiero (guest, #153040)
Posted Mar 18, 2022 18:49 UTC (Fri)
by flussence (guest, #85566)
Posted Mar 17, 2022 14:23 UTC (Thu)
by mrugiero (guest, #153040)
Posted Mar 17, 2022 18:25 UTC (Thu)
by rmano (guest, #49886)
I still have CD-ROMs burned in extfs from the '90s lying around. The good part is that I don't think that I'll ever need them...
Posted Mar 17, 2022 20:58 UTC (Thu)
by mrugiero (guest, #153040)
Well, the one who wants to deprecate should make sure the replacement is in place, IMO :) (Which doesn't necessarily mean doing it themselves, but at least finding someone.)
Posted Mar 18, 2022 4:04 UTC (Fri)
by pabs (subscriber, #43278)
Posted Mar 18, 2022 8:20 UTC (Fri)
by smurf (subscriber, #17840)
It doesn't matter (much) whether that kernel shall compile to a mere FUSE helper that exports a reiserfs file system, or a full-blown kernel you'd run in a VM (mount the reiserfs image there and export it to the host via NFS or CIFS). Assuming it'll work in that then-current VM; that's most likely, IMHO, but by no means assured.
The solution (assuming that there is no reiserfs support library you can use as a shortcut) is to extract a libreiserfs from the kernel and remove all the kernel idiosyncrasies from it. At that point it's standard C plus libfuse, and thus presumably easier to keep up-to-date with whatever fun incompatibilities the C and/or FUSE people will throw in your path in the future.
Posted Mar 18, 2022 13:40 UTC (Fri)
by HelloWorld (guest, #56129)
Posted Mar 18, 2022 14:49 UTC (Fri)
by mrugiero (guest, #153040)
Posted Mar 18, 2022 16:49 UTC (Fri)
by farnz (subscriber, #17727)
Something like tar or rsync over the hypervisor's networking would work just fine. After all, I can tar up a filesystem on a 1980s vintage UNIX system, or run PKZIP on a 1980s vintage DOS system, and uncompress the resulting archive on a modern Linux PC.
Posted Mar 20, 2022 7:50 UTC (Sun)
by Shabbyx (guest, #104730)
These days, the EXT filesystem is the 'ol reliable' of linux filesystems. It's stable, easy to recover from crashes, simple enough in structure, and an excellent compromise between keeping data local and spreading things out. It's very low overhead: at the end of the day, you just find the block and change it. COW, even just of individual files rather than all metadata, requires a lot of additional work for every operation. For instance, deleting a file doesn't just mean removing the extents, but it also means checking other files to see if they're using those extents rather than just marking them as free. You also lose the nice, low-fragmentation behavior on hard disks where seeking is costly: file blocks stop tending to be in nice ordered segments, and start tending to be chaotically placed across the disk. These aren't properties you want in a filesystem that is trying its very best to be as stable, reliable, lightweight, and boring as it can be.
Offsite? Where? :-) For a home system that could be a problem (unless I put a bare drive in the garage...)
Looks like it's not yet archived on lore.kernel.org, though.
missing y2038 support
Ah, I did not know that tune2fs could do it, should have read your reply first. However, it seems this does not help with
Changing the inode size not supported for filesystems with the flex_bg feature enabled.
It apparently works on ext4 file systems without the flex_bg feature, and on ext2/ext3 images that do not support flex_bg in the first place, but apparently not on small ext4 images that were created with flex_bg or had it (irreversibly) added later.
I don't think you can convert between those in place without a full backup/restore.
One way to find out the inode format is to use the "stats" command in debugfs. It appears that at least in e2fsprogs version 1.45.7, the default in mkfs is still to use small inodes for smaller file system images, regardless of the file system type, so an ext2 file system image over 512MB is not actually compatible with the ext2 format unless you manually set the inode size, and smaller ext4 images continue having the y2038 problem:
$ truncate --size=511M ext4.img
$ mkfs -t ext4 -q ext4.img
$ echo stats | debugfs ext4.img 2>&1 | grep "Inode size"
Inode size: 128
$ truncate --size=512M ext2.img
$ mkfs -t ext2 -q ext2.img
$ echo stats | debugfs ext2.img 2>&1 | grep "Inode size"
Inode size: 256
The defaults are configured in /etc/mke2fs.conf, which on my system contains:
small = {
inode_size = 128
inode_ratio = 4096
}
Removing this bit makes mkfs use 256 byte inodes by default, regardless of the size.
Copying from NTFS to a modern Linux file system could use an extended attribute to simulate a stream, unless it contains too much data.
It could be a subtree in the kernel tree or a sister project under the Linux Foundation umbrella, but with a policy that you can't deprecate the kernel driver without that in place.
A simplification could be to only implement read features. Then when it gets obsoleted we still can access archives.
Besides, testing FUSE filesystems could possibly be done with a single test suite run much less often, as the code is expected to change less.
Another idea to add work to someone else (TM) would be to require this to introduce new filesystems. It'd be nice to other OSes to be able to read whatever Linux brings and it would make it trivial to deprecate them later, as it's much easier to write this when the know-how is already in your head rather than reverse engineering the work someone did 20 years ago.