Removing ext2 and/or ext3
Posted Feb 14, 2011 11:44 UTC (Mon) by rahulsundaram (subscriber, #21946)
In reply to: Removing ext2 and/or ext3 by rfunk
Parent article: Removing ext2 and/or ext3
"I investigated and learned that XFS had been explicitly designed for server room situations where the power *never* fails."
I have heard this myth several times before, but I have never actually seen a citation. Since you claim to have done some research and investigation, pointers would be helpful.
Posted Feb 14, 2011 14:56 UTC (Mon)
by rfunk (subscriber, #4054)
Meanwhile, I don't care how "robust and scalable" and tested XFS is or claims to be; my experience shows that it's not reliable enough for my purposes, and others have similar experiences. (Again, I was once a big fan of XFS; then I discovered some of its failure modes, and found them unacceptable.)
I'd love to give you the citations about XFS's history, but since the last time I looked into that aspect in depth was around five years ago (and the first time was more than eleven years ago), I no longer have them anywhere near handy.
Posted Feb 14, 2011 16:34 UTC (Mon)
by rahulsundaram (subscriber, #21946)
None of the links are new to me, but the way you phrase it suggests that you are equating the idea with its implementation. The idea is too old and too widely implemented in other filesystems to be controversial, and it is pretty much a required feature for better performance. The implementation in Ext4 had some rough edges initially, but that is no longer a current problem.
As far as the robustness of XFS is concerned, personal anecdotes are just not interesting, since they are not independently verifiable. I can claim that I have used XFS in a number of places and found it very robust, but that doesn't really prove anything. What is interesting is where and how it is getting used, and so far the deployments don't suggest that it is untrustworthy. Unless you can find a reference to the story of how XFS was designed to be used only in environments where power never fails, I just don't buy it.
Posted Feb 14, 2011 16:42 UTC (Mon)
by dlang (guest, #313)
When it was initially merged, it had a _lot_ of SGI baggage (shim layers between the XFS code and the rest of the kernel). It has since had a lot of cleanup and maintenance, including a lot of testing (and the development of a filesystem test suite that other filesystems are starting to adopt, since they don't have anything as comprehensive).
So while I have been using XFS for about 7 years, I would not be surprised to hear that people had problems about 5 years ago. I would be surprised if those problems persisted to today.
Personally, I don't trust Ext4 yet; it's just too new, and people are still finding corner cases that have problems. It is also not being tested against multi-disk arrays very much (the developers don't have that sort of hardware, so they test against what they have).
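The test suite referred to above is presumably xfstests, which did originate with XFS and is now run against other filesystems as well. As a minimal sketch, it is driven by a local.config file naming a test device and a scratch device; the device and mount-point paths below are hypothetical placeholders, not anything from the discussion:

```shell
# local.config for xfstests -- device names here are illustrative only
export TEST_DEV=/dev/sdb1       # filesystem that persists across test runs
export TEST_DIR=/mnt/test       # mount point for TEST_DEV
export SCRATCH_DEV=/dev/sdc1    # device that tests may mkfs and destroy
export SCRATCH_MNT=/mnt/scratch # mount point for SCRATCH_DEV
```

With such a config in place, the suite's `check` script runs groups of tests against whichever filesystem the devices are formatted with, which is what makes it usable for Ext4 and others, not just XFS.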
Posted Feb 14, 2011 17:02 UTC (Mon)
by rahulsundaram (subscriber, #21946)
"It also is not being tested against multi-disk arrays very much (the developers don't have that sort of hardware, so they test against what they have)"
IIRC, this was tested by Red Hat before making it the default in RHEL 6. That, however, is not the very latest Ext4 code.
Posted Feb 14, 2011 17:12 UTC (Mon)
by dlang (guest, #313)
Yes, Red Hat did testing, but I'll bet that their testing was of the "does it blow up" variety rather than performance testing.
In any case, the fact that the developers are not testing against that type of disk subsystem means that they are not looking for, or achieving, the best performance on those subsystems (this was also confirmed by the Ext4 devs on the kernel mailing list).
I'm not saying that the Ext4 devs are incompetent or not doing the best they can with what they have, just that because they are not working with such large systems, they are not running into the same stresses in their testing and profiling that people will hit in the real world on large systems.
The current XFS devs may or may not have access to such large arrays nowadays, but historically SGI dealt with such arrays and spent a lot of time researching how to make the filesystem as fast as possible on them, and that knowledge is part of the design of XFS. The current maintainers could destroy this as they update the code, but that is not very likely.
Posted Feb 14, 2011 17:51 UTC (Mon)
by rahulsundaram (subscriber, #21946)
I wouldn't bet on that. Red Hat has fairly large filesystem and performance teams that run performance tests routinely, both for public benchmarks (useful for convincing customers) and otherwise. All the major Ext4 and XFS developers work for large vendors (Google, IBM, Red Hat, etc.), and I would expect them to have access to enterprise hardware. XFS is known to scale better on big hardware, at least historically, because of its legacy, but the gap has narrowed considerably in recent kernel versions.
Posted Feb 14, 2011 17:53 UTC (Mon)
by dlang (guest, #313)
So this is still pretty recent info.