application impact
Posted Oct 5, 2010 18:07 UTC (Tue) by wingo (guest, #26929)
Parent article: Solid-state storage devices and the block layer
I asked Michael Meeks a couple of FOSDEMs ago about how his iogrind disk profiler was coming, and he said that he had totally dropped it, because SSDs will kill all these issues. Sounds easier than fixing OpenOffice.org^WLibreOffice issues in code...
Is the "best practice" going to shift away from implementing things like GTK's icon cache and other purely seek-avoiding caches?
Posted Oct 5, 2010 22:29 UTC (Tue) by zlynx (guest, #2285) [Link] (5 responses)
GTK applications' current "best practice" of "ignore the RAM use, they can buy more" has already destroyed the usefulness of old hardware with a modern Linux software stack.
Posted Oct 6, 2010 0:16 UTC (Wed) by mpr22 (subscriber, #60784) [Link] (3 responses)
Eight Megabytes And Constantly Swapping. This is not a new phenomenon.
Posted Oct 6, 2010 1:23 UTC (Wed) by dlang (guest, #313) [Link] (2 responses)
Yes, we are doing more with our systems, but nowhere near that much more.
Posted Oct 6, 2010 9:23 UTC (Wed) by marcH (subscriber, #57642) [Link] (1 response)
(Here I am ignoring SSDs, still too new to be part of The History)
Posted Oct 6, 2010 11:04 UTC (Wed) by dlang (guest, #313) [Link]
In terms of size, drives have grown at least 1000x.
In terms of sequential I/O speed they have improved drastically (I don't think quite 1000x, but probably well over 100x, so I think it's in the ballpark).
In terms of seek time, they've barely improved 10x or so.
This is ignoring things like SSDs, high-end RAID controllers (with battery-backed NVRAM caches), and so on, which distort the performance numbers upwards.
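A back-of-the-envelope version of those ratios; the circa-1985 and circa-2010 desktop-drive figures below are rough assumptions, not measurements, but the computed factors land in the ranges described above.

    /* Rough illustration of how unevenly rotating-disk metrics scaled.
     * The figures are ballpark guesses for a desktop drive of each era. */
    #include <stdio.h>

    int main(void)
    {
        double cap_old  = 20.0,  cap_new  = 1000000.0;  /* 20 MB  -> 1 TB   */
        double seq_old  = 0.6,   seq_new  = 120.0;      /* MB/s, sustained  */
        double seek_old = 65.0,  seek_new = 8.5;        /* ms, average seek */

        printf("capacity:    %8.0fx\n", cap_new / cap_old);    /* ~50000x */
        printf("sequential:  %8.0fx\n", seq_new / seq_old);    /* ~200x   */
        printf("seek time:   %8.1fx\n", seek_old / seek_new);  /* ~7.6x   */
        return 0;
    }

The shape is the point: capacity and streaming bandwidth ran away from seek time, which is why seek avoidance mattered so much on rotating media.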
But yes, the performance difference between CPU registers and disk speeds is being stretched over time.
Just the difference in speed between the registers and RAM is getting stretched to the point where people are seriously suggesting that it may be a good idea to start thinking of RAM as a block device, accessed in blocks of 128-256 bytes (the CPU's cache line size). Right now the CPU hides this from you by 'transparently' moving the blocks in and out of the caches of the various processors, so you can ignore it if you choose to.
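A small illustration of that "RAM as a block device" view, assuming a 64-byte cache line for the sketch (the real line size varies by CPU): touching one byte drags a whole line in, so the access pattern determines how many "blocks" get transferred.

    /*
     * RAM as a block device with 64-byte "sectors" (an assumed cache-line
     * size).  Both loops read the same buffer, but the sequential loop
     * pulls in a new line only once per 64 bytes, while the strided loop
     * lands on a different line with every access, so a full 64-byte
     * transfer is paid for each byte used (once the buffer is too big to
     * stay in cache).
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define LINE 64
    #define N    (16 * 1024 * 1024)

    int main(void)
    {
        unsigned char *buf = calloc(N, 1);
        if (!buf)
            return 1;

        unsigned long sum = 0;

        /* Sequential: roughly N/LINE cache lines fetched from RAM. */
        for (size_t i = 0; i < N; i++)
            sum += buf[i];

        /* Strided by one line: every access lands on a different line. */
        for (size_t off = 0; off < LINE; off++)
            for (size_t i = off; i < N; i += LINE)
                sum += buf[i];

        printf("%lu\n", sum);   /* keep the compiler from dropping the loops */
        free(buf);
        return 0;
    }

Time the two loops on a real machine and the effective block size of RAM shows up directly.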
But when you are really after performance, a high-end system starts looking very strange. You have several sets of processors that share a small amount of high-speed storage (L2/L3 cache) and have a larger amount of lower-speed storage (the memory directly connected to that CPU), plus a network to access the lower-speed storage connected to other CPUs. Then you have a lower-speed network to talk to the southbridge chipset to interact with the outside world (things like your monitor/keyboard/disk drives, PCIe cards, etc.).
This is a rough description of NUMA and the kinds of things you can run into on large multi-socket systems, but the effect starts showing up on surprisingly small systems (which is why per-CPU variables and similar things are used so frequently).
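A userspace sketch of the per-CPU-variable idea, using one cache-line-aligned slot per thread rather than the kernel's actual per-CPU machinery (64 bytes is again an assumed line size; build with -pthread):

    /*
     * Why per-CPU (here: per-thread) variables help: each thread bumps a
     * counter that sits on its own cache line, so no line ping-pongs
     * between cores or sockets.  The slots are only summed at the end.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define ITERS    10000000L

    struct padded_counter {
        _Alignas(64) unsigned long count;   /* one counter per cache line */
    };

    static struct padded_counter counters[NTHREADS];

    static void *worker(void *arg)
    {
        struct padded_counter *mine = arg;  /* this thread's private slot */
        for (long i = 0; i < ITERS; i++)
            mine->count++;                  /* no sharing, no bouncing */
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, &counters[i]);

        unsigned long total = 0;
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(tids[i], NULL);
            total += counters[i].count;     /* sum the shards afterwards */
        }
        printf("total = %lu\n", total);
        return 0;
    }

Drop the alignment (or make it one shared counter) and the same loop gets dramatically slower on a multi-core box, because the line bounces between caches on every increment; that bouncing is what per-CPU variables avoid.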
Posted Oct 14, 2010 19:29 UTC (Thu) by Wol (subscriber, #4433) [Link]
Three slots, max capacity 256MB per slot, three 256MB chips in the machine.
"That's no problem, they can just buy a new machine ..."
Cheers,
Wol