LWN: Comments on "Solid-state storage devices and the block layer" https://lwn.net/Articles/408428/ This is a special feed containing comments posted to the individual LWN article titled "Solid-state storage devices and the block layer". en-us Thu, 16 Jan 2025 13:42:56 +0000 Thu, 16 Jan 2025 13:42:56 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net bogus random entropy sources https://lwn.net/Articles/479747/ https://lwn.net/Articles/479747/ cladisch <div class="FormattedComment"> The Windows 8 Hardware Certification Requirements demand that "Connected Standby"-capable devices (i.e., mobile ones) have encryption acceleration and an RNG.<br> <p> <font class="QuotedText">&gt; Business Justification:</font><br> <font class="QuotedText">&gt; Core cryptographic functions are used in Windows to provide platform integrity as well as protection of user data.</font><br> (note the priorities)<br> <p> In completely unrelated news, all recent AMD and Intel processors support AES-NI, and Intel has announced that Ivy Bridge processors will have an RNG.<br> </div> Tue, 07 Feb 2012 07:50:44 +0000 bogus random entropy sources https://lwn.net/Articles/479666/ https://lwn.net/Articles/479666/ dlang <div class="FormattedComment"> some chips do have high quality random number generators built in.<br> </div> Mon, 06 Feb 2012 21:40:11 +0000 bogus random entropy sources https://lwn.net/Articles/479664/ https://lwn.net/Articles/479664/ tconnors <div class="FormattedComment"> <font class="QuotedText">&gt; Modern CPUs have accelerators for all sorts of things as standard equipment. Why not random numbers? We spend countless millions of transistors on ever larger caches and datapaths. Surely they could spare a few for a really high quality true random number generator.</font><br> <p> Because random number generators are only used for cryptography, and only terrorists use cryptography. Are you a terrorist?<br> </div> Mon, 06 Feb 2012 21:33:17 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/411321/ https://lwn.net/Articles/411321/ eds <div class="FormattedComment"> There are many good reasons to treat NAND flash storage more like disk than like DRAM.<br> <p> 1. Addressing: DRAM is byte/word addressable; NAND flash is not. NAND flash pages are currently 4KB in size and must be read/written a whole page at a time.<br> 2. Flash management: flash sucks. It has long erase times, needs wear-leveling, needs lots of ECC and redundancy to be reliable. Dealing with flash requires a lot of careful management that nobody's going to want on a DRAM-like path.<br> 3. Speed: flash is a lot faster than disk. But it's still a lot slower than DRAM (a write to a busy NAND part may have to wait up to 1ms).<br> 4. Size: it's very expensive to try to address a terabyte of DRAM. 64-bit CPUs don't actually implement a full 64-bit address space. It's much cheaper to just address huge storage devices in blocks, like a disk.<br> <p> If in a few more years phase-change memory becomes big and cheap enough to give NAND flash a run for its money, then it may be time to start treating nonvolatile memory sort of like DRAM. But that day isn't quite here yet.<br> </div> Fri, 22 Oct 2010 22:04:23 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/410140/ https://lwn.net/Articles/410140/ jmy3056 <div class="FormattedComment"> I think the analogy presented misses the mark. 
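The trade-offs eds lists above are easier to see in code. Below is a small, self-contained toy model -- a sketch only, not drawn from any real flash translation layer, with all names invented -- of what a write path has to do when writes are page-granular and out-of-place while erase is block-granular:
<pre>
/*
 * Toy model (illustrative sketch only, not from the article or any real
 * driver) of why NAND flash needs a translation layer: writes are
 * page-granular and out-of-place, and space is reclaimed only by erasing
 * whole blocks.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE      4096        /* program unit (eds's point 1)            */
#define PAGES_PER_BLK    64        /* erase unit is many pages (256KB here)   */
#define NBLOCKS          16
#define NPAGES         (PAGES_PER_BLK * NBLOCKS)
#define NLOGICAL       (NPAGES / 2)  /* spare pages enable out-of-place writes */

static uint8_t flash[NPAGES][PAGE_SIZE];
static int     map[NLOGICAL];        /* logical page -> physical page, -1 = unmapped */
static uint8_t page_state[NPAGES];   /* 0 = erased, 1 = live, 2 = stale       */
static int     erase_count[NBLOCKS];

static int find_erased_page(void)
{
    for (int p = 0; p < NPAGES; p++)
        if (page_state[p] == 0)
            return p;
    return -1;                       /* a real FTL would garbage-collect here */
}

/* Out-of-place write: never reprogram a written page, just remap. */
static int write_logical(int lpage, const uint8_t *buf)
{
    int p = find_erased_page();
    if (p < 0)
        return -1;
    memcpy(flash[p], buf, PAGE_SIZE);
    page_state[p] = 1;
    if (map[lpage] >= 0)
        page_state[map[lpage]] = 2;  /* the old copy becomes stale garbage */
    map[lpage] = p;
    return 0;
}

/* Erase is block-granular (all live data must have been moved out first),
 * and it is what wears the device out. */
void erase_block(int blk)
{
    memset(page_state + blk * PAGES_PER_BLK, 0, PAGES_PER_BLK);
    erase_count[blk]++;              /* wear leveling wants these to stay even */
}

int main(void)
{
    memset(map, -1, sizeof(map));
    uint8_t buf[PAGE_SIZE] = { 0xab };
    write_logical(3, buf);
    write_logical(3, buf);           /* the second write lands on a new page */
    printf("logical page 3 now at physical page %d\n", map[3]);
    return 0;
}
</pre>
Every rewrite consumes a fresh page, which is why free-space pooling, garbage collection, and erase scheduling keep coming up in the comments that follow.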
Instead of equating block I/O with network improvements, consider this.<br> <p> Media that stores electronic information that used to spin but now doesn't is a closer parallel with RAM. Optimizations for "disk" I/O need to follow a path similar to the one OS/kernels follow when dealing with RAM.<br> </div> Fri, 15 Oct 2010 17:42:12 +0000 application impact https://lwn.net/Articles/410038/ https://lwn.net/Articles/410038/ Wol <div class="FormattedComment"> That's fine until they're on a system like mine ...<br> <p> Three slots, max capacity 256Mb per slot, three 256Mb chips in the machine.<br> <p> "That's no problem, they can just buy a new machine ..."<br> <p> Cheers,<br> Wol<br> </div> Thu, 14 Oct 2010 19:29:43 +0000 Getting more entropy https://lwn.net/Articles/409431/ https://lwn.net/Articles/409431/ man_ls I guess that the problem is to prove that an attacker cannot influence the timers so that the result is predictable. For example, a guy on a different VM doing odd things with the same CPU. As it is hard to prove a negative statement of this kind, people may tend to distrust such a source of entropy, even if it sounds really interesting. Sun, 10 Oct 2010 21:56:14 +0000 Getting more entropy https://lwn.net/Articles/409406/ https://lwn.net/Articles/409406/ kleptog <div class="FormattedComment"> A while back, just for the fun of it, I wrote a kernel driver whose goal was to extract entropy from the timer interrupt. After all, if anything is predictable, then it'd have to be the timer interrupt.<br> <p> The point is that while the interrupt is predictable, between the time that the interrupt fires and the driver finally gets run you have cache misses at various levels, PCI bus transfers, DRAM refresh cycles and even just hyperthreading making things very unpredictable. Conclusion: if there's predictability here, I couldn't find it (there's a toolkit for estimating randomness; it concluded that the output was indistinguishable from real random data).<br> <p> The basic idea was to just use the last few bits of the cycle counter and not worry about the high-order bits. The last bit was enough, but even taking the last four bits didn't show any patterns. It might be worth making such a driver for the purpose of giving otherwise entropy-starved machines something to work with. I imagine within VMs the cycle counter becomes even more variable, due to contention with things outside the VM.<br> </div> Sun, 10 Oct 2010 11:55:01 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/409341/ https://lwn.net/Articles/409341/ jzbiciak <P>Note: I'm not an expert. Please do not mistake me for one. :-) Here are my observations, though, along with things I've read elsewhere.</P> <P>Flash requires wear leveling in order to maximize its life. For the greatest effect, you want to wear level across the entire device, which means picking up and moving otherwise quiescent data so that each sector sees approximately the same number of erasures. That's one aspect.</P> <P>Another aspect is that erase blocks are generally much larger than write sectors. So, when you <I>do</I> erase, you end up erasing quite a lot. Furthermore, erasure is about an order of magnitude slower than writing, and writing is about an order of magnitude slower than reading. For a random flash device whose data sheet I just pulled up, a random read takes 25us, page program takes 300us, and block erase takes 2ms. Pages are 2K bytes, whereas erase blocks are 128K bytes.</P> <P>(Warning: This is where I get speculative!) 
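Back in the entropy sub-thread: kleptog's driver itself isn't shown, but the idea is easy to approximate from userspace. The sketch below assumes an x86 TSC and stands in for the timer interrupt with nanosleep(); an in-kernel version would sample the counter from the interrupt handler instead:
<pre>
/*
 * Rough userspace approximation (a sketch, not kleptog's actual driver) of
 * the idea above: sample only the low bit of the CPU cycle counter each
 * time a timer expires, and pack the samples into output bytes.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>            /* __rdtsc(); assumes an x86 CPU with a TSC */

int main(void)
{
    struct timespec tick = { .tv_sec = 0, .tv_nsec = 1000000 };  /* ~1ms */
    for (int n = 0; n < 16; n++) {
        uint8_t byte = 0;
        for (int bit = 0; bit < 8; bit++) {
            nanosleep(&tick, NULL);          /* stand-in for the timer interrupt */
            byte = (uint8_t)((byte << 1) | (__rdtsc() & 1));  /* keep only the LSB */
        }
        printf("%02x", (unsigned)byte);
    }
    printf("\n");
    return 0;
}
</pre>
The output can be fed to a randomness test suite (ent, or the NIST 800-22 suite mentioned later in this thread) to repeat kleptog's experiment.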
And finally, if you have multiple flash devices (or multiple independent zones on the same flash device), you can take advantage of that fact and the fact that "seeks are free" by redirecting writes to idle flash units if others are busy. That's probably the most interesting area to explore algorithmically, IMO. Given that an erase operation can take a device out of commission for 2ms, picking which device to start an erase operation on and when to do it can have a pretty big impact on performance. If you can do background erase on idle devices, for example, then you can hide the cost.</P> Sat, 09 Oct 2010 15:03:13 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/409340/ https://lwn.net/Articles/409340/ dlang <div class="FormattedComment"> the issue is that you have to erase large chunks (on the order of 128K bytes); if you are then writing in small chunks (say the 512-byte sectors that are the default, or even the 4K-byte filesystem blocks) you can't just erase immediately before writing.<br> <p> you also have the problem that erasing takes a significant amount of time and power to accomplish, so you don't want to wait until you need to erase before doing so, and you don't want to erase when you don't need to, especially while on battery.<br> </div> Sat, 09 Oct 2010 14:55:09 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/409337/ https://lwn.net/Articles/409337/ joern <div class="FormattedComment"> <font class="QuotedText">&gt; Flash does require you to think about how you pool your free sectors, though, and how you schedule writing versus erasing.</font><br> <p> Intriguing. Can you elaborate a bit? What difference does it make vs. the naïve approach of erasing before writing?<br> </div> Sat, 09 Oct 2010 14:10:41 +0000 Solid-state storage devices: most I/O patterns https://lwn.net/Articles/409312/ https://lwn.net/Articles/409312/ giraffedata <blockquote> While workloads will vary, Jens says, most I/O patterns are dominated by random I/O and relatively small requests. </blockquote> <p> There are so many ways to count "most" that this fact is pretty useless. Jens should just say, "some important I/O patterns are ...," which is reason enough to do this work. <p> I see a lot of thought wasted prioritizing things based on arbitrary "mosts": Most I/Os are reads, most files are under 4K, most computers are personal workstations. Sat, 09 Oct 2010 00:00:03 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/409311/ https://lwn.net/Articles/409311/ giraffedata <blockquote> Probably the main reason why such an unfortunate IOPS jump has been forced in networking is backward compatibility. <p> ... <p> In comparison, the need for backward compatibility in storage is basically inexistent. </blockquote> <p> Well, the whole reason SSDs exist is backward compatibility with rotating media, and it does slow things down considerably. If not for backward compatibility, we wouldn't use SCSI or even Linux block devices to access solid-state storage. Write amplification by read-modify-write wouldn't be a problem if the device weren't trying to emulate a 512-byte-sectored disk drive. <p> The existence of SSDs tells me people aren't willing to replace the entire system at once -- they want to replace just the disk drives. <p> Not knowing the network issues, though, I can believe that backward compatibility hinders performance less in storage than for Ethernet. 
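Returning to the flash-scheduling question raised a few comments up: a rough sketch of the policy jzbiciak speculates about -- steer each write to an idle, least-worn flash unit that still has pre-erased pages, and use otherwise-idle units to erase reclaimable blocks in the background. The structures and thresholds here are invented for illustration; no real controller is being described.
<pre>
#include <stdbool.h>
#include <stddef.h>

struct flash_unit {
    bool busy;           /* currently programming or erasing            */
    int  erased_pages;   /* pages ready to be programmed immediately    */
    int  reclaimable;    /* stale blocks that could be erased           */
    int  erase_count;    /* total erases so far, for wear leveling      */
};

/* Prefer an idle unit with free pages; among those, pick the least worn. */
struct flash_unit *pick_write_target(struct flash_unit *u, size_t n)
{
    struct flash_unit *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (u[i].busy || u[i].erased_pages == 0)
            continue;
        if (!best || u[i].erase_count < best->erase_count)
            best = &u[i];
    }
    return best;         /* NULL means every unit is busy: the write must wait */
}

/* Hide the ~2ms erase cost by running it on units nobody is writing to. */
void kick_background_erase(struct flash_unit *u, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!u[i].busy && u[i].reclaimable > 0 && u[i].erased_pages < 4) {
            u[i].busy = true;        /* a real start_erase(&u[i]) would go here;  */
            u[i].reclaimable--;      /* its completion handler would add a block's */
        }                            /* worth of erased pages and clear busy      */
    }
}

int main(void)
{
    struct flash_unit units[4] = {
        { .busy = false, .erased_pages = 0, .reclaimable = 3, .erase_count = 10 },
        { .busy = true,  .erased_pages = 5, .reclaimable = 0, .erase_count = 12 },
        { .busy = false, .erased_pages = 7, .reclaimable = 1, .erase_count = 9  },
        { .busy = false, .erased_pages = 2, .reclaimable = 2, .erase_count = 15 },
    };
    struct flash_unit *t = pick_write_target(units, 4);  /* idle, least-worn unit */
    kick_background_erase(units, 4);  /* idle units low on erased pages start erasing */
    return t ? 0 : 1;
}
</pre>
The point is only that device selection, not head position, becomes the interesting scheduling decision.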
Fri, 08 Oct 2010 23:48:03 +0000 bogus random entropy sources https://lwn.net/Articles/409039/ https://lwn.net/Articles/409039/ BenHutchings <div class="FormattedComment"> Most network controllers now implement interrupt moderation (deferring interrupts so that multiple packets can be handled at once). With a high enough packet rate, they will interrupt at regular and predictable intervals.<br> <p> </div> Thu, 07 Oct 2010 14:34:25 +0000 bogus random entropy sources https://lwn.net/Articles/409004/ https://lwn.net/Articles/409004/ intgr <div class="FormattedComment"> For virtual machines you already have a paravirtual RNG device called 'virtio-rng' (CONFIG_HW_RANDOM_VIRTIO).<br> <p> But in general, virtual machine disk I/O still reaches a physical disk sooner or later, so entropy can be successfully gathered from interrupt timings. In some virtualization scenarios, you wouldn't want the VM to access host-CPU-specific features anyway.<br> <p> </div> Thu, 07 Oct 2010 12:48:30 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/409006/ https://lwn.net/Articles/409006/ nix <blockquote> NAND flash has a notion of "sequential page read" versus "random page read". If you're truly reading random bytes a'la DRAM w/out cache, you'll see noticeably slower performance if the two reads are in different pages. </blockquote> That sounds just like normal RAM: if you don't have to specify the row *and* column, you save on one CAS/RAS select cycle. Of course this is hidden behind the MMU and CPU cache management code and so on, so we don't often notice it, but it <i>is</i> there. Thu, 07 Oct 2010 12:38:23 +0000 bogus random entropy sources https://lwn.net/Articles/409000/ https://lwn.net/Articles/409000/ nix <div class="FormattedComment"> From all accounts I've read, the entropy of the numbers derived from the C3's RNG hardware sucks rather badly, probably because there are so many sources of regular noise in a CPU that it's hard to stop some of them leaking in. The figures I've heard are *well* below 0.75, more like 0.4 if you're lucky. And IIRC the C3 doesn't bother to validate them either (certainly from the description in the whitepaper they don't), and because the pair of oscillators comprise a single system, if it breaks down or becomes coupled to something external you *also* cannot tell.<br> <p> </div> Thu, 07 Oct 2010 12:28:52 +0000 bogus random entropy sources https://lwn.net/Articles/408999/ https://lwn.net/Articles/408999/ nix <div class="FormattedComment"> "Diskless embedded systems" of course includes "all virtual machines". So there are a lot of them.<br> <p> </div> Thu, 07 Oct 2010 12:24:09 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/408918/ https://lwn.net/Articles/408918/ eds <div class="FormattedComment"> Good article.<br> <p> At the extreme high end of PCIe SSDs, a system trying to do lots of small (4k) reads with high parallelism will be limited by having any queue locking at all. Running without a request queue remains an attractive option for these devices.<br> <p> Another future improvement to watch out for is MSI-X interrupts. 
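On current kernels, that kind of per-vector interrupt steering is exposed through /proc/irq/&lt;n&gt;/smp_affinity, which takes a hexadecimal CPU mask. A minimal sketch follows; the IRQ number and mask are arbitrary examples, and writing the file requires root:
<pre>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *irq  = argc > 1 ? argv[1] : "42";   /* hypothetical MSI-X vector */
    const char *mask = argc > 2 ? argv[2] : "4";    /* bit 2 set = CPU 2         */
    char path[64];

    snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", irq);
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);               /* needs root, and the IRQ must exist */
        return EXIT_FAILURE;
    }
    fprintf(f, "%s\n", mask);       /* e.g. "4" routes this vector to CPU 2 */
    return fclose(f) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
</pre>
In practice a driver's own affinity hints or a daemon such as irqbalance usually sets this, but the mechanism is the same.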
With MSI-X, it is possible to statically assign an interrupt to a single CPU core in such a way that an I/O retirement could interrupt the originating CPU directly; over about 600K IOPS it becomes important to spread out the interrupt/retirement workload as much as possible.<br> </div> Wed, 06 Oct 2010 22:05:10 +0000 bogus random entropy sources https://lwn.net/Articles/408903/ https://lwn.net/Articles/408903/ paulj <div class="FormattedComment"> Hehe, so shred was using entropy collected from the disk controllers, collected from shred writing to disks...<br> </div> Wed, 06 Oct 2010 19:34:06 +0000 bogus random entropy sources https://lwn.net/Articles/408846/ https://lwn.net/Articles/408846/ drag <div class="FormattedComment"> I know that I have had problems with ssh hanging on new nodes on Xen due to a lack of entropy. But I think this is no longer a problem. <br> </div> Wed, 06 Oct 2010 17:01:33 +0000 bogus random entropy sources https://lwn.net/Articles/408811/ https://lwn.net/Articles/408811/ jzbiciak <P>Probably because they didn't have a time machine. ;-) The document you reference was written this year. The white paper I reference was written in 2003. And if you meant Rev 1, that didn't come out until 2008. </P><P> Maybe you meant the original 800-22? <I>That</I> one came out in 2001. </P><P> (Dates came from <A HREF="http://csrc.nist.gov/publications/PubsSPArch.html">here.</A>) </P> Wed, 06 Oct 2010 13:56:45 +0000 bogus random entropy sources https://lwn.net/Articles/408799/ https://lwn.net/Articles/408799/ intgr <div class="FormattedComment"> <font class="QuotedText">&gt; without getting something more basic and generic like random numbers on there too.</font><br> <p> The solution has always been obvious to cryptographers. Use a solid cryptographic pseudorandom RNG; as long as there is _some_ truly random data in its input -- 128 or so bits worth -- the output will always be irreversible. As long as this randomness exists, it doesn't matter that the attacker can predict all other input.<br> <p> In fact, hardware RNGs should _never_ be used directly, because there may be manufacturing flaws or deliberate sabotage. And unlike deterministic algorithms like AES, non-deterministic hardware RNG sources are almost impossible to verify completely. Also, it's really quite easy to replace the hw RNG with a deterministic PRNG that passes all randomness tests, yet whose output is entirely predictable to its designer.<br> <p> So at most, the hw RNG is just one of several randomness sources on any system. 
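intgr's rule of thumb -- treat any hardware RNG as just one input to a cryptographic mixer -- might look like the sketch below, which hashes several sources into a single seed with OpenSSL's SHA-256. This is an illustrative sketch only: build with -lcrypto, and note that /dev/hwrng may not exist on a given machine, in which case it simply contributes nothing.
<pre>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <openssl/sha.h>
#include <x86intrin.h>            /* __rdtsc(); x86-only, any jitter source would do */

static size_t slurp(const char *path, unsigned char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;                 /* a missing source just contributes nothing */
    size_t n = fread(buf, 1, len, f);
    fclose(f);
    return n;
}

int main(void)
{
    unsigned char pool[256];
    size_t off = 0;

    off += slurp("/dev/hwrng", pool + off, 64);     /* hardware RNG, if present */
    off += slurp("/dev/urandom", pool + off, 64);   /* kernel pool              */
    unsigned long long t = __rdtsc();               /* a little timing jitter   */
    memcpy(pool + off, &t, sizeof(t)); off += sizeof(t);
    time_t now = time(NULL);
    memcpy(pool + off, &now, sizeof(now)); off += sizeof(now);

    unsigned char seed[SHA256_DIGEST_LENGTH];
    SHA256(pool, off, seed);      /* use this to seed a CSPRNG (e.g. AES-CTR) */

    for (size_t i = 0; i < sizeof(seed); i++)
        printf("%02x", (unsigned)seed[i]);
    printf("\n");
    return 0;
}
</pre>
If any one of those inputs carries enough real entropy, the seed is unpredictable even when the others are known or sabotaged, which is exactly the property intgr describes.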
As such, cryptographers in general don't consider it worthwhile -- except on diskless embedded systems where there really aren't any other entropy sources.<br> <p> Unfortunately /dev/random is a poor legacy choice in Linux that goes against this concept.<br> <p> </div> Wed, 06 Oct 2010 11:27:47 +0000 application impact https://lwn.net/Articles/408797/ https://lwn.net/Articles/408797/ dlang <div class="FormattedComment"> it depends on what you are measuring<br> <p> in terms of size, drives have grown at least 1000x<br> <p> in terms of sequential I/O speeds they have improved drastically (I don't think quite 1000x, but probably well over 100x, so I think it's in the ballpark)<br> <p> in terms of seek time, they've barely improved 10x or so<br> <p> this is ignoring things like SSDs, high-end RAID controllers (with battery-backed NVRAM caches) and so on which distort performance numbers upwards.<br> <p> but yes, the performance difference between the CPU registers and disk speeds is being stretched over time.<br> <p> but the difference in speed between the registers and RAM is getting stretched to the point where people are seriously suggesting that it may be a good idea to start thinking of RAM as a block device, accessed in blocks of 128-256 bytes (the cache line size for the CPU); right now the CPU hides this from you by 'transparently' moving the blocks in and out of the cache of the various processors for you, so that if you choose to you can ignore this.<br> <p> but when you are really after performance, a high-end system starts looking very strange. You have several sets of processors that share a small amount of high-speed storage (L2/L3 cache) and have a larger amount of lower-speed storage (the memory directly connected to that CPU), plus a network to access the lower-speed storage connected to other CPUs. Then you have a lower-speed network to talk to the southbridge chipset to interact with the outside world (things like your monitor/keyboard/disk drives, PCI-e cards, etc).<br> <p> This is a rough description of NUMA and the types of things that you can run into on large multi-socket systems, but the effect starts showing up on surprisingly small systems (which is why per-CPU variables and such things are used so frequently)<br> </div> Wed, 06 Oct 2010 11:04:48 +0000 application impact https://lwn.net/Articles/408790/ https://lwn.net/Articles/408790/ marcH <div class="FormattedComment"> I doubt that hard drive performance (as considered in this article) has increased 1000x. Has it? The memory hierarchy looks more and more stretched.<br> <p> (Here I am ignoring SSDs, still too new to be part of The History)<br> <p> </div> Wed, 06 Oct 2010 09:23:31 +0000 bogus random entropy sources https://lwn.net/Articles/408785/ https://lwn.net/Articles/408785/ pcampe <div class="FormattedComment"> I don't understand why they didn't follow the guidelines in NIST Standard 800-22 (rev 1a), "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications".<br> <p> </div> Wed, 06 Oct 2010 08:40:18 +0000 bogus random entropy sources https://lwn.net/Articles/408768/ https://lwn.net/Articles/408768/ jzbiciak <P>I linked <A HREF="http://www.via.com.tw/en/downloads/whitepapers/initiatives/padlock/evaluation_padlock_rng.pdf">this whitepaper above</A> on the technique VIA used on its C3. They used multiple free-running oscillators to gather entropy. 
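The "von Neumann whitening" mentioned just below is simple enough to show directly. This sketch debiases a raw bit stream by taking bits in pairs, keeping 01 as 0 and 10 as 1, and discarding 00 and 11; it assumes the raw bits are independent, which is exactly what a broken or externally coupled oscillator would violate.
<pre>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static inline int get_bit(const uint8_t *raw, size_t i)
{
    return (raw[i / 8] >> (i % 8)) & 1;
}

/* Returns the number of whitened bits written into out[], packed LSB-first.
 * out[] must be zero-initialized by the caller. */
size_t von_neumann(const uint8_t *raw, size_t nbits, uint8_t *out)
{
    size_t produced = 0;
    for (size_t i = 0; i + 1 < nbits; i += 2) {
        int a = get_bit(raw, i), b = get_bit(raw, i + 1);
        if (a == b)
            continue;                         /* 00 and 11 pairs are dropped */
        if (a)                                /* 10 -> 1, 01 -> 0            */
            out[produced / 8] |= (uint8_t)(1u << (produced % 8));
        produced++;
    }
    return produced;
}

int main(void)
{
    uint8_t biased[] = { 0xfd, 0xef, 0x7f, 0xbf };   /* mostly-ones raw input */
    uint8_t clean[sizeof(biased)] = { 0 };
    size_t n = von_neumann(biased, 8 * sizeof(biased), clean);
    printf("kept %zu of %zu raw bits\n", n, 8 * sizeof(biased));
    return 0;
}
</pre>
The price is throughput: most of the raw stream is thrown away, which is acceptable only because the oscillators produce bits far faster than disk seeks or keystrokes ever could.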
The resulting output varies in quality, from 0.75 to 0.99 bits of entropy per output bit, depending on the decimation factor used and whether or not you enable von Neumann whitening.</P> <P>Given that it generates entropy in the megabits/second range, this is several orders better than you can get from hard disk seeks and user keystrokes, even if you have to throw most of the numbers away. And, given the high apparent entropy of the raw bits, you don't really need to throw many away at all.</P> Wed, 06 Oct 2010 03:51:19 +0000 bogus random entropy sources https://lwn.net/Articles/408767/ https://lwn.net/Articles/408767/ jzbiciak <P>Well, <TT>/dev/urandom</TT> <A HREF="https://secure.wikimedia.org/wikipedia/en/wiki//dev/random">doesn't block when the kernel entropy pool runs out.</A> The hardware crypto acceleration may've been getting used, but that's orthogonal to the question of gathering entropy.</P> Wed, 06 Oct 2010 03:47:30 +0000 bogus random entropy sources https://lwn.net/Articles/408765/ https://lwn.net/Articles/408765/ PaulWay <div class="FormattedComment"> Purely an anecdote, but the other day I had the occasion to use shred to shred two disks at once. The machine was a modern Intel Core Quad system, and the disks were writing at 60MBytes/sec with 3% CPU load. Since modern shred just writes a number of layers of pure random data from /dev/urandom, I have to assume that there was either hardware crypto or randomness generation going on there. Who knew?!<br> <p> Have fun,<br> <p> Paul<br> </div> Wed, 06 Oct 2010 03:36:54 +0000 application impact https://lwn.net/Articles/408756/ https://lwn.net/Articles/408756/ dlang <div class="FormattedComment"> the problem is that system resources have increased by 1000x (or close to it) and people trying to do very similar work find themselves in almost the same situation.<br> <p> yes we are doing more with our systems, but nowhere near that much more.<br> </div> Wed, 06 Oct 2010 01:23:49 +0000 application impact https://lwn.net/Articles/408751/ https://lwn.net/Articles/408751/ mpr22 Eight Megabytes And Constantly Swapping. This is not a new phenomenon. Wed, 06 Oct 2010 00:16:13 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/408731/ https://lwn.net/Articles/408731/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; If it's to be believed, IPv6 transition is quite far from "smooth".</font><br> <p> Yes but it would have been much worse (read: impossible) if IPv6 deployment ever required substantial changes to IPv4.<br> <p> This is an interesting article. Except they are wrong when they pretend it is easy to break backward-compatibility with Ethernet or TCP. 
It is not easy but only "less impossible" than breaking IPv4 backward compatibility.<br> <p> Note: the focus of the article is obviously neither on Ethernet nor on TCP.<br> </div> Tue, 05 Oct 2010 23:30:15 +0000 application impact https://lwn.net/Articles/408716/ https://lwn.net/Articles/408716/ zlynx <div class="FormattedComment"> I sure hope not.<br> <p> GTK applications' current "best practice" of "ignore the RAM use, they can buy more" has already destroyed the usefulness of old hardware with a modern Linux software stack.<br> </div> Tue, 05 Oct 2010 22:29:01 +0000 bogus random entropy sources https://lwn.net/Articles/408700/ https://lwn.net/Articles/408700/ nowster <div class="FormattedComment"> <font class="QuotedText">&gt; I don't understand why more processors don't include a proper hardware random number generator.</font><br> <p> It's actually a hard problem to provide a cheap reliable hardware random number generator. If you look at the effort that a device like Simtec's Entropy Key takes to ensure that each chunk of randomness it delivers is truly random, you'll see why a random number generator is not something that a CPU designer should drop on a spare corner of a CPU die last thing on a Friday afternoon. Semiconductor junction noise generators can be affected by environmental influences: an RNG on a CPU die running hot might have a bias compared with the same one when the CPU is idle and cooler.<br> </div> Tue, 05 Oct 2010 21:58:55 +0000 iSCSI, Solid-state storage devices and the block layer https://lwn.net/Articles/408690/ https://lwn.net/Articles/408690/ jhhaller <div class="FormattedComment"> Is using iSCSI to use Solid State disks mounted on a file server part of the testing and improvement plan? I can imagine this stresses both worlds, namely network interrupt steering along with block devices, all interacting in less than obvious ways.<br> </div> Tue, 05 Oct 2010 21:17:17 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/408679/ https://lwn.net/Articles/408679/ jzbiciak <P>You can do random writes to random empty sectors. Again, that's nothing like how a hard disk works. I'm still strenuously disagreeing with your earlier statement that flash's properties make it more like a disk than like RAM. It's really an entirely different beast worthy of separate consideration, which is why I think wrapping it up in an SSD limits its potential.</P> <P>With flash, you need entirely new strategies that apply neither to disks nor RAM to get the full benefit from the technology. Much of the effort spent on disks revolves (no pun intended) around eliminating seeks. No such effort is required with RAM or with flash. Flash <I>does</I> require you to think about how you pool your free sectors, though, and how you schedule writing versus erasing. I won't deny that. Rather, I say it only further invalidates your original conjecture that it makes flash <I>more like disks</I>. (I will agree it makes it <I>less like RAM</I> though.)</P> <P>Because seeks are "free", I could totally see load balancing algorithms of the form "write this block to the youngest free sector on the first available flash device", so that a new write doesn't get held up by devices busy with block erases. That looks <I>nothing</I> like what you'd want to do with a disk. It takes advantage of the "free seek" property of the flash while helping to hide the block erase penalty it imposes. Neither property is a property of a disk drive. 
Of course, neither property is a property of RAM, either.</P> <P>Am I splitting hairs over semantics here? Let me step back and summarize, and see if you agree: Raw flash's random access capability and relatively low access time can make it much more like RAM than disk, especially in terms of bandwidth and latency. Raw flash's limitations on writes, however, require the OS to have flash-specific write strategies. They prevent the OS from treating flash identically to RAM, and will require careful thought to be handled correctly. This is similar to how we had to put careful thought into disk scheduling algorithms, even if flash requires entirely different algorithms to address its unique properties.</P> Tue, 05 Oct 2010 20:38:07 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/408672/ https://lwn.net/Articles/408672/ dlang <div class="FormattedComment"> flash allows for random access reads, but is much more limited for writes.<br> </div> Tue, 05 Oct 2010 19:42:26 +0000 Solid-state storage devices and the block layer https://lwn.net/Articles/408663/ https://lwn.net/Articles/408663/ jzbiciak <P>It certainly <I>is</I> random access. I can generally send a command for address X followed by a command for address Y to the same chip, where the response time is <I>not</I> a function of the distance between X and Y, except when they overlap. Instead, the performance is most strongly determined by what commands I sent[*]. Reads are much faster than writes, and both are much, much faster than sector erase.</P> <P>The opposite is generally true of disks. There, the cost of an operation is more strongly determined by whether it triggered a seek (and how far the seek went) than if the operation was a read or a write. Both reads and writes require getting the head to a particular position on the platter, ignoring any cache that might be built into the drive. Also, under normal operation, spinning-rust drives don't really have an analog to "sector erase." (Yes, there's the old "low-level format" commands, but those aren't generally used during normal filesystem operation.)</P> <HR> <P>[*] Ok, so that's not 100% true, but essentially true in the current context. NAND flash has a notion of "sequential page read" versus "random page read". If you're truly reading random bytes a'la DRAM w/out cache, you'll see noticeably slower performance if the two reads are in different pages. But, if you're doing block transfers, such as 512-byte sector reads, you're reading the whole page. Hopping between any two sectors always costs about the same. Here, <A HREF="http://www.datasheetcatalog.org/datasheet/SamsungElectronic/mXvvrxv.pdf">read a data sheet!</A> For this particular flash, a random sector read is 10us, sector write is 250us, and page erase is 2ms. The whole page-open/page-close architecture makes it look much more like modern SDRAM than disk.</P> Tue, 05 Oct 2010 19:27:28 +0000 bogus random entropy sources https://lwn.net/Articles/408652/ https://lwn.net/Articles/408652/ jzbiciak <P>VIA's approach on the C3 doesn't sound too unwieldy. This <A HREF="http://www.via.com.tw/en/downloads/whitepapers/initiatives/padlock/evaluation_padlock_rng.pdf">white paper analyzing the generator's output</A> makes for an informative read. The punch line is that it looks like a pretty reasonable source of entropy as long as you do appropriate post processing. 
The random numbers it generates aren't caveat-free, but they're a heckuva lot better than disk seeks and keypresses.</P> Tue, 05 Oct 2010 19:10:45 +0000 bogus random entropy sources https://lwn.net/Articles/408655/ https://lwn.net/Articles/408655/ patrick_g >>> <i>I don't understand why more processors don't include a proper hardware random number generator. It's a classic case of something that is significantly easier to do in hardware, I'd think.</i><br><br> I think Intel is working on this.<br> See this link: http://www.technologyreview.com/computing/25670/ Tue, 05 Oct 2010 19:01:48 +0000
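Postscript: the on-die generator patrick_g and (in the 2012 comment at the top) cladisch refer to eventually shipped as the RDRAND instruction on Ivy Bridge. A minimal usage sketch with the customary retry loop follows; build with -mrdrnd on a CPU that has the feature, and per the discussion above treat the result as one more source to mix in, not as the only one.
<pre>
#include <stdio.h>
#include <immintrin.h>

static int rdrand64(unsigned long long *out)
{
    for (int tries = 0; tries < 10; tries++)   /* RDRAND can transiently fail */
        if (_rdrand64_step(out))
            return 1;
    return 0;
}

int main(void)
{
    unsigned long long r;
    if (!rdrand64(&r)) {
        fprintf(stderr, "RDRAND unavailable or not returning data\n");
        return 1;
    }
    printf("%016llx\n", r);
    return 0;
}
</pre>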