Wed Oct 15 18:54:11 BRT 2008

RAID performance in a 3 SATA disk configuration

This benchmark was inspired by the one published at
http://linux-raid.osdl.org/index.php/Performance

The main differences are the test machine's hardware, the number of disks,
and the kernel version:

- 3.40GHz Pentium 4 with 1MB cache;
- 4GB of RAM;
- Intel S875WP1 motherboard with 2 onboard SATA controllers (Intel on-chip
  ICH5, serviced by the ata_piix driver, and Promise SATA150TX4, serviced by
  the sata_promise driver), each controller connected internally to a
  different PCI bus (so we have roughly 133MBps of total bandwidth per
  controller);
- 3 SATA ST3500641AS 500GB disks, with 2 disks connected to the ICH5
  controller and the other one connected to the Promise SATA150TX4 (so we
  avoid being bottlenecked by the PCI bus bandwidth as much as possible);
- Linux kernel 2.6.24, loaded by booting from a Gentoo 2008.0 Mini CD.

I employed the same benchmarking tools and method used in the original
benchmark cited above, but I also included the RAID10,o2 benchmark, since
I'm running a kernel version that supports it.

Other details:

- I used the same md chunk size as the original benchmark (256k);
- the kernel I/O scheduler was left at the default (anticipatory) for all
  disks;
- I did not turn off NCQ on the disks, because the "echo" recommended in the
  benchmark cited above returned "permission" errors (even as root), so I
  guess my kernel and/or hardware do not support it;
- I left read-ahead at the default (i.e., did not run "blockdev --setra"),
  ditto for the RAID stripe size.

Here are my results:

RAID type     sequential read   random read   sequential write   random write
-----------   ---------------   -----------   ----------------   ------------
Single disk                45            19                 41             37
RAID0                      81            69                121             96
RAID1                      44            56                 41             33
RAID10,n2                  57            54                 60             50
RAID10,f2                  71            53                 54             46
RAID10,o2                  44            60                 60             48

From the above results (now including RAID10,o2) I must conclude that,
performance-wise, RAID10,f2 remains unbeatable if you need the safety
provided by disk redundancy, and RAID0 is the champion otherwise.

Comments, etc.: contact me at .
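
P.S.: for reference, a minimal sketch of how two of the arrays above can be
created with mdadm, assuming the three disks showed up as /dev/sda, /dev/sdb
and /dev/sdc (the device names and /dev/md0 are assumptions, and these are
not necessarily the exact commands I ran):

  # 3-disk RAID0, using the same 256k chunk size as the rest of this benchmark
  mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=3 \
        /dev/sda /dev/sdb /dev/sdc

  # 3-disk RAID10 with the far-2 layout (the "RAID10,f2" line in the table)
  mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=256 --raid-devices=3 \
        /dev/sda /dev/sdb /dev/sdc

The other RAID10 rows in the table differ only in the --layout value (n2 for
the default "near" layout, o2 for "offset").

==Eof==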