I just found this wonderful summary of articles by Jeremy and wanted to share some of my thoughts on the topic.
First, let's talk about the death of RAID. I think this is far from the case, especially if you consider software RAID.
For many workloads you would want RAID just for the sake of the BBU (battery-backed write cache). As Jeremy mentioned, RAID is cheap these days if you buy the right one, and it can offer a substantial improvement for write-intensive workloads through safe write buffering and write merging.
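For instance, on a typical LSI-based controller (Dell PERC and similar) you can check whether the battery and write-back cache are actually doing their job with the MegaCli utility. A rough sketch, assuming MegaCli is installed and you run it as root; adapter and logical-drive selectors may need adjusting for your box:

```shell
# Check battery (BBU) state on all adapters
MegaCli -AdpBbuCmd -GetBbuStatus -aALL

# Show the cache policy of all logical drives (look for WriteBack)
MegaCli -LDGetProp -Cache -LAll -aALL

# Enable write-back caching on all logical drives (only safe with a healthy BBU)
MegaCli -LDSetProp WB -LAll -aALL
```

If the controller falls back to write-through because the battery is dead or charging, you lose exactly the write-buffering benefit described above, so this is worth monitoring.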
Performance is another story. RAID is usually the easiest way to get extra performance from your I/O subsystem. Spreading the database among, say, 10 commodity boxes is often expensive for existing applications; even for new applications it adds development time and complexity, and it is quite possible nobody inside the company is skilled enough to work with distributed systems.
I think too many of these arguments assume commodity hardware and really smart people, while in most cases tasks have to be solved with commodity hardware and commodity people.
RAID can indeed be slower than direct disks. For example, for BoardReader, which now indexes almost a billion forum posts, we decided to go with raw drives for the search index because it is far faster than using software RAID. But this is only possible because Sphinx allows highly parallel architectures. The boxes we use actually do have RAID available, which is used for the OS hard drive.
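To illustrate the parallel approach, Sphinx lets you place index chunks on separate raw drives and query them all through one distributed index. A rough sphinx.conf sketch; index names, sources, and mount points here are made up for illustration:

```ini
# Plain indexes, each living on its own drive (hypothetical mount points)
index posts_d1
{
    source = posts_src_d1
    path   = /mnt/disk1/sphinx/posts
}

index posts_d2
{
    source = posts_src_d2
    path   = /mnt/disk2/sphinx/posts
}

# Distributed index that fans a query out over both chunks
index posts
{
    type  = distributed
    local = posts_d1
    local = posts_d2
}
```

Each drive then serves sequential reads for its own chunk, which is how raw drives can beat a striped array for this workload.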
Indeed, using software RAID for the OS partition is what we do even on low-end boxes. It has zero cost and offers redundancy in case of a hard drive failure.
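As a sketch, this is roughly how such a zero-cost mirror is set up with Linux software RAID; the device and partition names are assumptions, so adjust them for your box:

```shell
# Create a RAID-1 mirror from two partitions (assumed names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial sync and check array health
cat /proc/mdstat
mdadm --detail /dev/md0

# Persist the array definition so it assembles on boot
# (some distributions use /etc/mdadm/mdadm.conf instead)
mdadm --detail --scan >> /etc/mdadm.conf
```

With the mirror in place, a single failed OS drive no longer takes the box down; you replace the drive and re-add it to the array.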
Now regarding infrastructure. When called in for consulting, I often ask how large the system needs to grow and how much effort people are willing to spend on infrastructure, because this largely defines what type of hardware can be used and in what fashion. You can build infrastructure that uses crappy boxes as a redundant array of inexpensive servers, essentially moving redundancy one level up from the disk. That, however, is rather expensive and can only be justified by extreme hardware needs. In fact, even Google, as far as I remember, uses decent servers with RAID for their MySQL installations.
I certainly agree with Jeremy on the "commodity does not mean crappy" point. There is usually a sweet spot where you can get well-performing hardware at a good price/performance ratio. We use a lot of Dell PowerEdge 2950s with 6 hard drives and 16GB of RAM.
With other vendors I see people using dual dual-core Opterons with 32GB of RAM and 10-14 hard drives. Both solutions have comparable price/performance, though the choice between them depends quite a lot on your application.
I think MySQL is pretty well matched by available hardware these days. Since in many cases it does not use many CPUs or many hard drives very efficiently, going much higher up the hardware range does not buy you much anyway.
Not all needs are satisfied that well by vendors, though. Dell, for example, for some reason fails to provide a cheap 32GB memory option even with their new Opteron servers. They also do not provide a high-volume internal storage option: try to find a case with 12 or more hot-swap SATA hard drives in the Dell range, while you can buy appropriate cases from Supermicro and plenty of other vendors. I think this is where technical possibilities clash with sales needs: there is more revenue in up-selling to external direct-attached storage or a SAN.