PCI vs PCIe x1

I am in the process of buying a SATA II raid card.
I would like to find a PCI Express x1 card, but if I can't (bang for buck), is it fine to go with a normal PCI card?
I don't know if what I read was accurate, but an article said:
PCI can support 200MB per second.
So if SATA II does 3Gb per second (which translates to 375MB per second), that seems fine to me for my desktop. The PCI card is a Promise card.
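A quick sanity check of the conversion above (the 375MB/s figure divides by 8; note that SATA actually uses 8b/10b encoding on the wire, so 10 line bits carry one payload byte and the usable rate is lower):

```python
# Convert the SATA II line rate to megabytes per second.
SATA2_LINE_RATE_GBPS = 3.0  # SATA II line rate, gigabits per second

# Naive conversion (8 bits per byte), as in the question:
naive_mb_s = SATA2_LINE_RATE_GBPS * 1000 / 8     # 375.0 MB/s

# With 8b/10b encoding, 10 line bits carry one payload byte:
payload_mb_s = SATA2_LINE_RATE_GBPS * 1000 / 10  # 300.0 MB/s usable

print(naive_mb_s, payload_mb_s)
```

Either way, the interface rate is far above what any single desktop drive of this era can sustain, so the conclusion in the question holds.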
ravenpl commented:
> PCI can support 200MB per second.
Regular PCI can handle 133MB/s, and all devices connected to that PCI bus (one PCI bus has many PCI slots) share that bandwidth.
Note, however, that today's HDDs sustain about 65MB/s at most.
So if you're going to have no more than two drives, you can go with PCI.

Otherwise, go with either PCIe x1 (or even x4), or PCI-X (present on server boards).
willcomp commented:
Here's a PCI-E card that is cheap and works rather well:

No current hard disks exceed SATAI throughput (150MB/sec), so PCI or PCI-E doesn't really matter from a performance standpoint.
Gary Case (Retired) commented:
Here's a nice PCIe x1 card that supports up to 4 drives and does RAID 0, 1, 5, 10, and JBOD:  http://www.newegg.com/Product/Product.aspx?Item=N82E16816115029

As for whether or not you need the extra bandwidth of PCIe ...

=>  It depends :-)     Current high-capacity 7200 rpm drives reach a sustained data rate in the 90MB/s range on the outer cylinders, down to about 40MB/s on the inner cylinders, with an average of about 65-70MB/s.   This is, as willcomp noted, well below the interface speed for either PCI (133MB/s) or PCIe x1 (250MB/s).

The DRIVE interface (IDE, SATA-I, or SATA-II) makes little difference here --> only a very small % of the transfers will be buffer-to-memory transfers, so the sustained data rate is MUCH more important; and clearly any of these interfaces is well above those rates.

But the BUS interface of the RAID card (PCI vs PCIe x1) MAY make a difference (as I noted, it depends) ==> depending on which RAID level you'll be using and how many disks are connected.

If you build a striped array (RAID 0 or 5), then the maximum transfer rate will be appreciably faster than that of a single drive => for a 2 drive RAID-0 or 3-drive RAID-5 the read performance will approach double the individual drive rates -- so on outer-cylinder reads that will be in the 175MB/s range.   With a 3-drive RAID-0 or 4-drive RAID-5 it will be faster yet (approaching 3 times the individual rate).   The actual performance will be somewhat less than the theoretical ... but clearly it can be faster than the PCI limits ==> and with a 4-drive RAID-0 would even exceed the PCIe x1 rate.
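The scaling argument above can be sketched with the approximate figures quoted in this thread (real arrays fall somewhat short of these ideal numbers):

```python
# Idealized streaming-read rate of a striped array vs. the bus limits.
PCI_MB_S = 133       # conventional PCI
PCIE_X1_MB_S = 250   # PCIe x1
DRIVE_MB_S = 90      # outer-cylinder sustained rate of a 7200 rpm drive

def raid_read_mb_s(n_drives: int, level: int) -> int:
    """Theoretical max streaming read: RAID 0 stripes across all drives;
    RAID 5 stripes across n-1 (one drive's worth of capacity is parity)."""
    data_drives = n_drives if level == 0 else n_drives - 1
    return data_drives * DRIVE_MB_S

print(raid_read_mb_s(2, 0))  # 180 -> already above the 133 MB/s PCI limit
print(raid_read_mb_s(3, 5))  # 180 -> same effective stripe width as 2-drive RAID 0
print(raid_read_mb_s(4, 0))  # 360 -> exceeds even PCIe x1's 250 MB/s
```

As the comment says, these are upper bounds; actual throughput will be somewhat lower, but the comparison against the bus limits still holds.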

If you build a mirrored array (RAID 1) with 2 drives, the bus speed isn't an issue.   The write speed will be no better than a single drive; and the read speed will usually only be in the same range as a single drive (it can be higher, but won't hit the 2X rate that a stripe could).   A mirrored array with 4 drives (RAID 10) could hit the same 175MB/s rate noted above ... so it could also exceed the PCI limits.

If you should decide to use 10,000 rpm drives, the sustained transfer rates are a bit higher ... and the advantage of the higher-speed PCIe bus is even more significant.

... I noted in a previous question that your intent is apparently to simply mirror (RAID 1) a pair of 500GB drives.   In that case either PCI or PCIe x1 is fine -- but I'd still use the PCIe card to provide a bit of "headroom" on the bus.   You'll always get better, more consistent, performance if you're not saturating the bus.
Why not just get an external device, e.g. a DNS-323? Mine works great and is quick.
Forgot to add ... that's the D-Link DNS-323.