On Mon, Mar 13, 2023 at 8:24 AM Dale <rdalek1967@×××××.com> wrote:
>
> According to my google searches, PCIe x4 is faster
> than PCIe x1. It's why some cards are PCIe x8 or x16. I think video
> cards are usually x16. My question is, given the PCIe x4 card costs
> more, is it that much faster than a PCIe x1?
|
It could be slower than PCIe x1, because you didn't specify the version.

PCIe uses lanes. Each lane provides a certain amount of bandwidth
depending on the version in use.

For example, a v1 4x card has 1 GB/s of bandwidth. A v4 1x card has
2 GB/s of bandwidth.
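
As a quick back-of-the-envelope in Python (the per-lane figures are
approximate per-direction numbers after encoding overhead, so treat
them as ballpark rather than spec values):

    # Approximate usable bandwidth per lane, per direction, in GB/s.
    # (v1/v2 lose 20% to 8b/10b encoding; v3+ figures are rounded.)
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 1.0, 4: 2.0, 5: 4.0}

    def pcie_bandwidth(version: int, lanes: int) -> float:
        """Rough usable bandwidth of a PCIe link in GB/s."""
        return PER_LANE_GBPS[version] * lanes

    print(pcie_bandwidth(1, 4))  # v1 4x -> 1.0 GB/s
    print(pcie_bandwidth(4, 1))  # v4 1x -> 2.0 GB/s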
|
Note that slot size is only loosely coupled with the number of lanes.
Lots of motherboards have a second 16x slot that only provides 4-8
lanes to save on the cost of a PCIe switch. You can also use adapters
to connect a 16x card to a 1x slot, or you might find a motherboard
that has an open-ended slot so that you can just fit a 16x card into
the 1x slot. It will of course only use a single lane that way.
|
So you need to consider the following:
|
1. How much bandwidth do you actually need? If you're using spinning
disks you aren't going to sustain more than 200 MB/s to a single
drive, and the odds of having 10 drives all using that much bandwidth
at once are pretty low. If you're using SSDs then you're more likely
to max them out since the seek cost is much lower. (See the sketch
after this list for the arithmetic.)
2. What PCIe version does your motherboard support? Sticking a v4
card in an old motherboard that only supports v2 is going to result in
it running at v2 speeds, so don't pay a premium for something you
won't use. Likewise, if the card's designers cut down on the number
of lanes assuming each lane would have more bandwidth, you might end
up with less than you expected.
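
As a sketch of that arithmetic (same ballpark per-lane figures as in
the example above; 10 disks at ~200 MB/s each is the worst case from
point 1):

    import math

    # Approximate usable per-lane, per-direction bandwidth in GB/s.
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 1.0, 4: 2.0, 5: 4.0}

    def lanes_needed(total_gbps: float, version: int) -> int:
        """Smallest lane count that covers the workload at this version."""
        return math.ceil(total_gbps / PER_LANE_GBPS[version])

    workload = 10 * 0.2  # 10 spinning disks, ~0.2 GB/s each
    for v in sorted(PER_LANE_GBPS):
        print(f"v{v}: {lanes_needed(workload, v)}x")
    # Prints 8x for v1, 4x for v2, 2x for v3, 1x for v4 and v5 - even
    # the worst case fits in a v2 4x or v3 2x link.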
|
Then look up the number of lanes and the PCIe version and see what you
can expect:
https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions
|
I think odds are you aren't going to want to pay a premium if you're
just using spinning disks. If you actually wanted solid state storage
then I'd also be avoiding SATA and trying to use NVMe, though doing
that at scale requires a lot of IO, and that will cost you quite a
bit. There is a reason your motherboard has mostly 1x slots - PCIe
lanes are expensive to support. On most consumer motherboards they're
only handled by the CPU, and consumer CPUs are very limited in how
many they offer. Higher end motherboards may have a switch and offer
more lanes, but they'll still bottleneck if they're all maxed out
getting into the CPU. If you buy a server CPU for several thousand
dollars, one of the main features they offer is a LOT more PCIe
lanes, so you can load up on NVMes and have them running at v4-5. (A
typical NVMe drive uses a 4x M.2 slot, and of course you can have 16x
cards offering multiples of those.)
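
If you want to see what a link actually negotiated (both the version
and the lane count can come out lower than what the card supports),
here's a minimal sketch reading the standard sysfs attributes on
Linux - the device address is a placeholder, substitute your own from
lspci:

    from pathlib import Path

    # 0000:01:00.0 is a placeholder - take your card's address from lspci.
    dev = Path("/sys/bus/pci/devices/0000:01:00.0")
    for attr in ("max_link_speed", "max_link_width",
                 "current_link_speed", "current_link_width"):
        # prints e.g. "16.0 GT/s PCIe" and "4" for a v4 4x link
        print(attr, "=", (dev / attr).read_text().strip())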
|
The whole setup is pretty analogous to networking. If you have a
computer with 4 network ports you can bond them together and run them
to a switch that supports this with 4 cables, and get 4x the
bandwidth. However, you can also get a single connection to run at
higher speeds (1Gb/s, 2.5Gb/s, 10Gb/s, etc.), and you can do both.
PCIe lanes are just like bonded network cables - they are pairs of
signal wires that use differential signaling, just like the twisted
pairs in an Ethernet cable. Longer slots just add more of them.
Everything is packet switched, so if there are more lanes it just
spreads the packets across them. Higher versions mean higher speeds
in each lane.
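
A toy illustration of that last point (real PCIe stripes bytes across
the lanes at the physical layer; this just shows why throughput
scales with the lane count while the version sets each lane's speed):

    def stripe(data: bytes, lanes: int) -> list[bytes]:
        """Deal a byte stream out round-robin across `lanes` channels."""
        return [data[i::lanes] for i in range(lanes)]

    payload = bytes(range(16))
    for i, chunk in enumerate(stripe(payload, 4)):
        print(f"lane {i}: {chunk.hex()}")
    # Each lane carries a quarter of the bytes, so four lanes move the
    # same data in a quarter of the time one lane would.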
|
--
Rich