On Wed, Aug 31, 2011 at 7:58 PM, Adam Carter <adamcarter3@×××××.com> wrote:
>> I just might have an opportunity to set up a Gentoo
>> system using 10G Ethernet (fiber). The .39
>> kernel lists this hardware [1].
>>
>> Does anyone have any experience with this hardware?
>> Does it work? Did you make any bandwidth measurements?
>> What type of fiber (multimode/singlemode) (ST/SC) did you
>> use?
>>
>> Any comments? Is the kernel the bottleneck, or your application?
>
> Intel is active in Linux kernel development, and its Linux drivers
> seem to be very good.
>
> When testing Intel copper gig interfaces on an Intel-based firewall
> (HP DL380 G5, 8-core box), I was able to drive a core to 100% with
> ~330Mb/s of small packets. The limiting factor appears to be packet
> rate, and the consequent processing of interrupts (one IRQ handled
> by one core). I don't think a 10Gig interface would pass much more
> than that, due to similar limitations.
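
For anyone wanting to see that on their own box, the per-CPU interrupt
counters make it visible (eth0 below is just a placeholder for your
interface name):

  $ grep eth0 /proc/interrupts

With a single MSI vector you should see one CPU column climbing while
the others stay flat.
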
>
> Tweaking the e1000 driver options RxDescriptors, TxDescriptors and
> RxIntDelay pushed it up to ~350Mb/s. MSI was enabled, so there was
> no interrupt sharing.
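
For the archives: those knobs can be made persistent by dropping a
line like this into /etc/modprobe.d/ (the values are only
illustrative; check "modinfo e1000" for the valid ranges on your
hardware):

  options e1000 RxDescriptors=1024 TxDescriptors=1024 RxIntDelay=64
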
>
> So if you're running normal-sized packets, you should be OK. Otherwise
> you may want to look at what irqbalance can do for you (I didn't try
> it at the time).
>
> Also don't forget stuff like Large Receive Offload in your kernel.
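
If the ethtool version and driver are new enough, those offloads can
also be checked and flipped at runtime rather than rebuilding (eth0 is
again a placeholder):

  # ethtool -k eth0
  # ethtool -K eth0 lro on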
|
I've overheard IRC conversations discussing multi-queue network cards
in the context of multi-core systems. My educated guess, based on what
you mention, is that each queue in the card would raise a different
interrupt. Each interrupt might then be handled by a different core,
so you'd see a scalable improvement there.
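
If that guess is right, a multi-queue card should show up in
/proc/interrupts as one vector per queue (names vary by driver;
something like eth0-TxRx-0, eth0-TxRx-1, and so on), and each vector
can be pinned to a core by hand with the usual smp_affinity bitmask
(the IRQ numbers below are made up):

  # echo 1 > /proc/irq/44/smp_affinity    (queue 0 -> CPU0)
  # echo 2 > /proc/irq/45/smp_affinity    (queue 1 -> CPU1)

or left to irqbalance to spread around.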
|
-- |
:wq |