Hello, Adam.

Thanks for the reply.

On Sun, Mar 19, 2017 at 11:45:34 +1100, Adam Carter wrote:
> On Sun, Mar 19, 2017 at 3:36 AM, Alan Mackenzie <acm@×××.de> wrote:

> > I've just bought myself a Samsung NVMe 960 EVO M.2 SSD.
> > <snip>
> > Some timings:
> >
> > An emerge -puND @world (when there's nothing to merge) took 38.5s. With
> > my mirrored HDDs, this took 45.6s. (Though usually it takes nearer a
> > minute.)
> >
> > An emerge of Firefox took 34m23s, compared with 37m34s with the HDDs.
> > The lack of the sound of the HDD heads moving was either disconcerting
> > or a bit of a relief, I'm not sure which.
> >
> > Copying my email spool file (~110,000 entries, ~1.4 GB) from SSD -> SSD
> > took 6.1s. From HDD RAID -> HDD RAID it took 30.0s.
> >
> > <snip>
> >

> I was also hoping for more of a speedup when I got mine, but of course it
> only helps when the system is IO bound. It's great for loading VMs.

> It may be mandatory with NVMe, but you can check that multiqueue is set up
> and working with:
> # cat /proc/interrupts | egrep '(CPU|nvm)'
>            CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
>  30:      21596      0      0      0      0      0      0      0  PCI-MSI 1572864-edge  nvme0q0, nvme0q1
>  40:          0  12195      0      0      0      0      0      0  PCI-MSI 1572865-edge  nvme0q2
>  41:          0      0  12188      0      0      0      0      0  PCI-MSI 1572866-edge  nvme0q3
>  42:          0      0      0  13696      0      0      0      0  PCI-MSI 1572867-edge  nvme0q4
>  43:          0      0      0      0  11698      0      0      0  PCI-MSI 1572868-edge  nvme0q5
>  44:          0      0      0      0      0  45820      0      0  PCI-MSI 1572869-edge  nvme0q6
>  45:          0      0      0      0      0      0  10917      0  PCI-MSI 1572870-edge  nvme0q7
>  46:          0      0      0      0      0      0      0  12865  PCI-MSI 1572871-edge  nvme0q8

> If it's not set up, there'll be just a single IRQ/core handling all the IO.

That, indeed, seems to be the case. When I do cat /proc/interrupts |
egrep '(CPU|nvm)', I get just the header line with one data line:

           CPU0   CPU1   CPU2   CPU3
 17:          0      0     15  14605   IO-APIC  17-fasteoi  ehci_hcd:usb1, nvme0q0, nvme0q1

I'm kind of feeling a bit out of my depth here. What are the nvme0q0,
etc.? "Queues" of some kind? You appear to have nine of these things;
I've just got two. I'm sure there's a fine manual I ought to be
reading. Do you know where I might find this manual?
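
To save squinting at the output, the queue interrupts can be counted
mechanically. A rough sketch (count_nvme_queues is a name I've made up;
it just counts the nvmeXqY tokens shown above):

```shell
#!/bin/sh
# Count the nvmeXqY queue-interrupt tokens in /proc/interrupts.
# Several of them, spread over the CPU columns, means multiqueue is
# active; one or two sharing a single IO-APIC line (as in my output
# above) suggests the driver fell back to one legacy interrupt.
count_nvme_queues() {
    grep -o 'nvme[0-9]*q[0-9]*' "${1:-/proc/interrupts}" | wc -l
}
```

Run with no argument it reads the live /proc/interrupts; your listing
above would give 9, mine 2.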

When I look at the entire /proc/interrupts, there are just 30 lines
listed, and I suspect there are no more than 32 interrupt numbers
available. Is there any way I can configure Linux to give my SSD more
than one interrupt line to work with?

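Relatedly, the number of hardware queues the block layer actually
created can be read from sysfs, independently of the interrupt table.
A sketch, assuming a blk-mq kernel where each queue shows up as a
subdirectory of /sys/block/<dev>/mq (count_hw_queues is my own name):

```shell
#!/bin/sh
# Count blk-mq hardware queues for a block device: on blk-mq kernels
# each hardware queue appears as a numbered subdirectory under mq/.
count_hw_queues() {
    ls -d "$1"/mq/*/ 2>/dev/null | wc -l
}
```

e.g. count_hw_queues /sys/block/nvme0n1.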
> FWIW
> # hdparm -tT /dev/nvme0n1

> /dev/nvme0n1:
>  Timing cached reads:        9884 MB in  2.00 seconds = 4945.35 MB/sec
>  Timing buffered disk reads: 4506 MB in  3.00 seconds = 1501.84 MB/sec

I get:

/dev/nvme0n1:
 Timing cached reads:        4248 MB in  2.00 seconds = 2124.01 MB/sec
 Timing buffered disk reads: 1214 MB in  3.00 seconds =  404.51 MB/sec

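(In case anyone wants to sanity-check those figures: the rate hdparm
prints is simply MB transferred divided by elapsed seconds. A trivial
helper, mb_per_sec being a made-up name:)

```shell
#!/bin/sh
# hdparm's rate figure is MB transferred / elapsed seconds; tiny
# discrepancies (2124.01 vs 2124.00) come from hdparm dividing by the
# exact elapsed time rather than the rounded 2.00 it prints.
mb_per_sec() {
    awk -v mb="$1" -v s="$2" 'BEGIN { printf "%.2f\n", mb / s }'
}
```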
So my "cached reads" speed is (a little under) half of yours. This is
to be expected, since my PCIe lanes are only version 2 (and yours are
probably version 3). But the "buffered disk reads" are much slower. Is
this just the age of my PC, or might I have something suboptimally
configured?

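For the record, the rough per-direction PCIe bandwidth behind that
estimate works out as transfer rate x lanes x line-code efficiency
(8b/10b for PCIe 2.0, 128b/130b for 3.0). A sketch, with a helper name
of my own invention:

```shell
#!/bin/sh
# Rough per-direction PCIe bandwidth in MB/s:
#   GT/s per lane * lanes * (payload bits / line bits) * 1000 / 8
# PCIe 2.0 x4: 5 GT/s, 8b/10b    -> ~2000 MB/s
# PCIe 3.0 x4: 8 GT/s, 128b/130b -> ~3938 MB/s
pcie_mb_per_sec() {
    awk -v gt="$1" -v lanes="$2" -v n="$3" -v d="$4" \
        'BEGIN { printf "%.0f\n", gt * lanes * (n / d) * 1000 / 8 }'
}
```

That is roughly the 2:1 ratio between the two sets of figures above.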
-- 
Alan Mackenzie (Nuremberg, Germany).