On Saturday 23 June 2007 00:47:27 Duncan wrote:
> Peter Humphrey <prh@××××××××××.uk> posted
> 200706221910.44194.prh@××××××××××.uk, excerpted below, on Fri, 22 Jun
> 2007 19:10:44 +0100:
> >> What I'm wondering, of course, is whether you have NUMA turned on when
> >> you shouldn't, or don't have core scheduling turned on when you
> >> should, thus artificially increasing the resistance to switching
> >> cores/cpus and causing the stickiness.
> >
> > I don't think so.
>
> Yeah, now that you've clarified that it's sockets and confirmed settings,
> you seem to have it right.

Here's an example of silly output from top. In this case I did this:
# schedtool -a 0x1 5280
to pin 5280 onto CPU0; then, when I didn't get any better loadings, I
restored the affinity to its original value:
# schedtool -a 0x3 5280

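For the record, schedtool's -a argument is a hex bitmask of allowed CPUs (0x1 = CPU 0 only, 0x3 = CPUs 0 and 1). A minimal Python sketch of the same change via the kernel's sched_setaffinity(2) wrapper; the mask values mirror the commands above, everything else is illustrative:

```python
import os

def mask_to_cpus(mask: int) -> set[int]:
    """Convert a schedtool-style hex affinity mask to a set of CPU numbers."""
    return {bit for bit in range(mask.bit_length()) if mask >> bit & 1}

print(sorted(mask_to_cpus(0x1)))  # [0]    -> CPU 0 only
print(sorted(mask_to_cpus(0x3)))  # [0, 1] -> CPUs 0 and 1

# The same change schedtool makes, done directly from Python (Linux only).
# PID 0 means "the calling process"; pass a real PID (e.g. 5280) otherwise.
if hasattr(os, "sched_setaffinity"):
    original = os.sched_getaffinity(0)
    os.sched_setaffinity(0, mask_to_cpus(0x1))  # pin to CPU 0
    os.sched_setaffinity(0, original)           # restore the old mask
```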
22 |
Here's what top showed then. Look at the /nice/ values on lines 3 and 4, and
compare them with the %CPU and P (processor) fields of processes 5279 and
5280. This has me deeply puzzled:
25 |

top - 09:04:59 up 23 min, 5 users, load average: 3.60, 4.79, 3.91
Tasks: 124 total, 2 running, 122 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.3%sy, 99.7%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4088968k total, 1822644k used, 2266324k free, 218296k buffers
Swap: 4176848k total, 0k used, 4176848k free, 735708k cached

  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ P COMMAND
 5279 prh   34 19 60256  38m 3600 S   50  1.0 6:53.97 1 setiathome-5.12
 5280 prh   34 19 60252  38m 3612 S   50  1.0 6:54.08 0 setiathome-5.12
 3692 root  15  0  144m  63m 7564 S    0  1.6 0:36.92 0 X
 5272 prh   15  0  4464 2636 1692 S    0  0.1 0:00.70 1 boinc
 5286 prh   15  0 93016  21m  14m S    0  0.5 0:00.66 0 konsole
 5322 prh   15  0  145m  13m  10m S    0  0.3 0:03.01 0 gkrellm2
10357 root  15  0 10732 1340  964 R    0  0.0 0:00.01 1 top
[snip system processes]
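For anyone wanting to dig further: top derives those per-CPU %ni figures from the kernel's /proc/stat, whose cpuN lines list cumulative jiffies per state in the order user, nice, system, idle, iowait, irq, softirq, steal (see proc(5)). A minimal sketch of reading them; the sample numbers below are made up, shaped like the output above:

```python
# Field order of the per-CPU counters in /proc/stat (Linux, proc(5)).
FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal")

def parse_proc_stat(text: str) -> dict[str, dict[str, int]]:
    """Return {cpu_name: {state: jiffies}} for each per-CPU line of /proc/stat."""
    cpus = {}
    for line in text.splitlines():
        parts = line.split()
        # Skip the aggregate "cpu" line; keep cpu0, cpu1, ...
        if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
            cpus[parts[0]] = dict(zip(FIELDS, map(int, parts[1:])))
    return cpus

# Made-up sample: cpu0 mostly idle, cpu1 almost entirely nice time,
# the same shape as the Cpu0/Cpu1 lines in the top output above.
sample = """cpu  5 990 20 1005 0 0 0 0
cpu0 5 0 15 1000 0 0 0 0
cpu1 0 990 5 5 0 0 0 0
"""
stats = parse_proc_stat(sample)
print(stats["cpu1"]["nice"])  # 990 -- nearly all of cpu1's time is nice time
```

On a live system you would read open("/proc/stat").read() instead of the sample string, and diff two snapshots to get percentages, which is what top does between refreshes.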
46 |

I don't think this is a scheduling problem; it goes deeper than that: the
kernel doesn't have a consistent picture of which processor is which.

50 |
--
Rgds
Peter Humphrey
Linux Counter 5290, Aug 93
--
gentoo-amd64@g.o mailing list