If the elevated iowait reported by iostat is on the host, you might be able to
find whatever is hogging your I/O bandwidth with iotop. Also look for
D-state (uninterruptible sleep) procs with ps aux -- note that ps auxr
would restrict the listing to running (R-state) tasks and hide them.
Are you on a software RAID?
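A minimal one-liner for spotting D-state processes (scanning the STAT
column of plain ps aux by eye works just as well):

```shell
# List processes stuck in uninterruptible sleep (STAT beginning with "D");
# these are almost always blocked waiting on disk I/O.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/'
```

iotop (run as root) then shows per-process read/write bandwidth, which
helps confirm which of them is actually responsible for the traffic.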

If you are on Linux software RAID you might check your disks for errors
with smartmontools. Other than that, the only thing I can think of is
something like a performance regression in the IDE/SCSI/SATA
controller (on the host or virtual) or in mdadm on the host. If the host
system is bogged down before the VMware instances are started, I would
suspect the former (host controller or mdadm).
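A minimal sketch of that check (requires smartmontools and root;
/dev/sda and /dev/sdb are example device names -- substitute the actual
members of your array):

```shell
# SMART health verdict and logged error history for each member disk.
for disk in /dev/sda /dev/sdb; do
    smartctl -H "$disk"         # overall health self-assessment
    smartctl -l error "$disk"   # errors the drive itself has logged
done

# Also make sure the md array is not degraded or mid-resync:
cat /proc/mdstat
```

A degraded RAID1 or an ongoing resync can slow I/O considerably even
when both disks test healthy.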

On 3/11/10, Stefan G. Weichinger <lists@×××××.at> wrote:
> On 11.03.2010 16:54, Kyle Bader wrote:
>> If you use the cfq scheduler (the Linux default) you might try turning off
>> low-latency mode (introduced in 2.6.32):
>>
>> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>>
>> http://kernelnewbies.org/Linux_2_6_32
>
> That sounded good, but unfortunately it is not really doing the trick.
> The VM still takes minutes to boot ... and this after I copied it back
> to the RAID1 array, which should in theory be faster than the
> non-RAID partition it was on before.
>
> Thanks anyway, I will test that setting ...
>
> Stefan

--
Sent from my mobile device

Kyle