On Sat, April 20, 2013 17:33, Alan McKinnon wrote:
> On 20/04/2013 17:00, Tanstaafl wrote:
>> Thanks for the responses so far...
>>
>> Another question - are there any caveats as to which filesystem to use
>> for a mail server, for virtualized systems? Or do the same
>> issues/questions apply (i.e., does the fact that it is virtualized not
>> change anything)?
>>
>> If there are none, I'm curious what others prefer.
>>
>> I've been using reiserfs on my old mail server since it was first set up
>> (over 8 years ago). I have had no issues with it whatsoever, and even
>> had one scare with a bad UPS causing the system to experience an unclean
>> shutdown - but it came back up, auto-fsck'd, and there was no 'apparent'
>> data loss (this was a very long time ago, so if there had been any
>> serious problems, I'd have known about it long ago).
>>
>> I've been considering using XFS, but have never used it before.
>>
>> So, anyway, opinions are welcome...
>
>
> Virtualization can change things, and it's not really intuitive.
>
> Regardless of what optimizations you apply to the VM, and regardless of
> what kind of virtualization is in use on the host, you are still going
> to be bound by the disk and fs behaviour of the host. If VMware gives
> you a really shitty host driver, then something really shitty is going
> to be the best you can achieve.

This can be improved by not using file-backed disk devices.
I'm not sure if ESXi can do this. XenServer supports LVM-backed disk
devices, which reduce the overhead for the FS significantly.
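For illustration, with plain Xen/xl (I'm assuming xl syntax here rather than XenServer's own tooling, and the VG and device names are made up), an LVM-backed disk looks roughly like this:

```
# On the host: carve out a logical volume instead of a disk image file
#   lvcreate -L 20G -n mailvm-disk vg0
# In the domU config: hand that LV to the guest as a raw block device
disk = [ 'phy:/dev/vg0/mailvm-disk,xvda,w' ]
```

The guest then puts its filesystem directly on the LV, so writes don't go through a second filesystem on the host.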
> Disks aren't like e.g. NICs, you can't easily virtualize them and give the
> guest exclusive access in the style of para-virtualization (I can't
> imagine how that would even be done).

There is one option for this, but then you need to pass the whole
disk device to the VM. In other words, you pass "sda" to the VM instead
of a file or partition.
To avoid a lousy I/O driver on the host, you could try passing the
disk controller directly to the VM. But then the VM has its own set of
disks that cannot be used for other VMs.
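As a sketch of the two options (again xl syntax; the device name and PCI address are just placeholders):

```
# Whole-disk passthrough: the guest gets the raw disk, not a file or partition
disk = [ 'phy:/dev/sdb,xvda,w' ]

# Or PCI passthrough of the controller itself; the controller, and every
# disk attached to it, then belongs to that single guest
pci = [ '0000:03:00.0' ]
```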


> FWIW, I have two mail relays (no mail storage) running old postfix
> versions on FreeBSD. I expected throughput to differ when virtualized on
> ESXi, but in practice I couldn't see a difference at all - maybe the
> mail servers were very under-utilized. Considering this pair deals with
> anything between 500,000 and a million mails a day total, I would not
> have considered them "under-utilized". Just goes to show how opinions
> are often worthless but numbers buy the whiskey :-)

If Postfix only passes emails through, then it only uses the mail spool
for temporary storage. For that, it doesn't require a lot of disk I/O.
Most filtering is done in memory, afaik.

My experience with ESXi is that it can have issues with networking when
the versions on the hosts don't match exactly, when the time is not synced
correctly, or when VMs are moved around a lot by an admin who likes to
play with that...
Doing large multi-system installations on an ESXi cluster while the VMs
are being moved around can occasionally fail because of a bad network
layer.

--
Joost

>
>
> --
> Alan McKinnon
> alan.mckinnon@×××××.com