Hi Mick,

On Sat, 07 Apr 2018 11:21:23 +0100
Mick <michaelkintzios@×××××.com> wrote:

>So far I had been using gdbm, but I now see that emerge also added lmdb.

Same here, so I gave lmdb a try as hcache backend.

>Which one is best to use? What have you chosen?

I assume you mean for speed? I don't know, and the answer may well be
academic. But you can find some non-Mutt-specific benchmark results on
NeoMutt's website [1].

Note that the benchmark page says:

“[…] you’ll need a reasonable large number of
messages – >50k – to see anything interesting”

Using lmdb as the backend, I do not notice any difference over gdbm in
either Mutt or NeoMutt, and I doubt one really can (without measuring
it exactly – which I haven't done yet).
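
For anyone who wants to try it: in NeoMutt the backend can be picked at
runtime via the header_cache_backend variable, provided support for it
was compiled in. A minimal sketch (the cache path is only an example):

  # in your neomuttrc / muttrc
  set header_cache = "~/.cache/neomutt/hcache"
  set header_cache_backend = "lmdb"

Stock Mutt, as far as I know, fixes the backend at build time instead,
so there it comes down to the USE flags emerge was given.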


References:
[1] <https://www.neomutt.org/contrib/hcache-bench>


--
Regards,
floyd