On Saturday, 7 April 2018 14:35:27 BST Floyd Anderson wrote:
> Hi Mick,
>
> On Sat, 07 Apr 2018 11:21:23 +0100
>
> Mick <michaelkintzios@×××××.com> wrote:
> >So far I had been using gdbm, but I now see that emerge also added lmdb.
>
> Same here, so I gave lmdb a try as hcache backend.
>
> >Which one is best to use? What have you chosen?
>
> I assume you mean for speed? I don’t know and it may become very
> academic to answer this. But you can find some non-Mutt-specific
> benchmark results on NeoMutt’s website [1].
>
> Note, the mentioned benchmark page says:
>
> “[…] you’ll need a reasonable large number of
> messages – >50k – to see anything interesting”
>
> Using lmdb as the backend, I do not notice any difference over gdbm
> in either Mutt or NeoMutt, and I doubt one really can (without
> measuring it exactly – which I haven’t done yet).
>
>
> References:
> [1] <https://www.neomutt.org/contrib/hcache-bench>

Thanks Floyd, good information.

I also switched to lmdb now and updated my USE flags accordingly for mutt. I
see neomutt gaining traction, but I am still running mutt here. Is there a
benefit from switching?

--
Regards,
Mick
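
For what it's worth, since the hcache backend is chosen at build time, the
switch on Gentoo is just a USE-flag change plus a rebuild. A rough sketch of
what I did (the exact flag names depend on the ebuild version; check
`equery uses mutt` or `emerge -pv mutt` to see what yours actually offers):

```shell
# Prefer the lmdb header-cache backend over gdbm for mutt
# (flag names assumed from the current mail-client/mutt ebuild).
echo "mail-client/mutt lmdb -gdbm" >> /etc/portage/package.use/mutt

# Rebuild mutt so the new backend takes effect.
emerge --ask --changed-use mail-client/mutt
```

Mutt itself only needs `set header_cache = ~/.cache/mutt` in the muttrc;
since the header cache is just a cache, it gets rebuilt from the mailbox
after the backend format changes.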