On Sun, 2019-02-17 at 12:54 -0600, Matthew Thode wrote:
> On 19-02-17 09:55:54, Michał Górny wrote:
> > On Sun, 2019-02-17 at 06:56 +0000, Robin H. Johnson wrote:
> > > On Sat, Feb 16, 2019 at 09:40:21AM +0100, Michał Górny wrote:
> > > 2. The uid signatures should NOT be naively exported to keyservers. They
> > > should use the CAFF method of generating a uid signature, writing it to a file,
> > > and sending it as an encrypted message to the uid address. The uid owner is
> > > responsible for decrypt + sending to servers. This ensures that the email
> > > address and key are still tied together.
> > 
> > That sounds like awful requirement of statefulness with requirement of
> > manual manipulation to me, i.e. a can of worms. Do we really need to
> > assume that Gentoo developers will be adding keys they can't use to
> > LDAP?
> > 
> 
> It could also be a bad actor, though that comes with other concerns.
> The CAFF method is the standard way of handling signatures, switching to
> ldap also switches our trust store to be based on ldap, not developer
> keys (anything can be in ldap).
> 

As Kristian put it, could you please explain what specific threat
model this addresses?

AFAIU the main purpose of the caff model is to verify that the person
whose key you are signing can access the particular e-mail address.
That certainly makes sense when you are signing an arbitrary key.
However, I don't really see how that's relevant here, and I'd rather
not add needless complexity based on cargo-cult imitation of caff.
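
To make that guarantee concrete, here is a toy sketch (purely
illustrative, not any real tooling) of why the caff flow proves
mailbox access: the signature is mailed to the uid's address and
encrypted to the key being signed, so publishing it requires
controlling both.

```python
# Toy model of the caff guarantee: the uid signature only reaches the
# keyservers if the same person controls the mailbox (to receive the
# mail) and the key (to decrypt the signature) and uploads the result.

def caff_signature_publishable(controls_mailbox: bool,
                               controls_key: bool) -> bool:
    received = controls_mailbox            # mail goes to the uid address
    decrypted = received and controls_key  # message is encrypted to the key
    return decrypted                       # owner sends the sig to keyservers
```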

In our case, the key fingerprint comes from LDAP and is directly bound
to the particular username, and therefore mailbox. I don't see it as
likely that someone would be able to edit a developer's LDAP
attributes but at the same time be unable to access their mailbox.
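
A minimal sketch of that binding (function names and the domain are my
own illustration, not the actual scripts): since the fingerprint lives
in the developer's LDAP entry, the expected uid follows directly from
the username.

```python
# Hypothetical sketch: the LDAP entry already ties the fingerprint to
# a username, so the expected uid is just <username>@gentoo.org and
# the signing script only needs to confirm the key carries that uid.

def expected_uid(username: str) -> str:
    return f"{username}@gentoo.org"

def key_matches_ldap(username: str, key_uids: list[str]) -> bool:
    return expected_uid(username) in key_uids
```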

In other words, as I see it, the caff model can help if:

1) someone manages to compromise LDAP without compromising the e-mail
service,

2) a developer accidentally puts the wrong fingerprint in LDAP,

3) a developer has a broken e-mail setup,

4) a developer is inactive.

I think cases 1)-3) are rather unlikely, and 2)-3) belong in
the 'wrong place to solve the problem' category. Case 4) practically
relies on the assumption that we don't want users to trust developers
who aren't active enough to add this signature.
52 |
|
53 |
The other side of this is added complexity on the scripting side, for |
54 |
a start. We need to store to whom we sent signatures to avoid resending |
55 |
them over and over again. Then, we need to able to force resending if |
56 |
the developer lost the mail for some reason. |
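
A rough sketch of the extra state this implies (hypothetical and
in-memory for illustration; real tooling would need persistence):

```python
# Hypothetical sketch of the bookkeeping the caff approach forces on
# the signing script: remember which fingerprints were already mailed
# a signature, and support forcing a resend when the mail was lost.

class SignatureLedger:
    def __init__(self) -> None:
        self._sent: set[str] = set()

    def should_send(self, fingerprint: str, force: bool = False) -> bool:
        # Send once per key; resend only on explicit request.
        return force or fingerprint not in self._sent

    def mark_sent(self, fingerprint: str) -> None:
        self._sent.add(fingerprint)
```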

Finally, there will be at least a few developers who won't be covered
by this because they won't care enough to add the signature to their
key.

My original goal was to cover all active developers, because users
might have their reasons to contact any of the developers, and I don't
see any reason to exclude anyone from this. It's not equivalent to
giving people access to any system, or privileges to perform any
specific action.

It's mostly about confirming which OpenPGP key should be used to send
mail to a particular e-mail address. It is the same as if you went to
the developer listing and checked key IDs there, except more automated
and using a single authority key rather than the PKI for verification
(though at least initially the authority key would itself be verified
against the PKI).
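
As a sketch of that verification model (the data shape and the
authority fingerprint are invented for illustration): a user trusts a
single authority key and accepts a developer key for an address only
if the authority signed the matching uid.

```python
# Toy model of the single-authority trust check described above: a key
# is accepted for an e-mail address only if the matching uid carries a
# signature from the one authority key the user already trusts.

AUTHORITY_FPR = "AAAA0000BBBB1111"  # placeholder fingerprint

def key_trusted_for(address: str,
                    uid_signers: dict[str, list[str]]) -> bool:
    # uid_signers maps each uid on the key to the fingerprints of the
    # keys that signed it.
    return AUTHORITY_FPR in uid_signers.get(address, [])
```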

-- 
Best regards,
Michał Górny