On Sun, 2019-03-10 at 17:53 +0100, Thomas Deutschmann wrote:
> Hi,
> 
> I know I am late but...
> 
> 1) I still don't understand the motivation for this. What will
> change/improve/be possible in future vs. status quo?
|
The motivation is that it will let people easily verify the keys used by
Gentoo developers (meaning 'people actually committing to the Gentoo
systems'). The status quo is that users need to jump through hoops to
verify the key used by a developer, e.g. by either meeting him in person
or finding his fingerprint via api.g.o.
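
For illustration, the closest thing to an easy check today is a
per-address Web Key Directory lookup; a rough sketch, where the address
is a placeholder and GnuPG >= 2.1.23 (WKD support) is assumed:

```shell
# Fetch a developer's key via Web Key Directory (WKD).
# 'developer@gentoo.org' is a placeholder address; substitute a real one.
# This needs network access, so it may fail offline.
gpg --auto-key-locate clear,wkd --locate-keys developer@gentoo.org

# The printed fingerprint still has to be compared against the one
# listed on https://api.gentoo.org/ before the key is trusted.
echo "Now compare the fingerprint against api.gentoo.org"
```

This is exactly the kind of manual, per-key legwork the Authority Keys
are meant to replace with a single certification chain.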
|
> 2) This sounds like a blackbox only a few people know/understand:
> 
> > +The single L1 Authority Key is used only to (manually) certify the L2
> > +Keys, and is kept securely offline following the Infrastructure policies
> > +on protecting primary keys.
> 
> However, the L1 key is critical, especially if all devs will sign that
> key. E.g. this will allow people with access to L1 to establish a new
> tree without the knowledge of everyone who signed the key but the new
> tree will be able to issue new signatures and everyone will accept them
> because L1 is trusted. Unacceptable. Process must be clear for
> _everyone_. No blackbox.
|
I don't understand what you mean here. It is inevitable that a person
having access to the L1 key will be able to divert the trust. Ergo, we
want to protect the key just like we protect service keys.
|
If you mean that every developer should have access to the L1 key, then
you're introducing over 150 attack vectors. Not to mention that the
majority of developers don't understand much of OpenPGP, nor care to, so
any attacker could trivially find a developer who doesn't protect his
copy of (or access to) the key.
|
There is no solution that doesn't involve having at least one person who
could abuse the key. We can't do anything without putting trust
in *someone*, and in this particular case Infra makes sense because they
need to be trusted for a dozen other reasons.
|
> > +The fingerprint of this key is published
> > +on the Gentoo website and users are requested to sign this key to enable
> > +key validity via Authority Keys.
> 
> This is problematic. Nobody should ever sign something because it was
> published somewhere. I know this will be a hard challenge but I believe
> the L1 key can only be created if infra people will meet in person:
> 
> - This will guarantee that nobody will take a copy of the L1 key,
> assuming we trust these people (maybe do this during a Gentoo conference
> so other people can watch the ceremony).
> 
> - We will sign L1 only because it was signed by infra person X whom we
> met in person. E.g. nobody should sign the L1 key who hasn't met anyone who
> already signed that key. Because mgorny and antarus for example never
> met someone else according to the current WoT graph, the trust anchor will
> probably be robbat2 -> k_f -> ... for most people. But signing something
> because you were asked to do so without *any* trust anchor should be a
> no go.
|
This will render the system unusable for the huge majority of users.
Effectively, it will render the proposal useless. People would still end
up jumping through hoops to mark individual keys trusted in order to
send encrypted mail or verify anything, and your 'craving' for security
would effectively reduce the actual security of user systems.
|
> My main criticism, however, is that this system will create a false
> sense of security (authenticity to be precise):
> 
> Let's say Gentoo has a developer named Bob.
> 
> Bob's local system will get compromised. Attacker will gain access to
> Bob's SSH key and will also find Bob's Gentoo LDAP password.
> 
> With the SSH key, the attacker will be able to connect to
> dev.gentoo.org. The LDAP password will allow the attacker to add or
> replace Bob's GPG fingerprint.
> 
> Thanks to the new system, an automatic process will sign that new GPG key.
> 
> The attacker is now able to manipulate an ebuild for example and push
> that change. If no one happens to review the commit for some reason and
> notice the malicious change, the attack will be successful, because the
> commit has a valid signature.
> 
> That's basically status quo (see #1).
|
Exactly. Signing doesn't make any difference here. Once the attacker
adds a new fingerprint to LDAP, the system will accept his commits. Both
the rsync and git distribution will sign the resulting repository, and
all users will happily use it.
|
> Once we detect the compromise, we would disable Bob's access and revoke
> all signatures. But this doesn't matter anymore.
> 
> Also keep in mind that currently we only validate last commit. If
> another developer will add his/her commit on top of the attacker's
> change, the attacker's key doesn't matter anymore.
|
Yes, this is also true but still completely irrelevant to the proposal.
|
> My point is, you cannot automate trust. Yes, if someone has to change
> his/her key it will be a pain for everyone... but that's part of the
> game. The difference would be that nobody would sign that new key the
> attacker added to LDAP because it wasn't signed by Bob's previous key
> everyone trusted. So everyone would ping Bob asking what's going on. If
> this were a legitimate key change, Bob would explain that (and
> probably sign that new key with the old key or he would have to
> establish a new WoT). If this wasn't a legitimate key change, we would
> notice...
> 
> I am basically back at #1. Why do we need GLEP 79? It doesn't improve
> anything for real aside from adding a lot of complexity. Wrong?
> 
|
Yes, you are. You've basically described everything that's completely
irrelevant to this GLEP, and now you dismiss it because it doesn't fix
a problem that wasn't addressed here in the first place.
|
The point is that it adds a well-described way of determining which keys
are currently used by apparent Gentoo developers. It's not perfect but
it's better than not having any reasonably accessible way at all. The
GLEP describes the terms, and you are free not to use this key if you
don't agree with them.
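
To show what users gain: once the published fingerprint has been
verified out-of-band, a local certification of the L1 key plus full
ownertrust makes every key it (transitively) certifies valid. A rough
sketch, using a placeholder fingerprint, not the real key:

```shell
# Placeholder fingerprint -- substitute the real L1 fingerprint
# published on the Gentoo website after verifying it out-of-band.
FPR="0123456789ABCDEF0123456789ABCDEF01234567"

# Import the key, make a local (non-exportable) certification, and
# grant full ownertrust (level 5) so keys it certifies become valid too.
# These commands will fail for the placeholder value; that is expected.
gpg --recv-keys "$FPR"
gpg --quick-lsign-key "$FPR"
echo "${FPR}:5:" | gpg --import-ownertrust

echo "Substitute the real fingerprint before use."
```

A local signature (lsign) is deliberate here: it marks the key valid on
this machine without publishing a certification to the world.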
|
Yes, if Gentoo systems suffer a heavy compromise then this system can be
abused. However, it also provides a clear way of revoking the involved
signatures and/or keys.
|
-- 
Best regards,
Michał Górny |