On Mon, Feb 4, 2019 at 1:14 PM Joonas Niilola <juippis@×××××.com> wrote: |
>
> This could be 'exploited' with a group of friends. By exploited I mean
> small inner circles forming, where people just approve their friends'
> commits without looking at them.

It sounds like, in the proposal, you still need to get the 3 points
from already-vetted users to get credit towards a reputation of your
own. So, a group of users with no reputation could not confer
reputation on each other.
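To make that concrete, here is a minimal sketch of the vouching rule as I read it: approval points only count when they come from already-vetted users, so an isolated clique of new accounts can never bootstrap itself. The class names and the threshold of 3 are my own assumptions for illustration, not details from the actual proposal.

```python
# Hypothetical sketch, not the actual proposal's implementation.
POINTS_NEEDED = 3

class User:
    def __init__(self, name, vetted=False):
        self.name = name
        self.vetted = vetted   # e.g. an existing developer
        self.points = 0        # approval points from vetted users only

    def approve(self, other):
        # Approvals from unvetted users confer nothing.
        if self.vetted:
            other.points += 1
            if other.points >= POINTS_NEEDED:
                other.vetted = True

# A ring of sock puppets approving each other gains nothing:
a, b = User("sock1"), User("sock2")
a.approve(b)
b.approve(a)
assert not a.vetted and not b.vetted

# Three already-vetted developers vouching does the trick:
devs = [User("dev%d" % i, vetted=True) for i in range(3)]
newcomer = User("newcomer")
for d in devs:
    d.approve(newcomer)
assert newcomer.vetted
```

The key design point is that trust only flows outward from the existing vetted set, which is what blocks the "inner circle of friends" bootstrap.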

Otherwise you could have innocent/ignorant users approving each
other's work and gaining reputation without knowing what they're
doing. Alternatively, a malicious user could use sock puppets to gain
reputation trivially.

Of course a malicious user could still gain reputation by making
genuine contributions; they would just need to do so with three
accounts to have the equivalent of developer access. By that argument,
though, somebody could also maliciously become a developer. I don't
think the system has to be bulletproof - it just has to be enough of a
barrier that you don't get a ton of low-effort attacks.

My alternate proposal of having users maintain their own trust bits
for contributors has the same weakness: somebody doing enough good
work can start slipping in malware. Short of actually verifying
identity documents and so on, we aren't going to solve that problem.
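For comparison, a minimal sketch of that alternate idea: each user keeps their own per-contributor trust bits, so trust is a local judgement rather than a global score. All names here are illustrative assumptions, and the last comment marks the shared weakness described above.

```python
# Hypothetical sketch of per-user trust bits, not a real implementation.
class TrustStore:
    """One user's private list of contributors they trust."""

    def __init__(self):
        self.trusted = set()

    def trust(self, contributor):
        self.trusted.add(contributor)

    def revoke(self, contributor):
        self.trusted.discard(contributor)

    def accepts(self, contributor):
        return contributor in self.trusted

mine = TrustStore()
mine.trust("prolific-contributor")
assert mine.accepts("prolific-contributor")
assert not mine.accepts("stranger")
# The shared weakness: once good work has earned the trust bit, nothing
# here distinguishes that contributor's next, malicious, commit.
```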

--
Rich |