Hey,

On Friday, 30 September 2022 02:36:05 CEST William Hubbs wrote:
> I don't know for certain about a vendor tarball, but I do know there
> are instances where a vendor tarball wouldn't work.
> app-containers/containerd is a good example of this, That is why the
> vendor tarball idea was dropped.
It is indeed not possible to verify vendor tarballs[1]. The solution the
Go developers proposed would also require network access.

> Upstream doesn't need to provide a tarball, just an up-to-date
> "vendor" directory at the top level of the project. Two examples that
> do this are docker and kubernetes.
Upstreams doing this sounds like a mess: if I understand you correctly,
they would have to maintain multiple source trees in their
repositories.
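For reference, keeping an in-tree "vendor" directory is a single command
on the upstream side. A rough sketch, assuming a standard Go module
layout (the commit message is of course made up):

```shell
# From the module root (where go.mod lives), populate ./vendor with the
# exact dependency sources pinned by go.mod/go.sum:
go mod vendor

# Upstream commits the result alongside their own sources:
git add vendor
git commit -m "vendor: sync with go.mod"

# Consumers can then build without any network access:
go build -mod=vendor ./...
```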

An alternative to vendor tarballs is modcache tarballs. These are
absolutely massive (~20 times larger, IIRC), but unlike vendor tarballs,
they are verifiable.
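A minimal sketch of the modcache approach, assuming a network-connected
machine prepares the tarball and the module's go.sum is present (the
tarball name is illustrative):

```shell
# Fill a fresh module cache with every dependency of the module:
GOMODCACHE="$PWD/modcache" go mod download

# Pack it up; this is the "modcache tarball" (large, because it keeps
# the original module zips and metadata rather than extracted sources):
tar -caf myproject-modcache.tar.xz modcache

# On the build host, unpack and verify offline against go.sum:
tar -xaf myproject-modcache.tar.xz
GOMODCACHE="$PWD/modcache" go mod verify
```

The verifiability comes from `go mod verify`, which checks the cached
module downloads against the hashes recorded in go.sum; there is no
equivalent check for an extracted vendor tree, which is the crux of [1].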
20 |
|
21 |
opinion: I see no way around it. Vendor tarballs are the way to go. For |
22 |
trivial cases, this can likely be EGO_SUM, but it scales exceedingly |
23 |
poorly, to the point of the trivial case being a very small percentage |
24 |
of Go packages. I proposed authenticated automation on Gentoo |
25 |
infrastructure as a solution to this, and implemented (a slow and |
26 |
unreliable) proof of concept (posted previously). The obvious question |
27 |
of "how will proxy maintainers deal with this" is also relatively |
28 |
simple: giving them authorization for a subset of packages that they'd |
29 |
need to work on. This is an obvious increase in the barrier of entry for |
30 |
fresh proxy maintainers, but it's still likely less than needing |
31 |
maintainers to rework ebuilds to use vendor tarballs on dev.g.o. |
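For the trivial case, the EGO_SUM route looks roughly like this in an
ebuild. This is a sketch from memory of go-module.eclass; the package,
upstream URL, and dependency entries are all made up:

```shell
# Hypothetical ebuild fragment using go-module.eclass's EGO_SUM.
EAPI=8

# Each entry mirrors a go.sum line, minus the hash; EGO_SUM must be
# set before the inherit so the eclass can compute EGO_SUM_SRC_URI:
EGO_SUM=(
	"github.com/example/dep v1.2.3"
	"github.com/example/dep v1.2.3/go.mod"
)

inherit go-module

DESCRIPTION="Hypothetical Go package"
SRC_URI="https://example.org/${P}.tar.gz
	${EGO_SUM_SRC_URI}"
LICENSE="MIT"
SLOT="0"
```

Every dependency contributes two entries like the above, which is why
this stops being manageable for anything with a non-trivial go.sum.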


[1]: https://github.com/golang/go/issues/27348
--
Arsen Arsenović