On Fri, 21 Feb 2014 16:40:46 -0600 Canek Peláez Valdés wrote:
> >>>> Of course, the larger a project is, the more the *potential* number
> >>>> of bugs increases, but so what? With enough developers, users and
> >>>> testers, all bugs are *potentially* squashed.
> >>>
> >>> Agreed, but I know of enough large projects with large development teams
> >>> and even more users that don't get the most basic bugs fixed.
> >>> Quantity is not equivalent to quality.
> >>
> >> I also agree with that. My point is that the systemd project has
> >> enough *talented* developers to do it.
> >>
> >> You can disagree, of course.
> >
> > Talented developers, maybe.
> > But not talented designers.
>
> That's subjective. For me (and many others), the design of systemd is sound.

Thanks to your explanation of socket activation, it is no longer
subjective: systemd's design flaws can be discussed in purely
technical terms; see below.

> > If I were to have sockets created in advance (does it work with TCP/IP
> > sockets?), I would get timeouts on the responses, which would lead to
> > some services not starting correctly and ending up in limbo...
>
> You don't know how socket activation works, do you? At boot time,
> if a service asks for a socket on port 1234 (and yes, it works with
> TCP/IP sockets), systemd opens the socket for the service, and the
> service *does not start yet*.
>
> When the *first* connection arrives on the socket, systemd starts the
> service, and when it finishes starting, systemd passes the opened
> socket to it as an fd. Done: now the service has control of the
> socket, and keeps it until the service terminates; not when the
> connection closes (although you can configure it that way), but when
> the *service* terminates.
>
> If several connections arrive at the socket *before* the service
> finishes starting up, the kernel automatically queues them, and when
> systemd hands the socket to the service, the service handles all of
> them.
>
> Not a *single* connection is lost. Well, if a godzillion
> connections arrive before the service finishes starting up, the kernel
> queue is finite and some would be lost, but that would take a lot
> of connections arriving in a window of a few microseconds.
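
The queuing behaviour described in the quote can be sketched with plain
Python sockets (a toy stand-in for systemd and the service; the backlog
size and request payloads are made up):

```python
import socket

# Open the listening socket first, as systemd would at boot; the
# "service" does not call accept() yet.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # OS-chosen port, loopback only
listener.listen(8)                # kernel queue (backlog) of 8
port = listener.getsockname()[1]

# Three clients connect while the "service" is still starting up,
# i.e. before anyone has called accept(); the kernel queues them.
clients = []
for i in range(3):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"request %d" % i)
    clients.append(c)

# The "service" finally starts and drains the queue: nothing was lost.
served = []
for _ in range(3):
    conn, _addr = listener.accept()
    served.append(conn.recv(64))
    conn.close()

print(served)  # all three queued requests arrive intact
for c in clients:
    c.close()
listener.close()
```

As long as the number of early connections stays within the backlog,
the kernel holds them all until the late accept().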
|
And here we have a design issue. I already pointed out this issue in
this discussion:
http://www.mail-archive.com/gentoo-user@l.g.o/msg144144.html
though it was completely ignored by you. I understand: it is easier
to discuss design in terms of taste than on technical merits.

Systemd assumes that the time required to start a service is small (on
the order of microseconds). While this is true for widely used simple
setups, it is not true in the general case. A service may take seconds
or even minutes to start up (services depending on an enterprise SAN
or on large databases are good examples). And because systemd never
assumes that startup can take a long time, the following issues are
possible in the general case:

1. Client connections are lost to timeouts when the service takes a
long time to start. Systemd makes the service appear available while
it is not yet. Thus systemd is not an option for production-grade
servers.

2. Even if the connection timeout is not reached, requests may pile up
and be lost. When loss starts depends on the resources available, so
systemd is not an option for either embedded setups or production
server setups.
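
Point 1 is easy to reproduce in a sketch (the timings are made up; the
thread stands in for a slow-starting service behind a pre-opened
socket, not for systemd itself):

```python
import socket
import threading
import time

# The socket is open before the service is ready, as with socket
# activation, so clients connect "successfully" right away.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(8)
port = listener.getsockname()[1]

def slow_service():
    # Simulate a service that needs 2 s to initialise
    # (e.g. waiting on a SAN or a large database).
    time.sleep(2.0)
    conn, _addr = listener.accept()
    conn.sendall(b"ready")
    conn.close()

threading.Thread(target=slow_service, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))  # connects at once
client.settimeout(0.5)      # but only waits 0.5 s for a response
client.sendall(b"hello")
try:
    reply = client.recv(64)
    outcome = "reply: %r" % reply
except socket.timeout:
    outcome = "timed out"   # the service looked available, but was not
client.close()
print(outcome)
```

The connect succeeds instantly, so from the client's side the service
appears up; the request then dies on the response timeout exactly as
described in point 1.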

As one can see, while systemd's socket activation design will work in
many cases, it will fail in corner cases and by no means can be used
in production (where these corner cases have a high chance of
arising).
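
The queue-overflow case from point 2 can be sketched the same way
(Linux-specific behaviour, made-up numbers: once the kernel accept
queue fills, further connection attempts are silently dropped and the
client's connect() times out):

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)          # deliberately tiny kernel queue
port = listener.getsockname()[1]

queued, dropped = 0, 0
clients = []
for _ in range(8):          # far more clients than the queue can hold
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.5)       # give up instead of retransmitting SYNs
    try:
        c.connect(("127.0.0.1", port))
        queued += 1
        clients.append(c)   # keep it open so the queue stays full
    except OSError:         # timed out: the kernel dropped the attempt
        dropped += 1
        c.close()

print(queued, dropped)      # on Linux, roughly 2 queued, the rest dropped

for c in clients:
    c.close()
listener.close()
```

Exact counts vary by kernel and sysctl settings, but with the queue
full and the "service" never accepting, some clients are necessarily
turned away.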

Best regards,
Andrew Savchenko