On 03/27/2013 04:00 PM, Grant Edwards wrote:
> On 2013-03-27, Kevin Chadwick <ma1l1ists@××××××××.uk> wrote:
>
>> The real drive behind systemd is enterprise cloud type computing for
>> Red Hat. The rest is snake oil and much of the features already exist
>> without systemd. With more snake oil of promises of faster boot up on a
>> portion of the code which is already fast and gains you maybe two
>> seconds.
>
> I'm not trying to fan the flames: I'm genuinely confused...
>
> I just don't get the whole "parallel startup for faster boot thing".
> Most of my machines just don't boot up often enough for a few seconds
> or even tens of seconds to matter at all.
|
With cloud-based computing, you don't have a bunch of servers running,
waiting to receive requests.

Instead, what you have is a bunch of idle hardware, waiting to have
pre-built system images spun up on it on-demand.
|
The faster those pre-built images can spin up, the faster they can serve
requests. The faster they can serve requests, the fewer mostly-idle
images need to be already running for immediate needs. Traffic on a web
service usually ramps up gradually. In the middle of the night, it's
low, but it increases during certain hours and decreases during others.
(Even with things like social media, there's a gradual buildup of
resource demands, as it takes a while for URLs to catch fire and
spread.) Ultimately, if you can keep just enough images running to
manage immediate demand plus a small burst margin, you can save on
costs. If demand eats into your burst margin, you spin up more instances
until you're below the margin again. If demand falls, you kill off the
extra instances.
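
That scale-out/scale-in loop can be sketched roughly like this (a
minimal illustration only; the function names, the fixed per-instance
capacity, and the 20% default burst margin are all assumptions, not
anything from a real cloud provider's API):

```python
import math

# Hypothetical sketch of the burst-margin scaling decision described above.
def desired_instances(demand, capacity_per_instance, burst_margin):
    """Instances needed to cover current demand plus a burst margin (0.2 = 20%)."""
    needed = demand * (1 + burst_margin) / capacity_per_instance
    return max(1, math.ceil(needed))  # keep at least one instance warm

def rescale(running, demand, capacity_per_instance, burst_margin=0.2):
    """Positive result: spin up that many more instances; negative: kill extras."""
    return desired_instances(demand, capacity_per_instance, burst_margin) - running
```

With 5 instances running, each handling roughly 100 requests/s, and
demand at 900 requests/s, rescale() asks for 6 more; when demand falls
back, it goes negative and the extras get killed off. The faster each
image boots, the smaller the burst margin you can afford to carry.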
|
The quicker the spin-up process, the more efficient the on-demand system
becomes, and the better the resource utilization (and value to the
person paying for the cloud services).
|
(Though, really, I'd think that the best way to handle this kind of load
would be a hibernate system with a sparse image for RAM, and driver
tweaks to allow hardware to swap out from underneath in the event of
hardware changes while asleep. Or handle things like MAC address
rewriting in the VM hypervisor.)
|
>
> It seems to me that starting things in parallel would be inherently
> much more difficult, bug-prone, and hard to troubleshoot.
|
Indeed.
|
>
> Even on my laptop, which does get booted more than once every month or
> two, openrc is plenty fast enough.
|
The case for systemd is twofold:

1) Boot-to-desktop session management by one tool. (The same thing that
launches your cron daemon is what launches your favorite apps when you
log in.)
2) Reduced CPU and RAM consumption, which matters when you're booting
tens of thousands of instances simultaneously across your entire
infrastructure, or when your server instance might be spun up and down
six times over the course of a single day.
|
>
> Are there people who reboot their machines every few minutes and
> therefore need to shave a few seconds off their boot time?

In on-demand server contexts, yes.
|
>
> I can see how boot time matters for small embedded systems (routers,
> firewalls, etc.) that need to be up and running quickly after a power
> outage, but they're probably even less likely to be running systemd
> than desktops or servers.

Servers in cloud environments have one normal state: "Off". But when
they need to be "On", they need to get there hella quickly, or the
client is going to lose out on ad revenue when he starts getting a few
tens of thousands of visits per minute.