On Tue, Aug 23, 2011 at 4:53 PM, Canek Peláez Valdés <caneko@×××××.com> wrote:
> On Tue, Aug 23, 2011 at 4:18 PM, Michael Mol <mikemol@×××××.com> wrote:
> You are right, I stand corrected. And actually, D-Bus is very much
> capable of restarting without kicking out sessions (read Havoc's
> explanation in http://article.gmane.org/gmane.comp.freedesktop.dbus/7730).
> The problem is that many apps don't handle this correctly, so the
> D-Bus developers chose the "safe" option. If all the apps handled the
> restart gracefully, there would be no problems.
|
A very, very good writeup that explains the semantics of D-Bus
connections in depth. Very helpful; thanks for the link.
|
What it tells me is that a D-Bus restart needs to be possible without
dropping connections. I don't know the specifics of the binding
between libdbus and the daemon, but a first guess would be to spin up
the upgraded daemon, hand over all live connections, and shut down the
old daemon.
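
On Linux, the "hand over all live connections" step could in principle be
done by passing the open file descriptors to the new daemon over a Unix
socket with SCM_RIGHTS. A minimal sketch of the idea (this is NOT what
dbus-daemon actually does; the helper names are mine, and it assumes
Python 3.9+ for socket.send_fds/recv_fds):

```python
import socket

def hand_over_fds(channel, fds):
    """Old daemon side: ship live connection fds via SCM_RIGHTS."""
    socket.send_fds(channel, [b"handover"], fds)

def receive_fds(channel, maxfds):
    """New daemon side: adopt the connections without them ever closing."""
    msg, fds, _flags, _addr = socket.recv_fds(channel, 1024, maxfds)
    assert msg == b"handover"
    return fds

# Demonstration: a "client connection" survives the daemon swap.
old_daemon, new_daemon = socket.socketpair()   # handover channel
client, server_side = socket.socketpair()      # one live client connection

hand_over_fds(old_daemon, [server_side.fileno()])
server_side.close()                            # old daemon drops its copy
(adopted_fd,) = receive_fds(new_daemon, 1)
adopted = socket.socket(fileno=adopted_fd)     # new daemon's view of the client

client.sendall(b"ping")                        # client never noticed the swap
print(adopted.recv(4))                         # b'ping'
```

Of course, the sockets are the easy part; any per-connection state the
daemon holds (well-known bus names, match rules) would also have to be
serialized and migrated, which is presumably the genuinely hard bit.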
|
That obviously requires newer daemons to be able to talk with older
libdbus (at least until the app itself is restarted), and newer
libdbus to be able to talk with older daemons (in the event that an
app hits that vulnerable window where the new version has been
installed but the running processes haven't been restarted).
Obviously, this really depends on how much the D-Bus wire protocol
itself must be able to change going forward.
|
>
>> And, yes, upgrades on live data can be a royal PITA. Designing a
>> system which can handle it requires careful attention and testing. The
>> more anal you want to be about it, the more you start talking about
>> writing provable and verifiable code and other stringent debugging,
>> development and testing practices.
>
> It's a matter of cost-benefit. Given the open source community, I
> think the PITA is not worth it in several cases, D-Bus being one of
> them.
|
Personally, I suspect the balance will change as long-uptime systems
come to depend more and more heavily on D-Bus.
|
>
>> Yet it's done. Last Friday, I sat at a table with someone who
>> witnessed an IBM tech replace a CPU in a mainframe. Flip a switch,
>> pull out the old part, insert the new part, flip the switch back on,
>> everything's fine. Saturday, I listened to a guy present on how he
>> dynamically reroutes live calls through a VOIP network based on
>> hardware faults.
>
> I have seen those kinds of systems. And again, it's a matter of
> cost-benefit: see the difference in budgets between that kind of
> system and our open source software stack.
|
True, to an extent. I think it was four or five years ago, during the
SCO lawsuit, that someone estimated the value of the Linux kernel at
well over a billion dollars. Many for-profit companies contribute to
the kernel and its architecture. If D-Bus becomes necessary to the
operation of the majority of use cases, I suspect similar contribution
paths will emerge there.
|
>
>> D-Bus wants to be a core system service, and *that's* what should be
>> setting its design goals, not how difficult it would be for the system
>> to be worthy of the core.
>>
>> Again, I really don't believe D-Bus is badly written. I just think its
>> community isn't fully aware of the trends of its position in the
>> system, and what responsibilities it carries.
>
> I think we are fully aware. The thing is, given the community's
> resources, D-Bus is good enough, which, as everybody knows, is the
> enemy of the (impossible) perfect.
|
Very true.

--
:wq |