On Thu, Feb 20, 2014 at 3:16 AM, Yuri K. Shatroff <yks-uno@××××××.ru> wrote:
>
>
> On 20.02.2014 09:24, Canek Peláez Valdés wrote:
>>
>> [ snip ]
>>
>>> but I do not see the point, beyond as a nice gimmick.
>>
>>
>> Well, I *do* see a point. Many points, actually. You want the logs for
>> SSH, from February 12 to February 15? Done:
>>
>> journalctl --since=2014-02-12 --until=2014-02-15 -u sshd.service
>>
>> No grep. No cat. No hunting for logrotated logs (the journal rotates
>> its logs automatically, and searches across all available logs). You
>> can have second-precision intervals.
>
>>
>>
>> Also, the binary format that the journal uses is indexed (hence the
>> binary part); therefore, the search is O(log n), not O(n). With a log
>> with a million entries, that's about 20 steps.
>>
>> Perhaps it's just a gimmick to you. For me it is a really useful
>
>
> Clearly, it's reinventing a wheel.

Where I come from, doing something that takes O(n) in O(log n) is not
reinventing the wheel, but, OK, see it that way if you want to. Simply
don't use it.
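To put a number on that: an indexed lookup is essentially a binary
search, so the step count is ceil(log2(n)). A quick sanity check in
plain POSIX shell (nothing journal-specific here):

```shell
# Count binary-search probes for n entries: halve the range until one
# candidate is left. This is the ceil(log2(n)) behind "about 20 steps".
n=1000000
steps=0
while [ "$n" -gt 1 ]; do
    n=$(( (n + 1) / 2 ))
    steps=$(( steps + 1 ))
done
echo "$steps"    # prints 20 for a million entries
```

For a billion entries it is still only 30 steps; that is the whole
point of the index.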

> All that indexing stuff and O(log(n)) if
> really needed is easily achieved with databases.

The journal is a specialized database for logs.
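And that is not a figure of speech: each entry is stored as indexed
FIELD=value pairs, and journalctl queries them directly. A few examples
from journalctl(1) (they need a running systemd journal, so they are
shown as comments):

```shell
# Query indexed fields instead of grepping text:
#   journalctl _SYSTEMD_UNIT=sshd.service   # match on an indexed field
#   journalctl -p err --since=yesterday     # priority filter plus time window
#   journalctl -o json -n 1                 # dump the stored fields of one entry
```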

> Not using cat and grep is not something one'd boast; rather, again, a waste
> of resources to recreate already existing tools.

Are those *your* resources? If not, what's the problem?

> BTW, I wonder if anyone does really have logs with millions of lines in one
> single file, not split into files by date, service etc, so that the whole
> O(n) issue is moot.

Oh boy, you haven't worked much in enterprise, right?

Also, even if *one* machine doesn't have logs with a million lines
(which I've seen, in real life, in *production*, but whatever), the
journal can send (automatically, of course, if so configured) logs to
a central server. So you can coalesce the logs from *all* your network
in a single place, and with the journal you can merge them when doing
queries. Again, everything in O(log n).
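A sketch of how that central-server setup can look with the journal's
own tools (systemd-journal-upload and systemd-journal-remote ship with
newer systemd; the host and port below are made up for illustration):

```shell
# On each sender, point systemd-journal-upload at the collector
# (/etc/systemd/journal-upload.conf; host and port are illustrative):
#   [Upload]
#   URL=http://logs.example.com:19532
#
# On the collector, systemd-journal-remote receives the entries and writes
# normal journal files, so one indexed query spans every machine:
#   journalctl --merge --since=2014-02-12 -u sshd.service
```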

So right now I have a little server with logs of ~75,000 lines. If I
had 20 of them (nothing weird in enterprise; many would call that a
really small operation), that would be logs of 1,500,000 lines. With
the journal, you could check *all* your servers with a single command,
and all the queries could be done in O(log n).

So, yeah, moot.

> Well, maybe it'd be nice to have a collection of log management tools
> all-in-one but beyond that I don't see any advantages of systemd-journald.

Then, again, don't use it.

>> Its raison d'être is the new features it brings.
>
>
> I didn't notice any new features. It's not features that are new, but just a
> new implementation of old features in a more obtrusive way IMO.

Again, O(n) vs. O(log n). Coalescing logs from different machines. A
single powerful tool with well-defined semantics to query the logs.

So, yeah, no new features.

>>> Additionally, the use of "tail -f" and "grep" allows me to check the logs
>>> real-time for debugging purposes.
>>
>>
>> journalctl -f
>>
>> Checks the logs in real time. Again, [1].
>
>
> Again, a brand new Wheel(c)

I never said that was a new feature. Roeleveld said that he could use
"tail -f" and grep, as if that were not possible with the journal. I
was showing him it could be done with the journal.

>> systemctl status apache2.service
>>
>> (see [2]) will print the status of the Apache web server, and also the
>> last lines from the logs. You can control how many lines. You can
>> also check with the journal, as I showed above.
>
>
> I believe it would be a 5-minutes job to add the capability of printing last
> N log entries for a service to `rc-service status`. Using cat, grep and the
> like. Not reinventing wheels. Not spending super-talented super-highly paid
> developers' time on doing tasks one had done about 30 years ago. I believe,
> not having this option is due to its simple uselessness.

Others have chimed in on the infeasibility of this claim. However, if
you don't want to use the journal, and can emulate everything it does
in 5 minutes, then don't use the journal and write your little shell
scripts in 5 minutes.
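For comparison, here is roughly what the classic-tools version of "last
N entries for a service" looks like. The log path, PIDs, and messages
below are made up for illustration; the journal equivalent is one flag:

```shell
# Emulating "last 2 entries for a service" with grep + tail against a
# sample syslog-style file (contents are illustrative, not real logs).
log=/tmp/sample-syslog.$$
printf '%s\n' \
  'Feb 20 03:16:01 host sshd[101]: session opened' \
  'Feb 20 03:16:02 host cron[102]: job started' \
  'Feb 20 03:16:03 host sshd[103]: session closed' \
  'Feb 20 03:16:04 host sshd[104]: session opened' > "$log"
last2=$(grep 'sshd' "$log" | tail -n 2)   # scans the whole file: O(n)
printf '%s\n' "$last2"
rm -f "$log"
# The journal version is a single indexed query:
#   journalctl -u sshd.service -n 2
```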

I'd rather spend those 5 minutes watching cats with Wolverine claws on
YouTube, and let the journal do its thing. But that's me.

> This way I really wonder if at some point the super talented systemd
> programmers decide that all shell tools are obsolete and every program
> should know how to index or filter or tail its output in its own, though,
> open, binary format. I can't get rid of the idea that systemd uses the MS
> Windows approach whatever you say about its open source.

Again, the journal can export output (and really fast, since it has
everything indexed) that is 100% identical to the output of any other
logger. And you can use shell, grep, and sed on it to your heart's
desire.
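Concretely: because the default output is plain line-oriented text, the
usual pipelines work unchanged. One captured line stands in for the
journalctl output here, since this sketch doesn't assume a live journal
(the message and address are made up):

```shell
# The journal's default output format is plain line-oriented text, so
# grep and sed compose with it unchanged.
line='Feb 20 03:16:05 host sshd[104]: Failed password for root from 10.0.0.7'
ip=$(printf '%s\n' "$line" | sed 's/.*from //')
echo "$ip"    # prints 10.0.0.7
# With a live journal: journalctl -u sshd.service | grep 'Failed password'
```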

But if you don't want to, then don't use the journal. Nobody is
forcing it on you.

Regards.
--
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México