On Thu, February 20, 2014 16:16, Alan McKinnon wrote:

> On 20/02/2014 11:16, Yuri K. Shatroff wrote:
>>
>>
>> On 20.02.2014 09:24, Canek Peláez Valdés wrote:
>>> [ snip ]
>>>> but I do not see the point, beyond as a nice gimmick.
>>>
>>> Well, I *do* see a point. Many points, actually. You want the logs for
>>> SSH, from February 12 to February 15? Done:
>>>
>>> journalctl --since=2014-02-12 --until=2014-02-15 -u sshd.service
>>>
>>> No grep. No cat. No hunting logrotated logs (the journal rotates its
>>> logs automatically, and searches across all available logs). You
>>> can have second-precision intervals.
>>>
>>> Also, the binary format that the journal uses is indexed (hence the
>>> binary part); therefore, the search is O(log n), not O(n). With a log
>>> with a million entries, that's about 20 steps.
>>>
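The "about 20 steps" figure is just log2 of a million: a binary search over an index of n entries needs about log2(n) comparisons. A quick sanity check of the arithmetic (awk here is only a stand-in calculator, nothing journal-specific):

```shell
# Binary search over a million indexed entries takes about
# log2(1000000) comparison steps; awk does the arithmetic.
awk 'BEGIN { printf "%.1f\n", log(1000000) / log(2) }'
# prints 19.9
```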
>>> Perhaps it's just a gimmick to you. For me it's a really useful
>>
>> Clearly, it's reinventing the wheel. All that indexing stuff and
>> O(log n), if really needed, is easily achieved with databases.
>> Not using cat and grep is not something one would boast about; rather,
>> it's again a waste of resources, recreating already existing tools.
>> BTW, I wonder whether anyone really has logs with millions of lines in
>> one single file, not split into files by date, service, etc., in which
>> case the whole O(n) issue is moot.
>
> I have logs like that. It's not an uncommon scenario.

I've seen log directories containing a few hundred MB of logs on a test
environment with a single user doing just one thing.
Fortunately, there was a single file which indicated which of the 200+
files contained the actual error message I was looking for.

>> I believe it would be a five-minute job to add the capability of
>> printing the last N log entries for a service to `rc-service status`,
>> using cat, grep and the like.
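For what it's worth, the grep-and-tail approach Yuri describes could be sketched roughly like this. This is a hypothetical helper, not actual OpenRC code; it assumes the default Gentoo syslog layout where every line lands in /var/log/messages tagged "prog[pid]:", and the `LOGFILE` override exists only so the sketch can be exercised against a sample file:

```shell
# Hypothetical sketch of the proposed "last N log entries for a
# service" feature, built from plain grep and tail. Assumes
# syslog-style lines tagged "prog[pid]:" all landing in one file
# (/var/log/messages by default; override with LOGFILE).
service_log() {
    svc=$1
    n=${2:-10}
    logfile=${LOGFILE:-/var/log/messages}
    # Match the " prog[" tag and keep the last N matching lines.
    grep " ${svc}\[" "$logfile" | tail -n "$n"
}
```

Of course, this falls over as soon as the admin points syslog somewhere else, which is exactly the objection Alan raises.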
>
>
> No, that will not work easily for all definitions of easily.
>
> rc-something has zero control over where the logs go and no standard
> method to provide "hints" to the logger. Gentoo ships syslog* configs
> that basically stick everything in messages, where grepping them out is
> a PITA. I usually rewrite that config more to my taste and needs and
> rc-service cannot know what I did. So the idea fails at step 1 as the
> code does not know where the logs are.

Would journald?

>> Not reinventing wheels. Not spending super-talented,
>> super-highly-paid developers' time on redoing tasks that were already
>> done about 30 years ago. I believe not having this option is simply
>> due to its uselessness.
>
> 30 years ago we had isolated stand-alone machines with nothing like
> the logging needs we have today. Whilst I agree with you that systemd's
> logging tools may not be the solution, I can assure you (as someone who
> has to deal with this shit) that syslogging in the modern world is a mess.
>
> Try this: Decide you cannot afford Splunk, so do it yourself. Now get
> your Apache access logs into the same searchable database your other
> stuff is in, and do it in such a way that you can SELECT what you want
> out in obvious ways.
>
> Repeat for every other app you have that logs stuff. Remember to find
> the really important logs which are usually sitting in /opt/ and
> produced by Log4Perl or something equally abominable.

Replace "perl" with a different four-letter word denoting a language
commonly used for enterprise applications supported on multiple platforms
and you get what I have to deal with.

One of those has the more commonly needed logs in 4 or 5 locations. This
can easily end up being a lot more, depending on how it is being used. A
script to find all of those would need admin-level access to the
application itself to query the information needed to locate the logfiles.

Another application I worked with in the past had 20+ locations, a few of
which contained 100+ logfiles after a few days of use. At least 5 of those
didn't even have time-stamps.

For those, a clever utility would be useful, but if I could write that,
I'd use those AI routines to take over the world ;)

--
Joost