Tom Wijsman posted on Tue, 25 Jun 2013 01:18:07 +0200 as excerpted:

> On Mon, 24 Jun 2013 15:27:19 +0000 (UTC)
> Duncan <1i5t5.duncan@×××.net> wrote:
>
>> Throwing hardware at the problem is usable now.
>
> If you have the money; yes, that's an option.
>
> Though I think a lot of people see Linux as something you don't need to
> throw a lot of money at; it should run on low-end systems, and that's
> the kind of users we shouldn't just neglect going forward.
|
Well, let's be honest. Anyone building packages on Gentoo isn't likely
to be doing it on a truly low-end system. For general Linux use, yes,
agreed, but that's what Puppy Linux and the like are for. True, there
are the masochistic types who build natively on embedded systems or on
decade-plus-old (and mid-level or lower even then!) hardware, but most
folks with that sort of system either have a reasonable build server to
build it on, or use a pre-built binary distro. And the masochistic
types... well, if it takes an hour to get the prompt in an emerge --ask
and another day or two to actually complete, that's simply more
masochism for them to revel in. =:^P
|
Tho you /do/ have a point.
|
OTOH, some of us used to do MS or Apple or whatever and split our money
between hardware and software. Now we pay less for the software, but
that doesn't mean we /spend/ significantly less on the machines; now
it's mostly/all hardware.

I've often wondered why the hardware folks aren't all over Linux, given
the extra money it can free up for hardware, as it certainly /does/
here.
|
>> Truth is, I used to run a plain make -j (no number and no -l at all)
>> on my kernel builds, just to watch the system stress and then so
>> elegantly recover. It's an amazing thing to watch, this Linux kernel
>> thing and how it deals with CPU oversaturation. =:^)
>
> If you have the memory to pull it off, which involves money again.

What was interesting was doing it without the (real) memory -- letting
it go into swap and just queue up hundreds and hundreds of jobs as the
make continued to generate more and more of them, faster than they could
even fully initialize, particularly since they were being packed into
swap before they even had that chance.
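For readers who haven't played this particular game: a bare "make -j"
places no limit on concurrent jobs, which is exactly what piles hundreds
of compiles into swap, while "-j N" caps the job count and "-l N" stops
make spawning new jobs while the load average sits above N. A minimal
sketch of the bounded form (the throwaway Makefile here is illustrative,
not from the thread):

```shell
# A bare "make -j" spawns as many jobs as there are ready targets,
# which is what buried the box described above in swap.  The bounded
# form caps both the job count and the spawning load average:
jobs=$(nproc)                        # one job per CPU core

# Throwaway two-target Makefile just to demonstrate the invocation:
workdir=$(mktemp -d)
printf 'all: a b\na:\n\t@echo built a\nb:\n\t@echo built b\n' \
    > "$workdir/Makefile"
out=$(make -C "$workdir" -j"$jobs" -l"$jobs" all)
echo "$out"
rm -r "$workdir"
```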
|
And then, with 500-600 jobs or more (custom kernel build, not an
all-yes/all-mod config, or it'd likely have been 1200...) stacked up and
gigs into swap, watching the system finally start to slowly unwind the
tangle. Obviously the system wasn't usable for anything else during the
worst of it, but it still rather fascinates me that the kernel
scheduling and code quality in general is such that it can successfully
do that and unwind it all, without crashing or whatever. And the kernel
build is one of the few projects that's /that/ incredibly parallel,
without requiring /too/ much memory per individual job, to do it in the
first place.
|
Actually, that's probably the flip side of my getting more conservative.
The reason I /can/ get more conservative now is that I've enough cores
and memory that it's actually reasonably practical to do so. When you're
always dumping cache and/or swapping anyway, it's no big deal to do so a
bit more. When you have a system big enough to avoid that while still
getting reasonably large chunks of real work done, and you're no longer
used to the compromise of /having/ to dump cache, suddenly you're a lot
more sensitive to doing so at all!
|
>> Needlessly oversaturating the CPU (and RAM) only slows things down
>> and forces cache dump and swappage.
>
> The trick is to set it a bit below the point of oversaturation; low
> enough that most packages don't oversaturate. It could be tuned more
> precisely for every package, but that time is better spent elsewhere.

Indeed. =:^)
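On Gentoo, that "a bit below oversaturation" setting typically lives in
MAKEOPTS in /etc/portage/make.conf. A hedged sketch for picking a
starting value -- the 2 GiB-per-compile-job rule of thumb is my own
assumption here, not something either poster said:

```shell
# Sketch: take -j from whichever is smaller, the core count or what
# RAM allows at a rough (assumed) 2 GiB per compile job; setting -l to
# the same value keeps make from spawning new jobs when the box is
# already busy.
cores=$(nproc)
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_jobs=$(( mem_kib / (2 * 1024 * 1024) ))   # KiB -> 2 GiB units
jobs=$(( cores < mem_jobs ? cores : mem_jobs ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi

# The resulting line for /etc/portage/make.conf:
echo "MAKEOPTS=\"-j${jobs} -l${jobs}\""
```

Note that Portage passes MAKEOPTS to each package's make individually,
so the cap applies per build; running parallel emerge jobs on top of it
would multiply the total.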
|
> Not everyone is a sysadmin with a server; I'm just a student running a
> laptop bought some years ago, and I'm kind of the type that doesn't
> replace it while it still works fine otherwise. Maybe when I graduate...
|
Actually, I use "sysadmin" in the literal sense: the person taking
practical responsibility for deciding what goes on a system and
when/if/what to upgrade (or not), with particular emphasis on
RESPONSIBILITY, both for security and for keeping the system running --
and getting it back running again when it breaks. Nothing in that says
it has to be commercial, or part of some huge farm of systems. For me,
the person taking responsibility (or failing to take it) for updating
that third-generation hand-me-down castoff system is as much of a
sysadmin for that system as the guy/gal with 100 or 1000 systems (s)he's
responsible for.
|
My perspective has always been that if all those folks running
virus-infested junk out there actually took the sysadmin responsibility
for the systems they're running seriously, the virus/malware issue would
cease to be an issue at all.
|
Meanwhile, I'll admit my last system was rather better than average when
I first set it up (a dual-socket original three-digit Opteron -- that
whole spending-the-money-I-used-to-spend-on-software-on-hardware thing
-- my first 64-bit machine and my first and likely last real dual-CPU...
socket); in fact, compared to its peers of the time it may well be the
best system I'll ever own, but that thing lasted me 8+ years. My goal
was a decade, but I didn't make it, as the caps on the mobo were bulging
and finally popping by the time I got rid of it. (The last month or so I
ran it, last summer here in Phoenix, it'd run if I kept it cold enough,
basically 15C or lower, so I was dressing up in a winter jacket with
long underwear and a knit hat, with the AC running to keep it cold
enough to run the computer inside, while outside it was 40C+!)
|
But OTOH, that was originally a $400 mobo alone, for quite some time
worth probably 2-3 grand total as I kept upgrading bits and pieces of it
as I had the money. But FTR, I /am/ quite happy with the 6-core
Bulldozer-1 that replaced it, when I finally really had no other choice.
And the replacement was *MUCH* cheaper!
|
But anyway, yeah, I do know a bit about running old hardware myself, and
know how to make those dollars strreeettcchh. =:^)
|
-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman