Gentoo Archives: gentoo-amd64

From: Duncan <1i5t5.duncan@×××.net>
To: gentoo-amd64@l.g.o
Subject: [gentoo-amd64] Re: madwifi-ng not compile in amd64
Date: Thu, 31 Jan 2008 10:14:50
Message-Id: pan.2008.01.31.10.14.39@cox.net
In Reply to: Re: [gentoo-amd64] Re: madwifi-ng not compile in amd64 by Beso
Beso <givemesugarr@×××××.com> posted
d257c3560801301106o7d8c3b15mf6e0a4a075bcfbde@××××××××××.com, excerpted
below, on Wed, 30 Jan 2008 19:06:47 +0000:

> 2008/1/30, Duncan <1i5t5.duncan@×××.net>:
>>
>> Beso <givemesugarr@×××××.com> posted
>> d257c3560801300032m6f6c51adwc9f4bd75da066609@××××××××××.com, excerpted
>> below, on Wed, 30 Jan 2008 09:32:57 +0100:
>>
>> IMO --buildpkg (or better yet, FEATURES=buildpkg, if you've got 2-4
>> gigs of room for the packages) is one of the most under-advertised
>> power-user tools Gentoo has. The binaries are just so handy to have
>> around, giving one all the advantages of a binary distribution if you
>> need to remerge or need to fetch a "pristine" config file for some
>> reason, without losing all the benefits and extra control allowed by
>> from-source, since you compile everything initially anyway.
>
>
> i only use it on system packages like gcc that are mandatory for
> compiling something (just in case something breaks) and on some big
> packages that don't update frequently (kdebase, kdelibs). this requires
> less space than buildpkg for all world packages. of course having this
> option enabled needs frequent distcleans after updates, since there's
> the risk of having different versions of the same package there that
> aren't used.

Disk space is cheap! =8^) As I said, I do FEATURES=buildpkg so have
packages for the entire system. I keep them on a dedicated RAID-6 LVM
volume, along with a backup volume of the same size. Since my last disk
upgrade, when I increased the sizes of the volumes, I've not yet run out
of space, and I only eclean-pkg once in a while when I decide I've not
done it recently, so maybe twice a year, and it might be 9 months between
runs. My system runs ~1 gig for a full set of packages, but I'd say
closer to 2 gigs is the minimum one should consider viable for such an
endeavor, given that ideally you keep both the present version and the
last known working version, at least until you've verified that the new
version of a package works. IIRC, I ran 2 gig package partitions
before, but that didn't really leave me as much room as I'd have liked.
I'm running 4 gig volumes now, as I said, two of them, the working copy
and a backup I take every so often. Right now, df says I'm using 1.9
gigs out of 4 gigs on my working package volume, and that's with both KDE
3.5.x and 4.0.x-SVN merged (but no GNOME, tho I do have GTK, so figure
about the same if you just run KDE and GNOME normally, one copy of each).
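
(For anyone wanting to try it, the basic setup is only a couple of
lines. A minimal sketch; the PKGDIR path here is an assumption, the
default is /usr/portage/packages, and eclean-pkg comes from gentoolkit:)

  # /etc/make.conf
  FEATURES="buildpkg"          # save a binary package of everything merged
  PKGDIR="/mnt/pkg/packages"   # assumed path on the dedicated volume

  # snapshot an already-installed package without recompiling
  quickpkg kdelibs

  # trim obsolete binary packages once in a while
  eclean-pkg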

The only problem with that system is that for live builds, like my kde-
svn builds, the new copy replaces the old one if I don't specifically
save the old ones off, and if that new copy doesn't end up working, as is
quite possible with live-svn, I've lost the known-good binary to fall
back on...

I could script a solution that automatically saves off the old copy,
keeping several versions of it much like logrotate does, but I haven't,
as I really haven't had the time I've wanted to get kde-4.x customized to
my liking, so I've kept kde-3.5.x as my (very customized) main desktop
for the time being, and it's therefore no big deal if the kde4-svn live
ebuilds go tits-up on me for a while.
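
(Such a rotation script would only be a few lines. A sketch, with the
PKGDIR path, package name, and depth all assumptions for illustration:)

  #!/bin/bash
  # Rotate the kdelibs live binpkg, logrotate-style, before an update.
  PKG=/mnt/pkg/packages/kde-base/kdelibs-9999.4.tbz2
  KEEP=3
  for ((i=KEEP-1; i>=1; i--)); do
      # shift kdelibs-9999.4.tbz2.1 -> .2, and so on
      [ -f "$PKG.$i" ] && mv "$PKG.$i" "$PKG.$((i+1))"
  done
  # keep a copy of the current (presumably working) build as .1
  [ -f "$PKG" ] && cp "$PKG" "$PKG.1"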

In fact, I was saying how easy it is to keep kde-svn compiled now that
I've upgraded to dual dual-core Opterons... and it's true. Once I got
past the initial compile (that was back in November, so things were quite
rough still and it took a bit of effort to get the USE flags etc. right
so it would compile the first time), it has been much easier to keep up
with routinely updating the compiled builds than it has been to actually
run them and get things customized to my liking in the environment. I've
played around with it some, but I've actually been disappointed, as I've
simply not had the time I had expected to have to get it running to my
liking, so mostly it just sits, gets updated, and seldom gets run, as I
continue to run kde-3.5.x for my daily work. It has been quite
frustrating, actually, because I've /not/ had that time -- putting in 10
hours a week more at work really takes the time away from other things!
But things are getting rather back to normal now, so I'm hoping to get to
it in the next couple weeks...

> it sure helps when a rebuild is triggered after a lib soname update or
> relinking issues. in that case there's no need to recompile but only to
> relink and having binary builds helps a lot.

?? I can see how ccache would help, and maybe keeping the sources around
with the previous compile job in place so you can simply run make && make
install and it'll reuse existing work where it can, but the binary
packages are already dynamically stub-linked, so you'd still have to
rebuild them. I don't quite get what you are saying here, unless you're
talking about ccache, or are under the misimpression that the binary
packages aren't rebuilt when they need to relink against a new *.so --
they are, from all I know.
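
(For reference, enabling ccache under portage is just a couple of
make.conf lines; the cache size is an assumption, set it per available
disk:)

  # /etc/make.conf
  FEATURES="buildpkg ccache"   # cache compiler output across rebuilds
  CCACHE_SIZE="2G"             # assumed size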

> the problem is always there: major or blocking bugs getting into the
> unstable branch after the update of the package (has happened with
> dhcpcd for example) that makes gentoo devs mask the package the next day
> or so. so for people like me that update daily this could be troublesome
> if it happens on the system packages.

Well, that's why I was wishing there were a convenient way to run ~arch
but only emerge packages that had been ~arch keyworded for a week or so,
thus getting past the initial testing wave and any serious bugs that
might have prompted a new -rX release or a remasking.

However, there again, the binpkg repository is nice, as one can simply
roll back to whatever was previously installed, if necessary.
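
(Rolling back from the binpkg repository is a one-liner; the version
here is made up for illustration:)

  # reinstall a known-good binary package without recompiling
  emerge --usepkgonly --oneshot =kde-base/kdelibs-3.5.8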

>> The cure, of course, is a faster machine! =8^) Gentoo was reasonable
>> with my old dual Opteron 242, but merges for things such as gcc, glibc,
>> and DEFINITELY whole desktop updates like KDE, tended to be a chore.
>> The upgrade to dual dual-core Opteron 290s, combined with 8 gigs memory
>> (tho 4 gig would do) and putting PORTAGE_TMPDIR on tmpfs, made a HUGE
>> difference.

> this is a high end machine. opteron is better than a normal athlon x2
> in terms of compilation, and having 8gb on a server mobo, which differs
> from normal mobos, is not affordable for every user. my system is about
> average (2ghz processor, 1gb ram, 80gb hdd) for notebooks and quite
> average for desktops. remember that nouveau devs still work on athlons
> or pentium 3 hw, and that hw is not dead yet. your system nowadays is a
> middle-high end one.

Well, dual-cores are standard now for new gear anyway, and 1-2 gig
memory, but I get your point. My machine is certainly a low to middling
high-end machine today, tho really only for the memory and the dual dual-
cores. However, it's not /that/ great (except the 8 gig memory), as it's
the older socket-940 Opterons, none of the new virtualization stuff, etc,
and I have the top end of the line at 2.8 GHz, while as I said, most
desktops come with dual-core now, and often run 3.2 GHz or higher.

Still, the next paragraph (which I'm snipping) /did/ talk about it being
common "a couple years from now", and I think that's a reasonable
prediction, except perhaps with 4 gig memory instead of the 8 gig I have.
But 4 gig is still within the comfort range for what I'm talking about --
the 8 gig is really a bit over the top currently, and it CERTAINLY was
before I upgraded to the dual-cores, when I was running only dual CPUs,
roughly the equivalent of a single dual-core today.

As I said, for those who've not yet had a chance to try quad cores
(whether single-core quad-socket, dual-core dual-socket as I have, or the
new quad-cores)... it makes a MUCH bigger difference than I had expected
it to. Once they and 4 gigs memory become common, I really DO think from-
source could take off, because it really /does/ become trivial to do the
compiling -- as I found with kde-4 as discussed above, the actual
compiling starts to become much less of an issue than the standard
sysadmin overhead, and people have to deal with that regardless of what
distribution they choose, binary or from-source.
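
(Tangentially, for anyone curious about the PORTAGE_TMPDIR-on-tmpfs
trick quoted above, it's a single fstab line; the size here is an
assumption, scale it to your RAM, and note PORTAGE_TMPDIR defaults to
/var/tmp:)

  # /etc/fstab -- build in RAM instead of on disk
  tmpfs  /var/tmp/portage  tmpfs  size=3G,uid=portage,gid=portage,mode=775  0 0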

> well, this is true for minor packages, but kdelibs or kdebase for
> example are still quite big, and if you're not a developer you won't go
> around compiling them every day or so.

See, that was exactly my reaction back with the dual single-cores (so
equivalent to today's single dual-cores). Minor packages were trivial,
but the big stuff, gcc, glibc, kdelibs, kdebase, etc, was "practically
manageable", but still decidedly NOT trivial.

I feel like I'm repeating myself (and I am =8^), but I really /was/
shocked at how much of a difference the 4 cores made (and a decent amount
of memory, but I had that first, so the bottleneck here was eliminated by
the 2-to-4-core upgrade -- tho I did increase CPU speed substantially as
well, 1.6 to 2.8 GHz)! It's really almost embarrassing how trivially
easy and fast it is to compile even the "big" packages. When all of KDE
can be upgraded in 4 hours from scratch, or in an hour if upgraded daily
from svn with ccache active, even the formerly big individual packages...
just aren't much of a sweat any more.

It's kinda like how compiling the kernel used to be a big thing on 386s
and even Pentiums. On even half-decent modern hardware -- the lowest-end
x86_64 around (well, with perhaps the exception of Via's Isaiah or
whatever it is; I honestly don't know how it does), or even a couple-
generations-old Athlon 1.X GHz (pre-Athlon XP) -- it might be five
minutes or so, or even 10, but it's little more than a coffee or two at
most, certainly not the half-day affair it used to be! Well, with quad
cores (again, however you get them, so dual dual-cores or quad single-
cores included) and say 4 gig of memory to work with, pretty much any
single package, even the ones like gcc/glibc/kdelibs that used to be a
big deal, is that trivial. (I just checked. Merging kdelibs-9999.4, the
kde4 svn version, took all of 5 minutes, 37 seconds, based on the random
entry in emerge.log I checked. Of course, that's with ccache enabled and
only a bit of update since the previous day, but it still has a bit of
compiling and all that linking to do. That's about right, since as I
said, all of KDE often takes only an hour, if done daily.)
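
(If you want to check your own numbers, you don't have to read
emerge.log by hand; assuming app-portage/genlop is installed:)

  # report past merge times for a package, parsed from /var/log/emerge.log
  genlop -t kdelibs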

As for the kernel, it has really ceased even to be a decent measure of
performance, it's done so fast. Maybe a minute. I've honestly not timed
it since I got the new CPUs in. I can just see a future boot solution
containing an initial boot kernel in BIOS, using it to mount a /boot
loaded with the kernel sources, compiling them dynamically based on
detected hardware at every boot, then using kexec to switch to the new,
freshly compiled kernel! =8^) If the previous work were kept, so only
newly detected hardware and any admin config changes had to be built,
it'd literally only add a few seconds to the boot process, maybe 10
seconds if no config changes, 30 seconds to a minute if you move the disk
to a new machine and it has to rebuild almost the entire kernel.

-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman

-- 
gentoo-amd64@l.g.o mailing list