Beso <givemesugarr@...> posted
below, on Wed, 30 Jan 2008 19:06:47 +0000:
> 2008/1/30, Duncan <1i5t5.duncan@...>:
>> Beso <givemesugarr@...> posted
>> d257c3560801300032m6f6c51adwc9f4bd75da066609@..., excerpted
>> below, on Wed, 30 Jan 2008 09:32:57 +0100:
>> IMO --buildpkg (or better yet, FEATURES=buildpkg, if you've got 2-4
>> gigs of room for the packages) is one of the most under-advertised
>> power-user tools Gentoo has. The binaries are just so handy to have
>> around, giving one all the advantages of a binary distribution if you
>> need to remerge or need to fetch a "pristine" config file for some
>> reason, without losing all the benefits and extra control allowed by
>> from-source, since you compile everything initially anyway.
> i only use it on system packages like gcc that are mandatory for
> compiling something (just in case something breaks) and on some big
> packages that don't update frequently (kdebase, kdelibs). this requires
> less space than buildpkg for all world packages. of course having this
> option enabled needs frequent distcleans after updates since there's the
> risk of having there different versions of the same package that aren't
Disk space is cheap! =8^) As I said, I do FEATURES=buildpkg so have
packages for the entire system. I keep them in a dedicated RAID-6 LVM
volume, along with a backup volume of the same size. Since my last disk
upgrade when I increased the sizes of the volumes, I've not yet run out
of space, and only run eclean-pkg once in a while, when I decide I've not
done it recently -- maybe twice a year, sometimes with 9 months between
runs.  My system runs ~1 gig for a full set of packages, but I'd say
closer to 2 gigs is the minimum one should consider viable for such an
endeavor, given that ideally you keep both the present version and the
last known working version at least until you've verified that a new
version of a package works.  I ran 2 gig package partitions before, but
they didn't really leave me as much room as I'd have liked.
I'm running 4 gig volumes now, as I said, two of them, the working copy
and a backup I take every so often. Right now, df says I'm using 1.9
gigs out of 4 gigs on my working package volume, and that's with both KDE
3.5.x and 4.0.x-SVNs merged (but no GNOME tho I do have GTK, so figure
about the same if you just run KDE and GNOME normally, one copy of each).
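For anyone wanting to try it, the make.conf side of this is just a FEATURES
entry and, optionally, pointing PKGDIR at a dedicated volume.  The path
below is an example, not my actual layout:

```shell
# /etc/make.conf (example fragment)
# Build a binary package (.tbz2) as a side effect of every emerge.
FEATURES="buildpkg"
# Optional: keep the packages on a dedicated volume; path is an example.
PKGDIR="/mnt/pkg/packages"
```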
The only problem with that system is that for live builds, like my kde-
svn builds, the new copy replaces the old one if I don't specifically
save the old ones off, and if that new copy doesn't end up working, as is
quite possible with live-svn...
I could script a solution that automatically saves off the old copy,
keeping several versions of it much like logrotate does, but I haven't as
I really haven't had the time I've wanted to get kde-4.x customized to my
liking, so I've kept kde-3.5.x as my (very customized) main desktop for
the time being, and it's therefore no big deal if the kde4-svn live
ebuilds go tits-up on me for awhile.
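If I did script it, the idea would be something like the following
logrotate-style sketch -- entirely hypothetical, nothing like this ships
with Portage: before a live rebuild clobbers the old binpkg, shift any
saved copies up a number and stash the current one as .1.

```shell
#!/bin/sh
# Hypothetical rotation helper, logrotate-style, for live-ebuild binpkgs.
# Call it on the .tbz2 just before (re)merging the live ebuild.
rotate_binpkg() {
    pkg=$1        # path to the binpkg about to be replaced
    keep=${2:-3}  # how many old copies to keep around
    [ -f "$pkg" ] || return 0
    # Shift existing saved copies up: .2 -> .3, .1 -> .2, ...
    i=$keep
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        [ -f "$pkg.$prev" ] && mv "$pkg.$prev" "$pkg.$i"
        i=$prev
    done
    # Save the current copy off as .1 (Portage will overwrite $pkg itself).
    cp "$pkg" "$pkg.1"
}
```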
In fact, I was saying how easy it is to keep kde-svn compiled now that I
upgraded to dual dual-core Opterons... and it's true. Once I got past
the initial compile (that was back in November, so things were quite
rough still and it took a bit of effort to get the USE flags and etc
right so it would compile the first time), it has been much easier to
keep up with routinely updating the compiled builds, than it has to
actually run them, and go in and get things customized to my liking in
the environment. I've played around with it some, but I've actually been
disappointed as I've simply not had the time I had expected to have to
get it running to my liking, so mostly, it just sits, and gets updated,
and seldom gets run as I continue to run kde-3.5.x for my daily work. It
has been quite frustrating, actually, because I've /not/ had that time --
putting in 10 hours a week more at work really takes the time away from
other things! But things are getting rather back to normal, now, so I'm
hoping I get to it in the next couple weeks...
> it sure helps when a rebuild is triggered after a lib soname update or
> relinking issues. in that case there's no need to recompile but only to
> relink and having binary builds helps a lot.
?? I can see how ccache would help, and maybe keeping the sources around
with the previous compile job in place so you can simply run make && make
install and it'll use existing work where it can, but the binary packages
are already dynamic stub-linked, so you'd still have to rebuild them. I
don't quite get what you are saying here, unless you're talking about
ccache, or under the misimpression that the binary packages aren't
rebuilt when they need to relink against a new *.so -- they are, from all
I can tell.
> the problem is always there: major or blocking bugs getting into the
> unstable branch after the update of the package (has happened with
> dhcpcd for example) that makes gentoo devs mask the package the next day
> or so. so for people like me that update daily this could be troublesome
> if it happens on the system packages.
Well, that's why I was wishing there was a convenient way to run ~arch,
but only emerge packages that had been ~arch keyworded for a week or so,
thus getting past the initial testing wave and any serious bugs that
might have prompted a new -rX release or a remasking.
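There's nothing in Portage that does this, but a rough sketch of the idea,
using the ebuild file's mtime as a crude stand-in for "time since
keyworded" (rsync preserves mtimes, so this really measures time since the
last edit, not since the ~arch keywording -- name and threshold are my
own, purely illustrative):

```python
import os
import time

def settled(ebuild_path, min_age_days=7):
    """Crude proxy: treat an ebuild as past the initial ~arch testing
    wave if the file itself is at least min_age_days old.  A new -rX
    bump resets the mtime, which is roughly what we want anyway."""
    age_secs = time.time() - os.path.getmtime(ebuild_path)
    return age_secs >= min_age_days * 86400
```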
However, there again, the binpkg repository is nice, as one can simply
roll back to whatever they had previously installed, if necessary.
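Rolling back is then just an emerge straight from PKGDIR; the package atom
below is an example:

```shell
# Reinstall from the saved binary package, no recompile.
# --usepkgonly (-K) refuses to build from source, and the =version
# atom pins the exact older version you want back.
emerge --usepkgonly --oneshot =kde-base/kdelibs-3.5.8
```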
>> The cure, of course, is a faster machine! =8^)  Gentoo was reasonable
>> with my old dual Opteron 242, but merges for things such as gcc, glibc,
>> and DEFINITELY whole desktop updates like KDE, tended to be a chore.
>> The upgrade to dual dual-core Opteron 290s, combined with 8 gigs memory
>> (tho 4 gig would do) and putting PORTAGE_TMPDIR on tmpfs, made a HUGE
> this is a high end machine. opteron is better than a normal athlon x2 in
> terms of compilation and having 8gb on a server mobo, that differs from
> normal mobos, is not affordable for every user. my system is about the
> average one (2ghz processor, 1gb ram, 80gb hdd) for notebooks and quite
> average one for desktops. remember that nuveau devs still work on
> athlons or pentium 3 hw, and that hw is not dead yet. your system
> nowadays is a middle-high end one.
Well, dual-cores are standard now for new gear anyway, and 1-2 gig
memory, but I get your point.  My machine is certainly a low-to-middling
high-end machine today, tho really only for the memory and the dual dual-
cores.  However, it's not /that/ great (except the 8 gig memory), as it's
the older socket-940 Opterons, with none of the new virtualization stuff,
etc, and mine top out at the end-of-the-line 2.8 GHz, while as I said,
most desktops come with dual-core now and often run 3.2 GHz or higher.
Still, the next paragraph (which I'm snipping) /did/ talk about it being
common "a couple years from now", and I think that's a reasonable
prediction, except perhaps 4 gig memory instead of the 8 gig I have, but
4 gig is still within the comfort range for what I'm talking about -- the 8 gig
is really a bit over the top currently, and it CERTAINLY was before I
upgraded to the dual-cores so was running only dual CPUs, roughly the
equivalent of a single dual-core today.
As I said, those who've not yet had a chance to try quad cores (whether
single-core quad socket, dual-core dual socket as I have, or the new quad-
core)... it makes a MUCH bigger difference than I had expected it to.
Once they and 4 gigs of memory become common, I really DO think from-source
could take off, because it really /does/ become trivial to do the
compiling -- as I found with kde-4 as I discussed above, the actual
compiling starts to become much less of an issue than the standard
sysadmin overhead, and people have to deal with that regardless of what
distribution they choose, binary or from-source.
> well, this is true for minor packages, but kdelibs or kdebase for
> example are still quite big and if you're not a developer you won't go
> around compiling it everyday or so.
See, that was exactly my reaction back with the dual single-cores (so
equivalent to today's single dual-cores). Minor packages were trivial,
but the big stuff, gcc, glibc, kdelibs, kdebase, etc, were "practically
manageable", but still decidedly NOT trivial.
I feel like I'm repeating myself (and I am =8^), but I really /was/
shocked at how much of a difference the 4 cores (and a decent amount of
memory, but I had that first, so the bottleneck here was eliminated with
the 2 to 4 cores upgrade -- tho I did increase CPU speed substantially as
well, 1.6 to 2.8 GHz) made! It's really almost embarrassing how
trivially easy and fast it is to compile even the "big" packages. When
all of KDE can be upgraded in 4 hours from scratch, an hour if upgraded
daily from svn with ccache active, even formerly big individual
packages... just aren't much of a sweat any more.
It's kinda like how compiling the kernel used to be a big thing on 386s
and even Pentiums.  On even half-decent modern hardware, the lowest-end
x86_64 around (well, with perhaps the exception of Via's Isaiah or
whatever it is; I honestly don't know how it does), or even a
couple-generations-old 1.X GHz Athlon (pre-Athlon XP), it might take five
minutes or so, or even 10, but that's little more than a coffee or two at
most, certainly not the half-day affair it used to be! Well, with quad-
cores (again, however you get them, so dual-dual-cores or quad-single-
cores included) and say 4 gig of memory to work with, pretty much any
single package, even the ones like gcc/glibc/kdelibs that used to be a
big deal, is that trivial. (I just checked. Merging kdelibs-9999.4, the
kde4 svn version, took all of 5 minutes, 37 seconds, based on the random
entry in emerge.log I checked. Of course, that's with ccache enabled and
only a bit of update since the previous day, but it still has a bit of
compiling and all that linking to do. That's about right since as I
said, all of KDE often takes only an hour, if done daily.)
As for the kernel, it has really ceased even to be a decent measure of
performance, it's done so fast. Maybe a minute. I've honestly not timed
it since I got the new CPUs in. I can just see a future boot solution
containing an initial boot kernel in the BIOS, using it to load /boot with
the kernel sources, and compiling them dynamically from source based on
detected hardware at every boot, then using kexec to switch to the new,
freshly compiled kernel! =8^) If the previous work were kept, so only
newly detected hardware and any admin's config changes had to be built,
it'd literally only add a few seconds to the boot process, maybe 10
seconds if no config changes, 30 seconds to a minute if you move the disk
to a new machine and it has to rebuild almost the entire kernel.
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
firstname.lastname@example.org mailing list