On Saturday 04 February 2006 08:09, you wrote:

> if you have 10 web servers, then they all use the same 'build profile' or
> whatever you called them, that's fantastic. However... what if on 2 of
> them, we want apache 'threads' enabled, and on 4 of them we want hardened
> PHP patches, but we don't want it on all of them due to some issue with
> the patch (this is all hypothetical). Let's also pretend that these 2
> exceptions overlap: 1 threaded server, 1 threaded with hardened php, 3
> hardened php, and 5 'standard' web servers.
>
> This means there's now 4 profiles, and 4 lots of build profiles to
> maintain. 99% of the packages in these build profiles will use identical
> USE flags; only apache and php will be different - your system doesn't
> allow for these exceptions very neatly, which is my biggest concern.

I do have a few of these situations; I just create a different build host
for them. I developed this environment at home, where I have one really
nice P4 3GHz with lots of memory (my myth box). I got sick of rebuilding
my laptop and the wife's workstation over and over every time a toolchain
or kde update happened. So now I do all of my builds on the fast machine.
Each one has its own chroot and originally started at stage1. I have my
laptop snapshot set to portage-latest, so I can update it daily if I
desire.

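The chroot-per-machine setup above can be sketched roughly as follows. This is a minimal illustration assuming a stage1 tarball and a portage snapshot are already on hand; the paths, tarball names, and layout are my assumptions, not necessarily how the actual scripts do it:

```shell
# Illustrative only: one build chroot per target machine on the fast box.
# Paths and tarball names are hypothetical; run as root on the build host.
mkdir -p /buildroots/laptop
tar xjpf /snapshots/stage1-x86-2006.0.tar.bz2 -C /buildroots/laptop

# Unpack a known portage snapshot so the build is reproducible by date.
tar xjf /snapshots/portage-latest.tar.bz2 -C /buildroots/laptop/usr

# Enter the chroot and build as usual.
mount -t proc none /buildroots/laptop/proc
cp /etc/resolv.conf /buildroots/laptop/etc/
chroot /buildroots/laptop /bin/bash
```

From inside the chroot, the bootstrap and world build proceed exactly as on a real machine, but the target hardware never spends a cycle compiling.
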
With servers, the size of the build does not turn out to be too bad, since
all of the crap that is not needed for servers is not built in. I have a
couple of situations like you describe: for example, one set of web
servers needs pam and the other does not, and one set needs ssl while the
other does not (it sits behind a load balancer that offloads ssl).

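Differences like that don't need whole separate profiles; inside each build chroot they reduce to a few lines of /etc/portage/package.use. A hypothetical sketch (the package atoms and USE flags are real Gentoo ones, but the chroot names and the /tmp staging paths are illustrative):

```shell
# Illustrative per-chroot USE overrides, staged under /tmp for the example;
# in practice these files live at <chroot>/etc/portage/package.use.
mkdir -p /tmp/buildroots/web-plain/etc/portage
mkdir -p /tmp/buildroots/web-ssl/etc/portage

# Set sitting behind the ssl-offloading load balancer: no ssl, no pam.
cat > /tmp/buildroots/web-plain/etc/portage/package.use <<'EOF'
www-servers/apache -ssl
dev-lang/php -ssl -pam
EOF

# Directly exposed set: ssl and pam on.
cat > /tmp/buildroots/web-ssl/etc/portage/package.use <<'EOF'
www-servers/apache ssl
dev-lang/php ssl pam
EOF
```

Everything else in the two chroots stays identical, so the 99%-shared configuration only has to be maintained once per base image.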
> I do entirely agree that standardisation is the way to go, but I want to
> be able to neatly handle the exceptions - because unfortunately, they
> will happen.

Actually, I went this way because it is more often than not quicker to
completely rebuild from a stage1 and install from the resulting binary
packages than to risk going through updates, particularly if you don't
update more often than once a week or so. By limiting what goes into the
server builds (getting rid of the extraneous crap), the servers rarely
need upgrading. Either way, I have to know that a complete build (a
particular snapshot date) will work flawlessly _before_ I put it out on
production servers.

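That build-once, deploy-binaries flow can be sketched like this. The emerge flags and FEATURES setting are standard portage options, but the world-rebuild approach and the shared PKGDIR are my assumptions about the workflow, not a description of the actual scripts:

```shell
# In the build chroot: rebuild everything from scratch and keep the
# resulting binary packages (FEATURES="buildpkg" writes them to PKGDIR).
FEATURES="buildpkg" emerge --emptytree world

# On a production server, with the build host's PKGDIR shared (e.g. over
# NFS): install strictly from those tested binaries, never compiling
# locally.
emerge --usepkgonly --emptytree world
```

Because the binaries were produced from one known portage snapshot and test-installed first, every production server ends up bit-identical to the build that was verified.
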
If you are interested, I will post my build scripts. They were not created
for public consumption, so they are not perfect, but they do work... Any
feedback/improvements would certainly be appreciated :)