"Jamie" <gentoo@×××××××.nz> posted
4049.210.55.22.193.1161647305.squirrel@××××××××××××××××××.com, excerpted
below, on Tue, 24 Oct 2006 12:48:25 +1300:

> A good piece of advice and one that I really should follow. What is the
> best way to take an image of the Gentoo install? In my case my Gentoo
> install resides on /dev/hda2 (boot) ; /dev/hda3 (swap) and /dev/hda5
> (root) - is it possible to use something like dd to image these partitions
> to a disk somewhere else on my network? Is this a good/bad/non-workable
> idea?
>
> I would welcome any constructive advise on this as the many hours of
> building a Gentoo box is not something I want to do too often so being
> able to image prior to any major upgrades would be a great safety net...

There are of course several different alternatives. I've found what works
for me, and it has been decently tested in recovery situations.

#1 rule for me was that it needed to be relatively convenient, or I'd not
use it regularly.

#2, as you suggest, hard disks seem to be the most cost-effective choice
here for nearly-whole-system backup, and are reasonably reliable if you
treat them right. I do back up choice bits multiple times, often multiple
on-disk backup copies, but also burned to CD/DVD, particularly where the
data isn't likely to change over long periods.

#3 works as a result of 1 and 2: I'm able to boot directly into my backup
system and resume from there using it as my main system. Something like
tape backup simply doesn't work that way. Neither do emergency boot
LiveCDs and the like, tho they have their place as a "worst case
situation" solution, if my main backups fail.

#4, my system scales. I used it as multiple partitions on a single hard
drive and/or a second hard drive for years, including recovering from two
partial hard drive failures. The last one was caused by overheating after
my A/C went out. (Phoenix temps can reach 50C/122F outdoors, so after the
A/C went out I figure inside temps likely reached 60C/140F, possibly
higher. The CPUs' thermal protection kicked in, but the hard drive
understandably just wasn't the same after that.) I decided to switch to a
RAID system for additional reliability, and my backup system scaled right
along with it. I'm now running 4 x 300 gig Seagate SATA drives, slightly
slower than some but with a full five-year warranty, using kernel RAID;
additional details below.

Here's how it works. I have my system divided up into several protection
levels, depending on the content.

0: Temporary or easily redownloadable content need not be backed up at all.

This includes /tmp, /var/tmp, the portage tree, /usr/src, and my ccache.
All these can be kept in separate directories on the same "partition",
using either config settings (for ccache and the portage tree) or symlinks
(/tmp, /var/tmp).

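In Gentoo terms that's just a couple of make.conf settings plus a pair of
symlinks, roughly like this (/mnt/str is the scratch filesystem mount
point from my layout below; adjust to taste):

# /etc/make.conf -- relocate the portage tree and ccache:
PORTDIR="/mnt/str/portage"
DISTDIR="${PORTDIR}/distfiles"
CCACHE_DIR="/mnt/str/ccache"

# and from a root shell, the tmpdir symlinks:
install -d -m 1777 /mnt/str/tmp
mv /tmp /tmp.old && ln -s /mnt/str/tmp /tmp
mv /var/tmp /var/tmp.old && ln -s /tmp /var/tmp
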
On a single drive, this will be a single partition, not duplicated. On a
RAID system, this works very well as RAID-0/striped. Striped RAID is very
fast, which happens to fit the use of much of this data quite well, but if
one disk goes down, it takes the entire RAID-0 with it. That's just fine,
as this is temporary data anyway, either throw-away or easily
redownloadable/recoverable. IOW, this is the /perfect/ case for
RAID-0/striped.

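Creating such an array with mdadm is nearly a one-liner, roughly like so
(partition names match the example layout at the end of this post; I
actually made mine a partitioned array, /dev/md_d2, tho as I note below
the partitioning turned out unnecessary, so plain /dev/md2 does fine):

mdadm --create /dev/md2 --level=0 --raid-devices=4 \
      /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
mkfs.ext3 /dev/md2    # or whatever filesystem you prefer
mount /dev/md2 /mnt/str
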
1: boot area (MBR and /boot, with GRUB and kernels).

On a single drive, you should keep a removable (ISO, thumb drive, zip
drive, whatever) around with at least GRUB/LILO and a kernel or two, in
case the hard drive's boot area and/or /boot fail. If you have two
drives, keep a tested/working GRUB and /boot with fairly current kernels
on both drives. In a RAID installation, use RAID-1/mirrored for /boot,
and install and test-boot GRUB on all drives in the mirror set. Note
that RAID-1/mirroring will provide reasonable protection against physical
drive failure, but it will **NOT** protect against "sysadmin fat-finger
syndrome", since it's still only a single copy of /boot. Thus, you'll
still want to keep a removable-media backup boot solution available. For
ease of update, a USB thumb drive is probably the most convenient (128MB
or even 64MB should work), altho a CDRW/DVDRW would work, as would a
bootable zip drive and disk, tho those are somewhat passe by now.

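Installing GRUB to every drive in the mirror set is done from the grub
shell, remapping each drive to (hd0) in turn so each one can boot on its
own if the drives ahead of it disappear -- from memory, roughly:

grub> device (hd0) /dev/sda
grub> root (hd0,0)          # the partition holding /boot
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
(...and likewise for the remaining drives)
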
Again, this will likely be a single "partition" or unpartitioned RAID-1
device.

Everything else: data that likely needs some level of redundancy
protection.

On a RAID system, it will be RAID protected, but likely at RAID-5, -6 or
-10. (Here I use RAID-6. Again, more detail below.) For higher value
data, however, just the redundancy of the RAID itself isn't enough, due
to the previously mentioned "sysadmin fat-finger syndrome", or, tho we
don't have to worry about it so much on *ix, malware and the like.
Single and double disk systems will want to back up some but not all of
this data.

This "everything else" can be further divided into three levels: system,
high-value data, and mid-value data, with multiple partitions accordingly.

Mid-value data: This consists of stuff like /var/log. It may also
include content known to be retrievable from the net, but at some cost in
time and trouble, and other system/purpose specific data that's not
extremely difficult to replace, but would require enough time or expense
to do so that you can't rightly call it "temporary".

On a single or double disk system, mid-value data will likely be treated
for backup purposes much like temporary data -- ignore it. The hassle of
backing it up is likely to be more work than the value of the recovered
data. However, on RAID systems, there's a distinction between the two,
in that mid-value data is considered valuable enough to protect with the
redundancy of RAID, while temporary data is throw-away even at that
level. Even so, the redundancy of the RAID will normally be considered
safety enough -- no additional backup effort is worth it. That remains
true despite the risk of "sysadmin fat-finger syndrome", since the risk
is relatively low compared to the value of the data and the hassle of
additional backup arrangements.

A single operational copy of mid-value data will be all that's tracked,
but as the amount and filesystem layout of this data will vary greatly
across individual purposes and installations, it may be one partition,
many, or one or more logical volumes managed with LVM2 or similar. (More
on that below.)

System data: This is the critical operational system.

For least trouble managing it, the goal in Gentoo terms should be to
include everything installed by portage, together with the portage
database (in /var/db) and all system-level configuration, on the same
(single) partition, BUT CRITICALLY, TO MAKE THREE IMAGES: three separate
partitions holding essentially the same data.

From experience, I know that keeping everything installed by portage,
together with the portage database and all system-level config data, is
the simplest arrangement to manage. While it's /possible/ to split off,
for example, /var and /usr, that's needless additional complexity. The
idea here is that everything should stay in sync. If you lose /usr, the
portage database of what's installed, located in /var/db, won't be very
useful. Similarly if / goes bye-bye and you boot from "rootbak1", which
has older binaries in /bin and an older /etc configuration, now running
against the newer /usr. That's no good and only causes headaches -- I've
had it happen! Thus, keep the portage installation database in /var/db
in sync with /usr, /, and /etc, by keeping everything on the same
"partition", and you save yourself VAST amounts of trouble! << That last
sentence is worth rereading. I KNOW from experience! FWIW, I'm running
10 gig "system image" partitions, giving me plenty of room as I'm only
using 1.5 gig according to df, so 5 gig system partitions should be fine
on most systems, I'd think.

As I mentioned above, ideally you have a minimum of THREE system images,
identically sized. One will be your "working" system. The other two
(and additional ones, if desired) are alternating backup snapshots.
Here, I have no set schedule for system backups, though every three-ish
months might be a reasonable goal. When I think everything's working
particularly well, I'll just erase the oldest backup and copy the
working system image over, creating a new snapshot. Two backups minimum
is safest, as that means even if disaster strikes when I'm in the middle
of one, and I lose the main working image before the backup is complete,
I still have a known-working second backup I can boot to. No sweat! Of
course more system images let you keep more backups. There's a
practical limit to how old you want to go -- anything over, say, 9
months old will likely be nearly as much work to update as it would be
to install from scratch -- so more images simply means you can take
snapshots more frequently within that limit.

It's worth pointing out again what I mentioned as requirement/advantage
#3 above. With my system, any of the system snapshots is pretty much
self-contained and ready to boot, just as it was when the snapshot was
copied over. No fiddling around restoring stuff if you are up against a
deadline and have to have a working system NOW. If the normal/working
system image fails, you simply boot one of your snapshots and continue
where you left off. You have exactly the same set of applications, with
exactly the same installation-specific configuration, as you did when
you took the snapshot, so you are immediately up and running, only
possibly having to think back on how it was configured back then if
you've changed something since.

On a single disk system, you may be able to get away with two system
images, but you lose the additional safety of the second backup. On a
dual disk system, if space permits, you almost certainly want at least
one image on each disk: the main/working image and one backup on,
presumably, the bigger disk, the other backup on the other disk, thus
maintaining a fully working system if either disk goes out. On a RAID
system, you'll probably want your images in RAID-5, -6, or -10,
/possibly/ RAID-1/mirrored if you have only a two-disk RAID or need the
extreme redundancy.

Taking a few paragraphs out to discuss RAID for a moment. RAID-0, as
mentioned, is striped for speed, with speed scaling as more devices are
added, up to the speed of the buses they are attached to, but it offers
no redundancy, so lose any one disk and you lose it all. RAID-1, as
mentioned, is mirrored for redundancy -- the data is simply copied to
each device in the RAID, and the RAID can continue to function as long
as even a single device survives. RAID-1 (kernel implemented, anyway)
is slow writing, as the data must be written once to each device, but
slightly faster than single-disk for reading under single-tasking
scenarios, and faster still under multi-tasking I/O scenarios, as the
kernel will split the tasks between devices.

RAID-4, -5 and -6 are variations on parity RAID. RAID-4 is devices
minus one for data, with that one device dedicated to parity protection.
If any data device goes down, the parity device can reconstruct the
data, altho it's slower. Read speed is comparable to a striped array
with the same number of data devices, but write speed is bottlenecked by
the parity device, which is always written -- the reason RAID-4 isn't
commonly used any more. RAID-5 is RAID-4 but with the parity stripe
alternating between devices, thus eliminating the write bottleneck of
RAID-4. Both read and write speeds compare to a striped array of one
less device. RAID-6 is double-parity, again with alternating stripes,
so a four-device RAID-6 array is comparable in speed to a two-way
striped array. Any TWO devices can die in a RAID-6, and it can still
recover, or even continue to operate, altho at reduced speed.

RAID-10 is a combination of RAID-0 and RAID-1, mirrored and striped.
(There's also RAID-0+1, which is the reverse combination but less
efficient. I can never keep straight which is which, but that's fine,
as I'm not using it; I remember the concept, and that if I DO find a use
for it, I need to look it up again.) Speeds are typically half the
speed of a striped array of the same size, but with symmetric mirroring
redundancy. RAID-10 is typically used in large arrays, where its
performance is far better than RAID-5 or -6, while RAID-5 or -6 are
typically used in smaller arrays, or where total byte capacity is more
important than speed. RAID-10 also maintains degraded (one or more bad
devices but still operational) speed better than RAID-5 and -6 do, so
it's the better choice where continued operation at full speed, even as
individual devices fail, is a must.

As I mentioned, for my little four-disk RAID, RAID-6 seemed the best
compromise of redundancy and speed for my needs, so that's what I chose
to implement my "everything else" data protection on. The practical
effect is a two-way striped array, on which I could lose any two of the
four drives and still have the data on the RAID intact.

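For reference, creating a partitioned array like mine with mdadm looks
roughly like this (--auto=part asks for a partitionable md_d device;
then carve it up with fdisk as you would any disk):

mdadm --create /dev/md_d1 --auto=part --level=6 \
      --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
fdisk /dev/md_d1
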
Below I cover partitioned RAID vs. LVM2 managed volumes. Here, I'll
simply mention that I have my system images implemented as partitioned
RAID (md_d partitioned devices, vs. plain md plus LVM2), as it
*significantly* decomplicates booting and recovery issues.

That leaves high-value data: installation-specific data (such as /home
and /usr/local) for which the redundancy of RAID is NOT sufficient
protection.

As with system images, here you'll want backup images of the data even
if it's on RAID, to protect against "sysadmin fat-finger syndrome" among
other things. Actual configuration and number of images will vary by
purpose and site, but the idea is to always keep at least two copies of
every "partition". Single disk systems will want at least a second
on-disk copy of each such partition, and will likely want removable
backup copies of the most critical stuff. Dual disk systems will likely
want to keep one image of at least the most critical stuff on each disk,
just in case, thus lessening the need for removable media backup (altho
not eliminating it entirely for the super-critical stuff). Likewise for
RAID systems -- a second image protected by RAID is a must, but that
plus the RAID protection may well be considered enough, given the
lowered risk and the hassle factor of removable media backups, except
for the most critical stuff, of course. In practice, it's likely that
what would be DVDs worth of data needing some sort of backup can be
reduced to CDs or thumb-drives worth of truly critical data worth
backing up beyond the second or third on-disk image.

Here, I didn't worry much about the number of partitions, as I chose to
manage everything using LVM2 (Logical Volume Management), which
maintains most of the advantages of partitions (with the exception of
direct bootability, thus the lower suitability for bootable system
images) but is VASTLY more flexible. It's out of scope to detail LVM2
further here, save to note that even for single disk systems it could be
worth the trouble to research it.

Now, a few more notes on implementation and practicality issues.

For general system management, I find mc, aka Midnight Commander, an
extremely valuable tool in my sysadmin toolbox. If you aren't familiar
with it yet but are interested in an ncurses-based
file-manager/viewer/editor complete with scriptable macros and syntax
highlighting, it's certainly worth trying: app-misc/mc.

As with most of my sysadmin tasks, I use mc when I'm working on my
partition images. Its copy and move modes maintain attributes by
default and just do "The Right Thing" (TM) when it comes to symlinks,
socket and device files, and other assorted filesystem irregularities of
the *ix world. What could be simpler than simply copying existing files
from the working location to the backup/snapshot location? Using mc's
two-window layout, it's that easy -- at least after the issue in the
next paragraph is dealt with, anyway. Sure, one can use the archive
switch etc. on a command line copy (see the sketch below), or do
impressive stuff using tar, but for me, simply copying it with mc
accomplishes my objective with the least fuss and muss.

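For the command line folks, the equivalents would be something like this
(source and target mount points hypothetical):

cp -a /mnt/source/. /mnt/target/

# or the classic tar pipe, to the same effect:
(cd /mnt/source && tar cf - .) | (cd /mnt/target && tar xpf -)
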
So what's that issue I mentioned? Well, simply this. Particularly for
the system image (the / filesystem, with /etc and much of /usr and /var)
upon which everything else is mounted, simply copying won't /normally/
stop at the system image (aka partition) boundary as we'd like in this
case. There's also the issue of trying to copy the raw device files in
/dev with udev mounted there, and other similar things to worry about.

As it turns out, there's a relatively easy way around that. Simply use
the mount --bind option to mount a view of the root filesystem (only,
without everything mounted on it recursively) elsewhere, and copy /that/
view of the filesystem, instead of trying to copy the normal one. This
is most easily accomplished with an entry in fstab similar to the
following:

# Dev/Part  MntPnt         Type  MntOpt       D F
# for mount --bind, --rbind, and --move ###
# /old/dir  /new/dir       none  bind         0 0
/.          /mnt/rootbind  none  bind,noauto  0 0

Now, when I go to create a system image snapshot, I can simply
"mount /mnt/rootbind", and then simply copy (using mc, naturally)
everything from there to my new image snapshot. Note that the dev/part
is listed as /. -- since there's already an entry for the root
filesystem mounted at /, I can't use that or mount gets confused. Of
course, /. is self-referencing, and this little trick keeps mount happy.
I had fun trying to figure it out. =8^)

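Putting the pieces together, a snapshot run boils down to something like
this (device names from my layout, detailed below; I actually do the
copy with mc, as I said, but cp -a does fine):

mount /dev/md_d1p3 /mnt/bak2   # the oldest snapshot, being replaced
rm -rf /mnt/bak2/*             # erase it
mount /mnt/rootbind            # the bind-mount view from fstab above
cp -a /mnt/rootbind/. /mnt/bak2/
umount /mnt/rootbind
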
One more caveat: A very few files may be read-locked under normal
operation and thus not able to be copied directly. I've never run into
that here, but from what I understand, a typical example might be a
database file. MySQL users and others running databases might want to
pay attention here, but as I said, I've never run into issues, and those
files shouldn't be on the system image anyway; they belong in the
high-value data section, likely in their own dedicated partition, which
should make special-casing that particular imaging task not too
difficult.

OK, now a word about fstab management. What I've found works best is to
keep three separate fstabs (one for each system image), call them
fstab.working, fstab.bak1, and fstab.bak2, if you wish. They will be
essentially the same, except that the root filesystem will be switched
around with the (noauto optioned, so they don't automount) two backup
system image snapshots. Then on the working image, make fstab a symlink
pointing to fstab.working. When you take a snapshot image, before you
unmount after the copying, simply point the fstab symlink at the
appropriate fstab.bak? file. Each system image then has a copy of all
three files as of the time the snapshot was taken, with the symlink
adjusted to point at the right fstab.* file, so everything just works
when that image is booted.

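In shell terms, continuing the snapshot sketch above, the final step
before unmounting the new snapshot is just:

ln -sfn fstab.bak2 /mnt/bak2/etc/fstab   # this image boots as bak2
umount /mnt/bak2

while the working image keeps its /etc/fstab -> fstab.working symlink.
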
Finally, a bit more detail on the LVM2 vs. partitioned RAID choices I
made, and why. First, it's worthwhile to note that a lot of the
documentation still says (incorrectly) that Linux kernel RAID doesn't
handle partitions. Linux kernel RAID (and the corresponding mdadm; I
don't know about the older raidtools) has worked with partitioned RAID
devices since at least early in the 2.6 cycle, and I think sometime in
the 2.5 development kernel cycle.

Second, there's an important difference between the way Linux handles
LVM2 and the way it handles RAID, including partitioned RAID. The RAID
device drivers are entirely contained within the kernel (provided they
are configured as built-in, not modules), and can be configured on the
kernel command line (thus from GRUB), so one can boot RAID, including
partitioned RAID, directly. Not so with LVM (both 1 and 2), which
requires userspace configuration to work, thus complicating the
root-on-LVM2 boot scenario dramatically. While it's still possible to
use an initrd/initramfs containing the LVM2 config for at least the root
device, that's VASTLY more complex than simply tweaking the kernel
command line in GRUB. Of course, should things go wrong with LVM2,
fixing the problem is likely to be VASTLY more complex as well, while
with RAID, including partitioned RAID, if it's fixable, it's fixable
from GRUB by altering the kernel command line, or in the event of a
wrongly configured kernel, simply booting back to the old kernel that
handled it correctly.

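As an illustration, if memory serves, booting one of the backup system
images on the partitioned RAID-6 takes nothing more than a grub.conf
entry on this pattern (kernel filename hypothetical; the md= parameter
tells the kernel to assemble the partitioned array from the listed
component devices at boot):

title Gentoo rootbak1
root (hd0,0)
kernel /bzImage root=/dev/md_d1p2 md=d1,/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2
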
Thus, for ease of boot maintenance and recovery, I chose NOT to use LVM2
on my bootable system rootfs images. Once one of those is up and
mounted, I have the full system with all the usual tools available to
me, to recover or reconfigure additional RAID devices, or troubleshoot
LVM2, if either one is necessary. Thus, it's no big deal having my
mid-value and high-value data on LVM2, where it's easier to manage the
volumes than it would be to manage partitions. I just don't want the
additional complexity of having to worry about LVM2 on my main root
filesystem, whether the working image or the backup images, since the
purpose is to have all of them bootable, and managing that is VASTLY
less complex with direct kernel boot than it would be if I had to worry
about an initramfs to load LVM2.

So, after all the above, here as an example is how I have my system
configured (roughly):

For each drive, /dev/sd[a-d], same sized partitions on each of the four
300 gig SATA drives:

GRUB on the MBR

/dev/sdX1: partition forming part of a RAID-1 /dev/md0 for /boot

/dev/sdX2: partition forming part of a partitioned RAID-6 /dev/md_d1

/dev/sdX3: partition for swap, all four partitions pri=1, so the
kernel stripes them automatically when it activates them (fstab
sketch after this list)

/dev/sdX4: extended partition, reserved/ignore

/dev/sdX5: partition forming part of a partitioned RAID-0 /dev/md_d2,
tho I didn't actually need this one partitioned

/dev/sdX6 and beyond: ~100 gig remains free and unpartitioned on each
300 gig drive, to be allocated later as needed.

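The promised swap sketch -- equal pri= values in fstab tell the kernel
to treat the four swap partitions as one interleaved/striped pool:

/dev/sda3   none   swap   sw,pri=1   0 0
/dev/sdb3   none   swap   sw,pri=1   0 0
/dev/sdc3   none   swap   sw,pri=1   0 0
/dev/sdd3   none   swap   sw,pri=1   0 0
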
The partitions on the RAID-6 /dev/md_d1 are as follows:

/dev/md_d1p1: working system image
/dev/md_d1p2: /mnt/bak1 (first backup system image)
/dev/md_d1p3: /mnt/bak2 (second backup system image)
/dev/md_d1p4: reserved as an LVM2 physical volume, making up the volume
group vgraid6

vgraid6 has the following logical volumes:

/dev/vgraid6/home
/dev/vgraid6/homebk (the backup /home image)
/dev/vgraid6/local (this is /usr/local)
/dev/vgraid6/localbk
/dev/vgraid6/log (/var/log, raided but not worth a backup image)
/dev/vgraid6/mail
/dev/vgraid6/mailbk
/dev/vgraid6/mm (multimedia)
/dev/vgraid6/mmbk
/dev/vgraid6/news (raided but no backup, again, mid-value data)
/dev/vgraid6/pkg (portage's package dir, for FEATURES=buildpkg)
/dev/vgraid6/pkgbk

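Setting that up is the usual LVM2 three-step, roughly (sizes
hypothetical; one lvcreate per volume):

pvcreate /dev/md_d1p4
vgcreate vgraid6 /dev/md_d1p4
lvcreate -L 10G -n home vgraid6
lvcreate -L 10G -n homebk vgraid6
mkfs.ext3 /dev/vgraid6/home    # or your filesystem of choice
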
The RAID-0/striped /dev/md_d2p1 (the only partition I'm using) mounts at
/mnt/str (for striped). Subdirs:

/mnt/str/ccache
/mnt/str/portage (the portage tree, including distdir but not pkgdir)
/mnt/str/usrsrc (for /usr/src/linux)

It would have /tmp as well, except that with 8 gigs of memory, I mount
/tmp as tmpfs, maximum size 5 gig. /var/tmp symlinks to /tmp, so all
portage's compiles use the tmpfs for scratch space, dramatically
speeding up merges! =8^)

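In fstab, that's just a line like:

tmpfs   /tmp   tmpfs   size=5g   0 0
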
Obviously, I've spent quite some time devising a solution that fits my
needs. There's only one big thing I'd change about my layout, and I
already accounted for that as described above: I actually have only two
system images at the moment, working and a single backup. Next time I
reorganize (remember, I've got a third of the space remaining on the
drives, a hundred gigs each, to do so), I'll create a third system
image. I'll probably do something different with swap as well, since I
only made those partitions four gigs each, and with 8 gigs of memory,
and the fact that in-kernel suspend-to-disk writes to a single
device/partition only, that's not enough to properly suspend, even if
the kernel's implementation /does/ work right. (Last I checked, back
when I had only a gig of RAM, it didn't, but there have been some
dramatic changes going on in that department, so it might now -- except
I'd error out due to lack of room in the partition!)

While the above is my current RAID layout, I was using something very
similar back when I had only two disks, as described. The layout now is
a bit more sophisticated, however. In particular, I had /usr and /var
on their own partitions back then, but as I said, that led to serious
problems when one died and they got out of sync with each other, thus
the warnings about that, and the current layout folding those back into
the single system image. The idea is solid and had demonstrated itself
over two partial drive failures, as I mentioned, before I decided I
wanted the reliability of RAID.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman