Beso <givemesugarr@×××××.com> posted
d257c3560710111422ne24dda4gf8bfd77dbd07f556@××××××××××.com, excerpted
below, on Thu, 11 Oct 2007 23:22:28 +0200:

> i'd like to commute my laptop system to lvm2, but before doing that i'd
> like some hints.
|
I run LVM on RAID, so let's see... =8^)

> first, i'd like to know if there's a way of passing an existing gentoo
> installation on a hda disk going through the sda stack (i needed to do
> that cause it was the only thing that fixed a problem with my ata disk
> going only on udma-33 after kernel 2.20) with sata-pata piix controller
> which is still experimental. i've taken a look around but haven't
> actually seen a document explaining if it is possible to do that and how
> to do that.

I don't know that specific controller, but in general, if it's the
correct libata SATA/PATA driver, the conversion from using the old IDE
drivers is pretty straightforward. On the kernel side, it's just a
matter of selecting the libata driver instead of the old IDE driver.
|
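To give a rough idea of where that switch happens in make menuconfig
(menu and option names here are from memory and shift around a bit
between kernel versions, so treat this as a sketch rather than gospel):

Device Drivers  --->
  ATA/ATAPI/MFM/RLL support  --->          (the old IDE drivers --
                                            turn these OFF, CONFIG_IDE)
  SCSI device support  --->
    <*> SCSI disk support                  (CONFIG_BLK_DEV_SD)
    <*> SCSI CDROM support                 (CONFIG_BLK_DEV_SR)
  Serial ATA (prod) and Parallel ATA (experimental) drivers  --->
    <*> ATA device support                 (CONFIG_ATA)
    <*> Intel PIIX/ICH SATA support        (CONFIG_ATA_PIIX)

Leaving the old IDE support off also avoids any chance of the two
drivers fighting over the same controller.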
Configuring mount-devices, you use /dev/sdX or /dev/srX, depending.
(I'm not actually sure which one IDE drives use -- I run SATA drives but
have my DVD burners on PATA still, using the libata drivers, and they
get /dev/srX not /dev/sdX.) Basically, try both from grub and see which
works. Once you figure out which devices it's using (srX or sdX), you'll
need to set up your fstab using the correct ones.
|
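Just to illustrate (partition numbers and filesystems here are invented
for the example, use whatever matches your actual layout), the fstab
change really is just the device names:

# /etc/fstab, after the switch to libata: hdaN becomes sdaN, and the
# PATA DVD burner shows up as sr0 (example partitions/filesystems only)
/dev/sda1   /boot      ext2      noauto,noatime    1 2
/dev/sda2   none       swap      sw                0 0
/dev/sda3   /          ext3      noatime           0 1
/dev/sr0    /mnt/dvd   iso9660   noauto,ro,user    0 0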
Keep in mind that it's possible your device numbering will change as
well, particularly if you already had some SATA or SCSI devices. That's
the most common problem people run into -- their device numbers changing
around on them, because the kernel will load the drivers and test the
drives in a different order than it did previously. Another variation on
the same problem is that devices such as thumb drives and MP3 players
typically load as SCSI devices, so once you switch to SCSI for your
normal drives, you may find the device numbering changing depending on
whether your MP3 player/thumb drive is plugged in or not at boot. One
typical solution to this problem, if you find you have it, is to use the
other shortcuts automatically created by udev. Here, I use
/dev/disk/by-label/xxxxx for my USB drive mount-devices in fstab, thus
allowing me to keep them straight and access them consistently,
regardless of whether I have one or more than one plugged in at once.
|
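For example (label and mountpoint invented for illustration; set the
label first with e2label, mke2fs -L, mkdosfs -n or the like):

# /etc/fstab, mounting by label instead of by device node
/dev/disk/by-label/mp3player   /mnt/mp3player   vfat   noauto,user   0 0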
Finally, there's one more possible complication. SCSI drives are
normally limited to 15 partitions, while the old IDE drivers allowed 63.
If you run >15 partitions, better do some reconfiguring... but you're
already on the right track, as that's right where LVM comes in! =8^)

> second, i'd like to know if there's a need for a raid
> enabled motherboard and more than one disk to go on lvm. i only have a
> 100gb disk that i'd like to convert to lvm with no raid.
|
LVM and RAID are two different layers. While it's common to run LVM on
top of RAID, it's not necessary at all. It's perfectly fine to run LVM
on a single drive, and in fact, a lot of folks run LVM not because it
helps manage RAID, but because it makes managing their volumes (think
generic for partitions) easier, regardless of /what/ they are on. =8^)

[I reordered this and the next question]
> i'd like to use it on amd64. is there any problem? i have seen around
> some problems with lvm and amd64 some of them marked as solved, so i'd
> like to know if there could be problems with this arch. thanks for your
> help.
|
I've had absolutely no issues at all with LVM on amd64, here. Once
configured, it has "just worked". Note that I'm running LVM2 and NOT
running LVM1 compatibility mode, as I set up LVM long after LVM2 was the
recommended deployment. It's possible/likely that the amd64 problems
were long ago, with LVM1 configurations.
|
> and last, does it make sense doing a passage to lvm? i currently run
> into some problems with my root partition that gets filled and that i
> always have to watch the free space on it, so if i don't pass to raid
> i'll try to duplicate the partition on a greater one.

As I mentioned, a lot of folks swear by LVM for managing all their
volumes, as once you get the hang of it, it's simply easier. They'd
CERTAINLY say it's worth it.
|
There is one caveat. Unlike RAID, which you can run your root filesystem
off of (and /boot as well, for RAID-1), LVM requires userspace
configuration. Therefore, you can't directly boot off of LVM. I don't
believe there's a way to put /boot on LVM at all because, to my
knowledge, neither grub nor lilo grok LVM (though it's possible I'm
wrong; I've just never seen anything saying it's possible, nor has
anyone corrected this statement of belief when I've made it in the past).
|
The root filesystem CAN be on LVM, but it requires running an initrd/
initramfs and proper configuration thereof in order to do so. Here, I
configure my own kernel directly (without using genkernel or whatever),
and while I understand the general idea, I've never taken the time to
figure out initrd/initramfs as I've never needed it, and I strongly
prefer to keep that extra level of complexity out of my boot process if
at all possible. Thus, while I have my root filesystem (and my emergency
backup image of same) on RAID (which the kernel can handle on its own,
entirely automatically in the simple case, or with a couple of simple
kernel command line parameters in more complex situations), I
deliberately chose NOT to put my root filesystem on LVM, thereby making
it possible to continue to boot the main root filesystem directly, with
no initramfs/initrd necessary, as there would be if the root filesystem
were on LVM.
|
So... at minimum, you'll need to have a traditional /boot partition, and
depending on whether you want to run LVM in an initramfs/initrd or would
prefer to avoid that, as I did, you may want your root filesystem on a
traditional partition as well.
|
FWIW, my root filesystems (the main/working copy, and the backup copy...
next time I redo it, I'll make TWO backup copies, so I'll always have one
to fall back to if the worst happens and the system goes down while I'm
updating my backup copy) are 10 GB each. That includes all of what's
traditionally on /, plus most of /usr (but not /usr/local, /usr/src, or
/usr/portage), and most of /var (but not /var/log, and I keep mail and
the like on its own partition as well).
|
The idea with what I put on / here was that I wanted to keep everything
that portage touched on one partition, so it was always in sync. This
was based on past experience with a dying hard drive in which my main
partition went out. I was able to boot to my backup root, but /usr was
on a separate partition, as was /var. /var of course contains the
installed package database, so what happened is that what the package
database said I had installed only partly matched what was really on
disk, depending on whether it was installed to /usr or to /. *That*
*was* *a* *BIG* *mess* to clean up!!! As a result, I decided from then
on, everything that portage installed had to be on the same partition as
its package database, so everything stayed in sync. With it all in sync,
if I had to boot the backup, it might not be current, but at least the
database would match what was actually installed since it was all the
same backup, and it would be *far* easier to /make/ current, avoiding the
problems with orphaned libraries etc. that I had for months as a result
of getting out of sync.
|
Of course, as an additional bonus, since I keep root backup volumes as
well, with the entire operational system on root and therefore on the
backups, if I ever have to boot to the backup, I have a fully configured
and operational system there, just as complete and ready to run as was my
main system the day I created the backup off of it.
|
So FWIW, here, as I said, 10 gig root volumes, designed so that
everything portage installs, along with its package database, is on
root. According to df /:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md_d1p1          9.6G  1.8G  7.8G  19% /
|
So with / configured as detailed above, at 19% usage, 10 gigs is plenty
and to spare. I could actually do it in 5 gig and still be at <50%
usage, but I wanted to be SURE I never ran into a full root filesystem
issue. I'd recommend a similar strategy for others as well, and assuming
one implements it, 10 gig should be plenty and to spare for current and
future system expansion for quite some time. (While I only have KDE
installed, even if one were to have GNOME AND KDE 3.5.x AND the upcoming
KDE 4.x AND XFCE AND..., even then, I find it difficult to see how a 10
gig / wouldn't be more than enough.)
|
Swap: If you hibernate, aka suspend to disk, using the swap partition
for your suspend image, I /think/ it has to be on kernel-only configured
volumes, thus not on LVM. At least with the mainline kernel suspend
(which I use; suspend2 may well be different), the default image size is
half a gig. However, by writing a value (in bytes; why they couldn't
have made it KB or MB I don't know...) to /sys/power/image_size, you can
change this if desired. If you make it at least the size of your memory
(but not larger than the size of the single swap partition it's going
to), you won't lose all your cache at suspend, and things will be more
responsive after resume. The cost is a somewhat longer suspend and
resume cycle, as it's writing out and reading in more data. Still, I've
found it well worth it, here. You can of course create additional swap
space on the LVM later if you want, but you can't use it for the suspend
image, and the additional layer of processing will make it slightly less
efficient (at least in theory; given the bottleneck is the speed of the
disk, in practice it's going to be very slight if even measurable).
|
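Just as a sketch (the 2 GiB figure is an arbitrary example for a box
with 2 gig of RAM; adjust to taste), you can set it from a startup
script or by hand before suspending:

# /sys/power/image_size takes the maximum suspend image size in bytes
echo $((2 * 1024 * 1024 * 1024)) > /sys/power/image_size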
So... assuming you deploy based on the above, here's what you will want
directly on your hard drive as partitions:

/boot   128 meg, half a gig, whatever...

/      \    These three, 10 gig each,
rtbk1   }   again, as partitions
rtbk2  /    directly on the hard drive.

swap    Size as desired. Particularly if you suspend to disk,
        you'll want this on a real partition, not on LVM.

LVM2    Large partition, probably the rest of the disk.
        You then create "logical volumes" managed with LVM2
        on top of this, and then mkfs/format the created logical
        volumes with whatever filesystems you find most
        appropriate, in whatever size you need. (A quick sketch
        of the commands involved follows below.)
|
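Something like the following, roughly (partition, volume group name,
volume names and sizes all invented for the example; see the LVM2 docs
for the details):

pvcreate /dev/sda6            # mark the big partition as an LVM "physical volume"
vgcreate vg /dev/sda6         # create a volume group named "vg" on it
lvcreate -L 20G -n home vg    # carve out a 20 gig logical volume for /home
lvcreate -L 30G -n media vg   # ... and another for media, etc.
mkfs.ext3 /dev/vg/home        # format them like any other block device
mkfs.ext3 /dev/vg/media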
If you keep some extra space in the "volume group", you can then expand
any of the volumes within as necessary. When you do so, LVM simply
allocates the necessary additional space at the end of what's already
used, marking it as in use and assigning it to the logical volume you
want to expand.
|
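For instance (names from the sketch above; note that growing the
logical volume and growing the filesystem on it are two separate steps):

lvextend -L +5G /dev/vg/home   # grow the logical volume by another 5 gig
resize2fs /dev/vg/home         # then grow the ext3 filesystem to match

Whether that second step works with the filesystem still mounted depends
on the filesystem and kernel; if in doubt, unmount first.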
If you run out of space on the drive, you can then add another drive (or
a partition on another drive, or a RAID device, or whatever -- as long as
it's a block device, LVM should be able to use it), tell LVM about it and
that you want it added to your existing volume group, and you'll have
additional room to continue to expand into, all without worrying about
further partitioning or whatever.
|
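In command terms that's roughly (again, device and volume group names
are just examples):

pvcreate /dev/sdb1      # prepare the new drive/partition as a physical volume
vgextend vg /dev/sdb1   # add it to the existing volume group "vg"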
Correspondingly, if you want to reorganize your logical volumes (LVs)
under LVM, moving the data to other volumes and deleting the freed LVs,
that's fine too. Simply do that. The space freed up by the deleted LV
is then free to be used by other logical volumes as necessary. Just tell
LVM you want to do it, and it does it. All you worry about is the
available space to expand LVs into, and LVM takes care of where they are
actually located within the space you've told it to manage as that volume
group (VG).
|
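For example, once the data has been copied elsewhere (volume name again
just an example):

umount /mnt/media
lvremove /dev/vg/media   # the freed extents go back to the volume group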
THE BIG CAVEAT, however, is simply that with LVM managing all of it, it's
all too easy to just continue adding drives to your existing volume
groups, forgetting how old your first drive is, and that it'll eventually
wear out and fail. This is what's so nice about putting LVM on RAID,
since the redundancy of RAID allows drives to fail and be replaced, while
the LVM continues to live on and adapt to your ever changing and normally
ever growing data needs.
|
As long as you catch it before the failure, however, you can simply
create a new VG (volume group) on your new drive (or partition, or set of
drives or partitions, or RAID device(s), or other block device(s)), size
it as necessary to hold your existing data, and copy or move stuff over.
As with the above, you'll have to worry about any non-LVM partitions/
volumes/whatever separately, but the bulk of your data, including all
your user and/or server data, will be in the LVM, with the flexibility it
provides, so even with the limited number of non-LVM partitions I
recommended above, it'll still be vastly easier to manage upgrading
drives than it would be if you were handling all that data in individual
partitions.
|
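Sketched out (new drive, VG name, and volume all invented for the
example; repeat the lvcreate/mkfs/copy steps for each volume you're
migrating):

pvcreate /dev/sdb1         # the new drive (or a partition on it)
vgcreate vg2 /dev/sdb1     # a fresh volume group on the new hardware
lvcreate -L 20G -n home vg2
mkfs.ext3 /dev/vg2/home
# mount it and copy the data over from the old volume, e.g. with
# cp -a or rsync -a, then retire the old VG/drive once you're done.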
So the big thing is, just because it's all on LVM now, and easy to expand
to a new drive, doesn't mean you can forget about the age of your old
drive: if you simply keep expanding, the old drive will eventually fail,
taking all the data on it, and your working system if you've not made
proper arrangements, with it. Remember that (or put it on appropriate
RAID so you have its redundancy backing up the LVM) and LVM can certainly
be worth the trouble of learning how it works and the initial deployment,
yes, even on single disks.
|
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
|
--
gentoo-amd64@g.o mailing list