On Thu, Sep 1, 2016 at 2:04 AM, gevisz <gevisz@×××××.com> wrote:
>
> Is it still advisable to partition a big hard drive
> into smaller logical ones and why?
>

Assuming this is only used on Linux machines (you mentioned moving
files around), here is what I would do:
|
1. Definitely create a partition table. Yes, I know some like to
stick filesystems on raw drives, but you're basically going to fight
all the automation in existence if you do this.
2. Set it up as an LVM partition. Unless you're using filesystems
like zfs/btrfs that have their own way of doing volume management,
this just makes things less painful down the road.
3. I'd probably just set it up as one big logical volume, unless you
know you don't need all the space and you think you might use it for
something else later. You can change your mind on this with ext4+lvm
either way, but better to start out whichever way seems best.
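
The three steps above might look something like this (a sketch only:
/dev/sdX and the vg_big/data names are placeholders, and every command
here is destructive, so triple-check the device name first):

```shell
parted /dev/sdX mklabel gpt                 # 1. create a GPT partition table
parted -a optimal /dev/sdX mkpart primary 0% 100%
parted /dev/sdX set 1 lvm on                # mark the partition for LVM
pvcreate /dev/sdX1                          # 2. LVM physical volume
vgcreate vg_big /dev/sdX1                   # volume group (name is arbitrary)
lvcreate -l 100%FREE -n data vg_big         # 3. one big logical volume
mkfs.ext4 /dev/vg_big/data                  # format it
```

If you later change your mind, lvresize plus resize2fs lets you shrink
or grow the ext4 volume without repartitioning.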

It will take you all of 30 seconds to format this, unless you're
running badblocks (which almost nobody does, because...).

You seem to be concerned about losing data. You should be. This is a
physical storage device. You WILL lose everything stored on it at
some point in time. You mitigate this by one or more of:
1. Not storing anything you mind losing on the drive, and then not
complaining when you lose it.
2. Keeping backups, preferably at a different physical location,
using a periodically tested recovery methodology.
3. Availability solutions like RAID (not the same as a backup, but it
will mean less downtime WHEN you inevitably have a drive failure).
Some filesystems like zfs/btrfs have specific ways of achieving this
(and are generally more resistant to unreliable storage devices,
which all storage devices are).
36 |
|
37 |
I've actually had LVM eat my data once due to some kind of really rare |
38 |
bug (found one discussion of similar issues on some forum somewhere). |
39 |
That isn't a good reason not to use LVM. Wanting to plug the drive |
40 |
into a bunch of Windows machines would be a good reason not to use |
41 |
LVM, or ext4 for that matter. |
42 |
|
43 |
Most of the historic reasons for not having large volumes had to do |
44 |
with addressing limits, whether it be drive geometry limits, |
45 |
filesystem limits, etc. Modern partition tables like GPT and |
46 |
filesystems can handle volumes MUCH larger than 5TB. |
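
For a concrete sense of the old limit: a classic MBR partition table
stores sector counts in 32-bit fields, so with 512-byte sectors a
partition tops out at 2 TiB, well under a 5TB drive. GPT uses 64-bit
sector addresses, so it is nowhere near a limit:

```shell
# 2^32 sectors of 512 bytes each, expressed in TiB (>> 40 divides by 2^40)
echo "$(( (1 << 32) * 512 >> 40 )) TiB"   # prints: 2 TiB
```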
47 |
|
48 |
Most modern journaling filesystems should also tend to avoid failure |
49 |
modes like losing the entire filesystem during a power failure (when |
50 |
correctly used, heaven help you if you follow a random friend's advice |
51 |
with mount options, like not using at least ordered data or disabling |
52 |
barriers). But, bugs can exist, which is a big reason to have backups |
53 |
and not just trust your filesystem unless you don't care much about |
54 |
the data. |
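
To make the mount-option point concrete, here is a sketch of a sane
fstab entry (the device and mountpoint are placeholder names). On
current kernels data=ordered and barriers are already the ext4
defaults, so the real danger is overriding them with something like
data=writeback or nobarrier:

```
# /etc/fstab sketch -- /dev/vg_big/data and /data are made-up names
/dev/vg_big/data  /data  ext4  defaults,data=ordered,barrier=1  0  2
```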

--
Rich