2016-09-01 14:55 GMT+03:00 Rich Freeman <rich0@g.o>:
> On Thu, Sep 1, 2016 at 2:04 AM, gevisz <gevisz@×××××.com> wrote:
>>
>> Is it still advisable to partition a big hard drive
>> into smaller logical ones and why?
>>
>
> Assuming this is only used on Linux machines (you mentioned moving
> files around), here is what I would do:
>
> 1. Definitely create a partition table. Yes, I know some like to
> stick filesystems on raw drives, but you're basically going to fight
> all the automation in existence if you do this.

I will do it with GParted, which, I guess, will create a partition
table for me anyway.
|
> 2. Set it up as an LVM partition. Unless you're using filesystems
> like zfs/btrfs that have their own way of doing volume management,
> this just makes things less painful down the road.
>
> 3. I'd probably just set it up as one big logical volume, unless you
> know you don't need all the space and you think you might use it for
> something else later. You can change your mind on this with ext4+lvm
> either way, but better to start out whichever way seems best.

I had to refresh my memory about LVM before replying to you,
but I still cannot see why I would need LVM on an external
hard drive...
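
As far as I can tell, the win even on a single external drive is resizing: start with one big volume, then later shrink it, carve out a second volume, or grow onto another disk without repartitioning. A sketch of the setup, with placeholder names (/dev/sdX1, ext_vg, data) and needing root:

```shell
# Placeholder device and volume names; needs root and destroys data.
pvcreate /dev/sdX1                   # mark the partition for LVM
vgcreate ext_vg /dev/sdX1            # one volume group on the drive
lvcreate -n data -l 100%FREE ext_vg  # one big logical volume
mkfs.ext4 /dev/ext_vg/data

# Later, the part that makes LVM worth it: resize the filesystem
# and the volume in one step, e.g.
# lvresize --resizefs -L 4T /dev/ext_vg/data
```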
|
> It will take you all of 30 seconds to format this, unless you're
> running badblocks (which almost nobody does, because...

it takes too much time?

I am currently running a SMART test on it, and it promised to take
10 hours to complete...
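
Both checks are slow for a reason: they read (or rewrite) the whole surface. For reference, a sketch with /dev/sdX again as a placeholder, run as root:

```shell
# Placeholder device; both tools need root.
smartctl -t long /dev/sdX   # start the long self-test (runs inside the drive)
smartctl -a /dev/sdX        # poll this later for progress and results

# badblocks in destructive write mode makes four full passes over the
# disk, which is why almost nobody runs it (and it erases everything):
badblocks -wsv /dev/sdX
```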
|
> You seem to be concerned about losing data. You should be. This is a
> physical storage device. You WILL lose everything stored on it at
> some point in time.

Last time, I managed to restore all the data from my 2.5" hard
drive that suddenly died about 7 years ago, and I hope to be able
to do it again if need be. :)

> You mitigate this by one or more of:
> 1. Not storing anything you mind losing on the drive, and then not
> complaining when you lose it.
> 2. Keeping backups, preferably at a different physical location,
> using a periodically tested recovery methodology.
> 3. Availability solutions like RAID (not the same as a backup, but it
> will mean less downtime WHEN you WILL have a drive failure). Some
> filesystems like zfs/btrfs have specific ways of achieving this (and
> are generally more resistant to unreliable storage devices, which all
> storage devices are).
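
Point 2 is the one worth automating, and the "periodically tested" part matters as much as the copy itself. A minimal, self-contained sketch (toy paths, tar-based) that backs up a directory and then actually verifies the restore:

```shell
#!/bin/sh
# Toy sketch of "backup + tested recovery"; all paths are made up.
set -e

# Stand-in for the data you care about
mkdir -p important-data
echo "irreplaceable" > important-data/notes.txt

# Back it up
tar -czf backup.tar.gz important-data

# The "periodically tested" part: restore into a scratch directory
# and compare byte-for-byte against the original.
rm -rf restore-test && mkdir restore-test
tar -xzf backup.tar.gz -C restore-test
diff -r important-data restore-test/important-data && echo "backup verified"
```

A real version would point at the external drive (or a remote host) instead of a local scratch directory, but the restore-and-diff step is the part most people skip.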
>
> I've actually had LVM eat my data once due to some kind of really rare
> bug (found one discussion of similar issues on some forum somewhere).

Aha!

> That isn't a good reason not to use LVM. Wanting to plug the drive
> into a bunch of Windows machines would be a good reason not to use
> LVM, or ext4 for that matter.
>
> Most of the historic reasons for not having large volumes had to do
> with addressing limits, whether it be drive geometry limits,
> filesystem limits, etc. Modern partition tables like GPT and
> filesystems can handle volumes MUCH larger than 5TB.
>
> Most modern journaling filesystems should also tend to avoid failure
> modes like losing the entire filesystem during a power failure (when
> correctly used, heaven help you if you follow a random friend's advice
> with mount options, like not using at least ordered data or disabling
> barriers). But, bugs can exist, which is a big reason to have backups
> and not just trust your filesystem unless you don't care much about
> the data.
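
To make the mount-option warning concrete: ext4's defaults already give you ordered data and write barriers, so the danger lies specifically in turning them off. A sketch of a sane /etc/fstab line, where the UUID is a placeholder (get the real one from blkid):

```
# /etc/fstab - placeholder UUID; see blkid for the real one.
# "defaults" on ext4 already implies data=ordered with barriers on;
# the risky "go faster" advice is data=writeback and nobarrier/barrier=0.
UUID=xxxx-xxxx  /mnt/external  ext4  defaults,noatime  0  2
```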
|
Thank you for replying. |