On Wed, 17 Feb 2016 22:26:32 -0500 Richard Yao wrote:
> On 02/17/2016 02:01 PM, Andrew Savchenko wrote:
> > On Tue, 16 Feb 2016 15:18:46 -0500 Rich Freeman wrote:
> >> On Tue, Feb 16, 2016 at 2:31 PM, Patrick Lauer <patrick@g.o> wrote:
> >>>
> >>> The failure message comes from rc-mount.sh when the list of PIDs using a
> >>> mountpoint includes "$$" which is shell shorthand for self. How can the
> >>> current shell claim to be using /usr when it is a shell that only has
> >>> dependencies in $LIBDIR ?
> >>> As far as I can tell the code at this point calls fuser -k ${list of
> >>> pids}, and fuser outputs all PIDs that still use it. I don't see how $$
> >>> can end up in there ...
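
For context, a rough sketch of the kind of fuser-based check being
discussed (an illustration only, not the actual rc-mount.sh code).
One way "$$" can show up is if the checking shell's own working
directory or an open file descriptor lies on that filesystem:

    #!/bin/sh
    # fuser -m lists the PIDs of every process using the filesystem
    # that the given path lives on (PIDs on stdout, names on stderr).
    pids=$(fuser -m /usr 2>/dev/null)
    for pid in $pids; do
        # If our own PID is in the list, the checking shell itself
        # counts as a user of the mountpoint.
        [ "$pid" = "$$" ] && echo "shell $$ itself is holding /usr"
    done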

> >>
> >> What does openrc do when the script fails? Just shut down the system anyway?
> >>
> >> If you're going to shut down the system anyway then I'd just force the
> >> read-only mount even if it is in use. That will cause less risk of
> >> data loss than leaving it read-write.
> >>
> >> Of course, it would be better still to kill anything that could
> >> potentially be writing to it.
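
The forced read-only fallback suggested here would look roughly like
the following (a sketch, not what OpenRC actually runs):

    # Remount read-only in place; if that fails, umount -r tries to
    # unmount and itself falls back to a read-only remount.
    mount -o remount,ro /usr || umount -r /usr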

> >
> > This is not always possible. Two practical cases from my experience:
> >
> > 1) NFS v4 shares can't be unmounted if the server is unreachable
> > (even with -f). If a filesystem (e.g. /home or /) contains such a
> > stuck mount point, it can't be unmounted either, because it is
> > still in use. This happens quite often if both the NFS server and
> > the client are running from a UPS on a low-power event (AC power
> > failed and the battery is almost empty).
>
> Does `umount -l /path/to/mnt` work on those?

No, if the mount point is already stalled, -l is of no use.
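
For illustration, the failing sequence on a stalled NFSv4 mount looks
roughly like this (the mountpoint /mnt/nfs is hypothetical):

    # Server unreachable: in-flight RPCs hang, so unmounts go nowhere.
    umount /mnt/nfs       # blocks or reports the filesystem as busy
    umount -f /mnt/nfs    # force: still fails while RPCs are stuck
    umount -l /mnt/nfs    # lazy: detaches the name, but in this
                          # situation the stalled mount still pins the
                          # parent filesystem, which stays "in use"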

> > 2) LUKS device is in frozen state. I use this as a security
> > precaution if LUKS fails to unmount (or it takes too long), e.g.
> > due to dead mount point.
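
The freeze here is presumably cryptsetup's suspend mechanism; the
device name below is illustrative:

    # Freeze all I/O on the mapping and wipe the volume key from
    # kernel memory; data stays inaccessible until luksResume.
    cryptsetup luksSuspend home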

>
> This gives me another reason to justify being a fan of integrating
> encryption directly into a filesystem

Ext4 and f2fs do this, but with a limited set of ciphersuites
available.
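
For example, ext4's native encryption is enabled roughly like this
(the device, directory, and key-descriptor placeholder are all
illustrative):

    # Turn on the encrypt feature, load a key into the session
    # keyring, then mark a directory as encrypted with that key.
    tune2fs -O encrypt /dev/sdXn
    e4crypt add_key                   # prompts for a passphrase and
                                      # prints the key descriptor
    e4crypt set_policy <descriptor> /mnt/data/secret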

Actually problems with LUKS are not critical: I have never lost data
or integrity there, and had only a small security issue. The only
failure I can remember was the libgcrypt whirlpool issue (an invalid
implementation in old versions and an incompatible fix in new ones).

> or using ecryptfs on top of the VFS.

No, never. Not on my setups. Ecryptfs is
1) insecure (it leaks several bytes of data);
2) unreliable, as it depends on boost and other high-level C++
stuff. I once lost the ability to decrypt data because of a boost
XML versioning change.

> The others were possible integrity concerns (which definitely
> happen with a frozen state,

In theory, maybe. In real life, no. I have used LUKS for over 8
years, often with frozen shutdowns, and I have never lost data
there. In terms of data integrity LUKS + ext4 is as solid as it
gets: this combination survived for several years even on a host
with failing RAM.

> although mine were about excessive layering
> adding opportunities for bugs) and performance concerns from doing
> unnecessary calculations on filesystems that span multiple disks (e.g.
> each mirror member gets encrypted independently).

Ehh... why independently? Just create an mdadm array with a proper
chunk size, put LUKS on top of it, and align both LUKS and the inner
filesystem to that chunk size and the relevant stride; then the data
is encrypted only once and performance will be optimal. On SSDs this
is harder, because it is very difficult to determine the erase block
size properly, but that is another issue.
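
A rough recipe (device names, RAID level, and the 512 KiB chunk are
illustrative, not a tested tuning):

    # RAID assembled first, so the data is encrypted only once.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          --chunk=512 /dev/sd[abcd]1

    # LUKS on top, payload aligned to the 512 KiB chunk
    # (--align-payload counts 512-byte sectors: 512K = 1024).
    cryptsetup luksFormat --align-payload=1024 /dev/md0
    cryptsetup luksOpen /dev/md0 cryptmd

    # ext4 aligned to the RAID geometry: with 4 KiB blocks,
    # stride = 512K / 4K = 128; stripe-width = stride * 2 data disks.
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/mapper/cryptmd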

Best regards,
Andrew Savchenko