On Mon, Dec 20, 2021 at 12:52 PM Rich Freeman <rich0@g.o> wrote:
>
> On Mon, Dec 20, 2021 at 1:52 PM Mark Knecht <markknecht@×××××.com> wrote:
> >
> > I've recently built 2 TrueNAS file servers. The first (and main) unit
> > runs all the time and serves to back up my home user machines.
> > Generally speaking I (currently) put data onto it using rsync, but it
> > also has an NFS mount that serves as a location for my Raspberry Pi to
> > store duplicate copies of astrophotography pictures live as they come
> > off the DSLR in the middle of the night.
> >
> > ...
> >
> > The thing is that the ZIL is only used for synchronous writes, and I
> > don't know whether anything I'm doing to back up my user machines,
> > which currently is just rsync commands, is synchronous or could be
> > made synchronous, and I do not know if the NFS writes from the R_Pi
> > are synchronous or could be made so.
> >
>
> Disclaimer: some of this stuff is a bit arcane and the documentation
> isn't great, so I could be missing a nuance somewhere.
>
> First, one of your options is to set sync=always on the zfs dataset,
> if synchronous behavior is strongly desired. That will force ALL
> writes at the filesystem level to be synchronous. It will of course
> also normally kill performance, but the ZIL may very well save you if
> your SSD performs adequately. This still only applies at the
> filesystem level, which may be an issue with NFS (read on).
>
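> For example, something along these lines on the server (the dataset
> name here is just a placeholder):
>
>     zfs set sync=always tank/backups
>     zfs get sync tank/backups
>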
> I'm not sure how exactly you're using rsync from the description above
> (rsyncd, direct client access, etc). In any case I don't think
> rsync has any kind of option to force synchronous behavior. I'm not
> sure if manually running a sync on the server after using rsync will
> use the ZIL or not. If you're using sync=always then that should
> cover rsync no matter how you're doing it.
>
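> If you want to try the manual flush, it would look something like
> this (paths and hostname are made up):
>
>     rsync -a /home/mark/ backup1:/pool/backups/mark/ && ssh backup1 sync
>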
> NFS is a little different, as both the server side and client side
> have possible asynchronous behavior. By default the NFS client is
> asynchronous, so caching can happen on the client before the file is
> even sent to the server. This can be disabled with the mount option
> sync on the client side. That will force all data to be sent to the
> server immediately. Any NFS server or filesystem settings on the
> server side will not have any impact if the client doesn't transmit
> the data to the server. The server also has a sync setting, which
> defaults to on, and it additionally has another layer of caching on
> top of that which can be disabled with no_wdelay on the export. Those
> server-side settings probably delay anything getting to the
> filesystem, and so they would take precedence over any
> filesystem-level settings.
>
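> Put together, that might look like the following (addresses and
> paths are placeholders):
>
>     # on the client, e.g. the R_Pi:
>     mount -o sync nas:/pool/astro /mnt/astro
>
>     # on the server, in /etc/exports:
>     /pool/astro  192.168.1.0/24(rw,sync,no_wdelay)
>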
> As you can see, you need to use a bit of a kill-it-with-fire approach
> to get synchronous behavior, as it traditionally performs so poorly
> that everybody takes steps to try to prevent it from happening.
>
> I'll also note that the main thing synchronous behavior protects you
> from is unclean shutdown of the server. It has no bearing on what
> happens if a client goes down uncleanly. If you don't expect server
> crashes it may not provide much benefit.
>
> If you're using ZIL you should consider having the ZIL mirrored, as
> any loss of the ZIL devices will otherwise cause data loss. Use of
> the ZIL is also going to create wear on your SSD, so consider that and
> your overall disk load before setting sync=always on the dataset.
> Since the setting is at the dataset level you could have multiple
> mountpoints and have a different sync policy for each. The default
> (sync=standard) is normal POSIX behavior which only syncs when
> requested (sync, fsync, O_SYNC, etc).
>
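> A mirrored log device plus per-dataset policies would be roughly
> like this (pool, dataset, and device names are placeholders):
>
>     zpool add tank log mirror /dev/sdX /dev/sdY
>     zfs set sync=always tank/nfs
>     zfs set sync=standard tank/scratch
>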
> --
> Rich
>

Rich & Wols,
Thanks for the responses. I'll post a single response here. I had
thought of the need to mirror the ZIL but didn't have enough physical
disk slots in the backup machine for the 2nd SSD. I do think this is a
critical point if I were to use the ZIL at all.

Based on inputs from the two of you I'm investigating a different
overall setup for my home network:

Previously - a new main desktop that holds all my data. Lots of disk
space, lots of data. All of my big data work - audio recording
sessions and astrophotography - is done on this machine. Two
__backup__ machines. Desktop machines are backed up to machine 1,
machine 1 is backed up to machine 2, and machine 2 is eventually
backed up to some cloud service.

Now - a new desktop machine that holds only the audio recording data
currently being recorded and used, kept local due to real-time latency
requirements. Two new network machines: Machine 1 would be both a
backup machine and a file server. The file server portion of this
machine holds astrophotography data and recorded video files.
PixInsight running on my desktop accesses and stores data over the
network to Machine 1. Instead of a ZIL in Machine 1, the SSD becomes
an L2ARC read cache, most likely holding a cached copy of the
currently active astrophotography projects. Machine 1 may also run a
couple of VMs over time. Machine 2 is a pure backup machine of
everything on Machine 1.
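
If I go that route, I believe attaching the SSD as a cache device is
roughly (pool and device names made up):

    zpool add tank cache /dev/sdX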

FYI - Machine 1 will always be located close to my desktop machines
and use the 1 Gb/s wired network. iperf suggests I get about 850 Mb/s
on and off of Machine 1. Machine 2 will be remote and generally backed
up overnight using wireless.
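
For reference, that measurement is just a plain iperf run, something
like this (hostname is a placeholder):

    iperf -s            # on Machine 1
    iperf -c machine1   # on a desktop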

As always, I'm interested in your comments about what works or
doesn't work with this sort of setup.

Cheers,
Mark