Hi Michael,

thank you for replying to my questions! :)

On 04/13 11:06, Michael wrote:
> On Monday, 13 April 2020 06:32:37 BST tuxic@××××××.de wrote:
> > Hi,
> >
> > From the list I have already learned that most of my concerns regarding
> > the lifetime of the SSD and the maintenance needed to prolong it are
> > unfounded.
>
> Probably your concerns about SSD longevity are without reason, but keep
> up-to-date backups just in case. ;-)

...of course! :)
My questions are driven more by curiosity than by anxiety...

> > Nonetheless I am interested in the technique as such.
> >
> > My SSD (NVMe/M.2) is ext4-formatted, and I found articles on the
> > internet saying it is a good idea neither to activate the "discard"
> > option at mount time nor to run fstrim at each file deletion; instead
> > it should be triggered by a cron job.
>
> Besides what the interwebs say about fstrim, the man page provides good
> advice. It recommends running a cron job once a week for most desktop and
> server implementations.

...but it explains neither why to do so nor the technical
background.

For example it says:
"For most desktop and server systems a sufficient trimming frequency is
once a week."

...but why is it OK to do so? Are all PCs made equal? Are all
use cases equal? It does not even distinguish between SSD/SATA
and SSD/NVMe (M.2 in my case).

These are the points where my curiosity kicks in and I start
to ask questions.

:)

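For reference, the weekly schedule the man page suggests boils down to a
single root crontab entry (a sketch only; the fstrim path varies by distro,
and systemd-based distros usually ship an equivalent fstrim.timer instead):

```shell
# m h dom mon dow  command
# Run fstrim on all mounted filesystems that support discard,
# every Sunday at 03:00. Path is distro-dependent (/sbin or /usr/sbin).
0 3 * * 0  /sbin/fstrim --all
```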
> > Since there seems to be a "not so good point in time" to do an
> > fstrim, I think there must also be a point in time when it is quite
> > right to fstrim my SSD.
> >
> > fstrim clears blocks which are currently not in use and whose
> > contents are != 0.
> >
> > The more unused blocks there are whose contents are != 0, the
> > smaller the count of blocks which the wear-levelling algorithm can
> > use for its purpose.
>
> The wear-levelling mechanism is using the HPA as far as I know, although you
> can always overprovision it.[1]

For example: take an SSD with 300 GB of user-usable space. To
over-provision the device, the user decides to partition
only half of the disk and format it. The rest is left untouched in
"nowhere land".
Now the controller has a lot of space to shuffle data around.
fstrim only works on the mounted part of the SSD. So the used blocks
in "nowhere land" remain... unfstrimmed?

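A back-of-the-envelope sketch of that example (the numbers come from this
mail, not from any real device). One hedge: blocks in the unpartitioned area
that have never been written hold no stale data, so on a fresh or
secure-erased drive there is nothing there for fstrim to clean up anyway.

```shell
# 300 GB example from above: half partitioned and formatted, half left
# unpartitioned as extra over-provisioning for the controller.
user_gb=300
partitioned_gb=$((user_gb / 2))                 # visible to the filesystem and to fstrim
unpartitioned_gb=$((user_gb - partitioned_gb))  # never written, so never needs trimming
echo "fstrim can reach:        ${partitioned_gb} GB"
echo "extra over-provisioning: ${unpartitioned_gb} GB"
```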
Not using all available space for the partitions is a hint I found
online... and then I asked myself the question above...

If what I read online is wrong, my assumptions are wrong... which
isn't reassuring either.

>
> > That leads to the conclusion: to fstrim as often as possible, to keep the
> > count of empty blocks as high as possible.
>
> Not really. Why would you need the count of empty blocks as high as possible,

Unused blocks with data cannot be used for wear levelling. Suppose you
have a total of 100 blocks: 50 blocks are used, 25 are unused
and empty, and 25 are unused but still filled with former data.

In this case only 25 blocks are available to spread the next write
operations over.

After an fstrim, 50 blocks would be available again, and the same amount
of writes could now be spread over 50 blocks.

At least that is what I read online...

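The arithmetic above, spelled out (toy numbers from this mail, not real SSD
geometry):

```shell
# Toy model from the paragraph above.
total=100
used=50
unused_empty=25
unused_stale=25     # unused, but still holding former data

writable_before=$unused_empty                     # only erased blocks take new writes
writable_after=$((unused_empty + unused_stale))   # fstrim lets the stale ones be erased

echo "writable blocks before trim: $writable_before"
echo "writable blocks after trim:  $writable_after"
```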
> unless you are about to write some mammoth file and *need* to use up every
> bit of available space on this disk/partition?
>
>
> > BUT: Clearing blocks is an action which includes writes to the cells of
> > the SSD.
> >
> > Which is not that nice.
>
> It's OK, as long as you are not over-writing cells which do not need to be
> overwritten. Cells with deleted data will be overwritten at some point.
>
>
> > Then, do an fstrim just at the moment when there is no usable block
> > left.
>
> Why leave it to the last moment and incur a performance penalty while
> waiting for fstrim to complete?

Performance is not my concern (at the moment, at least ;) ). I am trying
to fully understand the mechanisms here, since what I read online is not
without contradictions...

> > Then the wear-levelling algorithm is already at its limits.
> >
> > Which is not that nice either.
> >
> > The truth - as so often - is somewhere in between.
> >
> > Is it possible to get information from the SSD about how many blocks
> > are in the state "has contents" and "is unused", and how many blocks
> > are in the state "has *no* contents" and "is unused"?
> >
> > Assuming this information is available: is it possible to find the
> > sweet spot for when to fstrim the SSD?
>
> I humbly suggest you may be over-thinking something a cron job running
> fstrim once a week, or once a month, or twice a month would take care of
> without you knowing or worrying about it.

Technically overthinking problems is a vital part of my profession and
exactly what I am asked for. I cannot put this behaviour aside so
easily. :)
From my experience there aren't too many questions, Michael, there is
often only a lack of related answers.

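On the "can the SSD tell us" question: the stock tools only expose
filesystem-level numbers, not the controller's internal block states
(consumer NVMe drives generally do not publish those). A sketch, assuming a
hypothetical /dev/nvme0n1 and a root shell:

```shell
# Does the kernel report discard support for the device?
# Non-zero DISC-GRAN / DISC-MAX columns mean TRIM is available.
lsblk --discard /dev/nvme0n1

# fstrim -v prints how many bytes of free filesystem extents were handed
# to the device for discarding on this run -- a filesystem-level figure,
# not the count of internally stale flash blocks.
fstrim -v /
```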
> Nevertheless, if the usage of your disk/partitions is variable and one week
> you may fill it up with deleted data, while for the rest of the month you
> won't even touch it, there's SSDcronTRIM, a script I've been using for a
> while.[2]
>
>
> [1] https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
> [2] https://github.com/chmatse/SSDcronTRIM

Cheers!
Meino