On Sun, Jul 31, 2011 at 6:37 PM, Volker Armin Hemmann
<volkerarmin@××××××××××.com> wrote:
> On Sunday, 31 July 2011, 10:44:28, Michael Mol wrote:
>> While I take your point about write-cycle limitations, and I would
>> *assume* you're familiar with the various improvements in
>> wear-leveling techniques that have happened over the past *ten years*
>
> Yeah, I am. Or let me phrase it differently:
> I know what is claimed.
>
> The problem is, the best wear leveling does not help you if your disk
> is pretty full and you still do a lot of writing. 1,000,000 write
> cycles aren't much.

Ok; I wasn't certain, but it sounded like you'd had your head in the
sand (if you'll pardon the expression). It's clear you didn't. I'm
sorry.

>
>> since those concerns were first raised, I could probably raise an
>> argument that a fresh SSD is likely to last longer as a swap device
>> than as a filesystem.
>
> Depends - because thanks to wear leveling, that 'swap partition' is
> just something the firmware makes the kernel believe is there.
>
>
>>
>> Swap is only touched as-needed, while there's been an explosion in
>> programs and user software which demand synchronous writes to disk
>> for data integrity purposes. (Firefox uses sqlite in such a way, for
>> example; I discovered this when I was using sqlite heavily in my *own*
>> application, and Firefox hung for a couple minutes during every batch
>> insert.)
>
> Which is another good reason not to use Firefox - but:
>              total       used       free     shared    buffers     cached
> Mem:       8182556    7373736     808820          0      56252    2197064
> -/+ buffers/cache:    5120420    3062136
> Swap:     23446848      82868   23363980
>
> Even with lots of RAM, you will hit swap. And since you are using the
> wear-leveling of the drive's firmware, it does not matter that your
> swap resides on its own partition - every page written means a
> block-rewrite somewhere. Really not good for your SSD.

Fair enough.
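
(Tangentially: if the worry is how *often* the kernel swaps, the
swappiness knob can at least discourage it. A sketch - the value 10
below is just an illustrative choice, not a recommendation:

# check the current value (the default is 60)
cat /proc/sys/vm/swappiness
# bias the kernel toward dropping file cache instead of swapping
sysctl -w vm.swappiness=10

That only changes how eagerly pages get pushed out, of course; it
does nothing about the block-rewrite cost once they are.)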

It Would Be Nice(tm) if the SSD's erase-block size and alignment
matched the kernel's page size. I'm not certain whether it's possible
to tune those settings (reliably) in the kernel.
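
For what it's worth, you can at least compare what each side reports;
a quick sketch, assuming the SSD shows up as /dev/sda:

# kernel page size in bytes (4096 on most x86 systems)
getconf PAGESIZE
# block sizes the drive advertises to the block layer
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size

The catch is that the firmware's real erase-block size is usually
much larger than anything the drive advertises, so those numbers are
only a lower bound.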

Also, my stats, from three different systems (they're using trivial
amounts of swap, though my Gentoo box doesn't appear to be using any):

(Desktop box)
shortcircuit:1@serenity~
Sun Jul 31 07:03 PM
!499 #1 j0 ?0 $ free -m
             total       used       free     shared    buffers     cached
Mem:          5975       3718       2256          0        617       1106
-/+ buffers/cache:        1994       3980
Swap:         9993          0       9993

(laptop)
shortcircuit@saffron:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          1995       1732        263          0        169        913
-/+ buffers/cache:         648       1347
Swap:         3921          3       3918

(server)
shortcircuit@×××××××××××××××××××××.com~
23:05:34 $ free -m
             total       used       free     shared    buffers     cached
Mem:          2048       2000         47          0        285        488
-/+ buffers/cache:        1225        822
Swap:          511          1        510

>> Also, despite the MTBF data provided by the manufacturers, there's
>> more empirical evidence that the drives expire faster than expected,
>> anyway. I'm aware of this, and not particularly concerned about it.
>
> Well, it is your money to burn.

Best evidence I've read lately is that the drives last about a year
under heavy use. I was going to include a reference in the last email,
but I can't find a link to the post. I thought it was something Joel
Spolsky (or *someone* at StackOverflow) wrote, but I was unable to
find it quickly.

My parts usually last 3-5 years, so that's pretty low. Still, having
my swap partition drop (and the entire system halt) would generally be
less damaging to me than losing real data on the drive.

>> False dichotomy. Yes, it increases the wear on the device. That says
>> nothing of its impact on system performance, which was the nature of
>> my point.
>
> If you are so concerned about swap performance, you should probably
> go with a smaller SSD, get more RAM, and let the few MB of swap you
> need be handled by several swap partitions.

This is where I get back to my original 'prohibitively expensive'
bit. I can get 16GB of RAM into my system for about $200. The use
cases where I've been contemplating this have been ones where I wanted
60GB to 80GB of data quickly accessible in a random-access fashion,
but where that type of load wasn't what I normally spent my time
doing. (Hence the idea of getting a broader improvement from something
such as the file cache.)

And, really, the whole point of the thread was thought experiments.
Posits are occasionally required.
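
(For completeness, the several-swap-partitions arrangement suggested
above would look something like this in /etc/fstab - device names are
hypothetical, and equal pri= values make the kernel stripe pages
across the devices:

/dev/sda2  none  swap  sw,pri=5  0 0
/dev/sdb2  none  swap  sw,pri=5  0 0

With unequal priorities, the higher-priority device fills up first
instead.)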

>> As for a filecache not being that important, that's only the case if
>> your data of interest exists on the filesystem you put on the SSD.
>>
>> Let's say you're someone like me, who would tend to go with 60GB for /
>> and 3TB for /home. At various times, I'll be doing HDR photo
>> processing, some video transcoding, some random non-portage compile
>> jobs, web browsing, coding, etc.
>
> 60GB for /, 75GB for /var, and 2.5TB data...
> my current setup.

Handy; we'll have common frames of reference.

>> If I take a 160GB SSD, I could put / (or, at least, /var and /usr)
>> on it, and have some space left over for scratch--but it's going to
>> be a pain trying to figure out which of my 3TB of /home data I want
>> in that fast scratch.
>>
>> File cache is great, because it caches your most-used data from
>> *anywhere* and keeps it in a fast-access datastore. I could have a 3
>> *petabyte* volume, not be particularly concerned about data
>> distribution, and get just as fast a response from the filecache as
>> if I had a mere 30GB volume. Putting a filesystem on an SSD simply
>> cannot scale that way.
>
> True, but all those microseconds saved with swap on SSD won't offset
> the pain when the SSD dies earlier.

It really depends on the quantity and nature of the pain. When the
things I'm toying around with have projected completion times of a
*week* rather than an hour or two, and when I don't normally need so
much memory, it wouldn't be too much of a hassle to remove the dead
drive from fstab and boot back up (after fsck, etc, natch). In the
words of the Architect, "There are levels of existence we are prepared
to accept..."

>> Actually, this conversation reminds me of another idea I'd had at one
>> point...putting ext3/ext4's journal on an SSD, while keeping the bulk
>> of the data on large, dense spinning platters.
>
> Which sounds nice in theory.

Yet it would potentially run afoul of the SSD's write-block
resolution. And, of course, having the journal fail out from under me
would be a fair bit worse than the kernel panicking during a swap
operation.
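
(For reference, ext3/ext4 support exactly this via an external
journal device. A sketch - device names are hypothetical, and I
haven't tried it; note the journal has to be created with the same
block size as the filesystem:

# turn the SSD partition into a dedicated journal device
mke2fs -O journal_dev -b 4096 /dev/ssd1
# build the big filesystem on the platters, pointing at that journal
mkfs.ext4 -b 4096 -J device=/dev/ssd1 /dev/platter1

Every journal commit then lands on the SSD as a small write, which is
where the write-block resolution concern above comes in.)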

>> Did you miss the last week's worth of discussion of memory limits on
>> tmpfs?
>
> Probably. Because I have been using tmpfs for /var/tmp/portage for
> ages, and the only problematic package is openoffice/libreoffice.

I ran into trouble with Thunderbird a couple months ago, which is why
I had to stop using tmpfs. (Also, I compile with -ggdb in CFLAGS, so I
expect my build sizes bloat a bit more than most.)
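
For anyone following along, the usual /var/tmp/portage-on-tmpfs setup
is a single /etc/fstab line; the size cap below is an arbitrary
example, and that cap is exactly what the big packages blow through:

tmpfs  /var/tmp/portage  tmpfs  size=8G,uid=portage,gid=portage,mode=0775,noatime  0 0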

Anyway, edge cases and caveats like the ones discussed are why I ask
about what people have tried, and what mitigations, workarounds and
technological improvements people have been working on.

--
:wq