On Sun, Aug 28, 2022 at 8:24 AM Dale <rdalek1967@×××××.com> wrote:
>
> What I would like to do is limit the amount of memory torrent
> software can use.

While ulimit/cgroups/etc will definitely do the job, they're probably
not the solution you want. Those will cause memory allocation to
fail, and I'm guessing at that point your torrent software will just
die.
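
To illustrate the failure mode, here's a small Python sketch (the 256
MiB cap is an arbitrary example, not a tuned value for any real
client) that puts an address-space limit on its own process and then
over-allocates:

```python
import resource

# Cap this process's address space at 256 MiB (soft and hard limit).
LIMIT = 256 * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

try:
    # Try to allocate 512 MiB -- twice the cap -- in one go.
    cache = bytearray(512 * 1024 * 1024)
except MemoryError:
    # This is roughly what a torrent client hits under ulimit/cgroups:
    # the allocation just fails, and most programs don't recover from
    # that gracefully.
    print("allocation failed")
```

A program that doesn't handle the failed allocation will simply crash,
which is the scenario described above.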

I'd see if you can do something within the settings of the program to
limit its memory use, and then use a resource limit at the OS level as
a failsafe, so that a memory leak doesn't eat up all your memory.

Otherwise your next email will be asking how to automatically restart
a dead service. Systemd has support for that built-in, and there are
also options for non-systemd, but you'll be restarting constantly, and
the program might not stay up for long at all depending on how bad the
problem is. It is always best to tame memory use within the
application itself.
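
As a sketch of that belt-and-suspenders setup under systemd (the unit
name, binary, and limits here are hypothetical examples; MemoryMax
needs cgroups v2), a service unit might look like:

```ini
# /etc/systemd/system/torrent.service  (hypothetical example unit)
[Unit]
Description=Torrent client with a memory failsafe

[Service]
ExecStart=/usr/bin/transmission-daemon --foreground
# OS-level failsafe: the service is killed if it exceeds 2G.
MemoryMax=2G
# Built-in auto-restart, with a delay so a leaky client doesn't flap
# as fast as it can die.
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

This only papers over the problem, of course; the restart loop still
happens, it's just automated.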

Something I wish linux supported was discardable memory, for
caches/etc. A program should be able to allocate memory while passing
a hint to the kernel saying that the memory is discardable. If the
kernel is under memory pressure it can then just deallocate the memory
and have some way to notify the process that the memory is no longer
allocated. That might optionally support giving warning first, or it
might be some kind of new trappable exception for segfaults on access
to discarded memory. (Since access to memory doesn't involve system
calls it might be hard to have more graceful error handling. I guess
an option would be to first tell the kernel to lock the memory before
accessing it, then release the lock afterwards, so that the memory
can't be discarded between the safety check and the access.)

--
Rich