Gentoo Archives: gentoo-user

From: james <garftd@×××××××.net>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] kde-apps/kde-l10n-16.04.3:5/5::gentoo conflicting with kde-apps/kdepim-l10n-15.12.3:5/5::gentoo
Date: Tue, 09 Aug 2016 17:15:50
Message-Id: c80d2a84-259a-c936-9fb1-6122f4c06e0e@verizon.net
In Reply to: Re: [gentoo-user] kde-apps/kde-l10n-16.04.3:5/5::gentoo conflicting with kde-apps/kdepim-l10n-15.12.3:5/5::gentoo by Michael Mol
On 08/09/2016 09:17 AM, Michael Mol wrote:
> On Tuesday, August 09, 2016 09:13:31 AM james wrote:
>> On 08/09/2016 07:42 AM, Michael Mol wrote:
>> > On Monday, August 08, 2016 10:45:09 PM Alan McKinnon wrote:
>> >> On 08/08/2016 19:20, Michael Mol wrote:
>> >>> On Monday, August 08, 2016 06:52:15 PM Alan McKinnon wrote:
>> >>>> On 08/08/2016 17:02, Michael Mol wrote:
>> >>> snip <<<
>> >>
>> >> KMail has been the lost child of KDE for many months now; I reckon
>> >> this situation is just going to get worse and worse. I know for
>> >> myself my mail problems ceased the day I dumped KMail4 for claws
>> >> and/or thunderbird.
>> >
>> > That's really, really sad.
>> >
>> > I used Thunderbird for years, but I eventually had to stop when it
>> > would, averaging once a month (though sometimes not for a couple
>> > months, sometimes a couple times a week), explode in memory
>> > consumption and drive the entire system unresponsively into swap.
>> >
>> > I've tried claws from time to time due to other annoyances with
>> > Thunderbird, but I kept switching back. Not because I liked Tbird,
>> > but (IIRC) because of stability issues I had with claws.
>> >
>> > Even with the bugs it has, Kontact and Akonadi has been the most
>> > reliable mail client I've used in the last year. When it gives me
>> > problems, I know why, and I can address it. (Running a heavily tuned
>> > MySQLd instance behind Akonadi, for example...)
>> >
>> > I wish someone would pay me to fix this stuff; I'd be able to spend
>> > the time on it.
>>
>> Perhaps an experiment. Locate some folks that know how to promote
>> 'crowd funding'. Then propose a project like this, targeted at
>> businesses and users, to all pitch in. In fact, quite a few beloved
>> open source projects could benefit if the idea of crowd funding took
>> hold for open source software. Perhaps one of the foundations deeply
>> involved in the open source movement would get behind the idea?
>>
>> KDE is very popular, so the concept or something similar might just
>> have legs, even if it only funds a series of grad students or young
>> programmers to maintain good FOSS projects?
>
> A wonderful thought. I rather expect KDE is already doing this, but if
> not, they ought to. (I'm sure someone who commits code to KDE reads
> this list...)
>
> Certainly wouldn't cover someone like me who has a family to support,
> but still.
>
>> As a side note, I put 32G of RAM on my system and still at times it
>> is laggy with little processor load, while htop shows less than 30%
>> RAM usage. What tools do you use to track down memory management
>> issues?
>
> I use Zabbix extensively at work, and have the Zabbix agent on my
> workstation reporting back various supported metrics. There's a great
> deal you can use (and--my favorite--abuse) Zabbix for, especially once
> you understand how it thinks.
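>
> (A minimal sketch of the kind of custom item I mean -- the key name
> here is my own hypothetical choice, but UserParameter is the stock
> zabbix_agentd.conf mechanism -- reporting how much dirty data is
> waiting to reach disk:
>
> # zabbix_agentd.conf: expose the Dirty: field of /proc/meminfo, in kB
> UserParameter=vm.memory.dirty,awk '/^Dirty:/ {print $2}' /proc/meminfo
>
> Graph that next to I/O wait and you can watch memory pressure build
> before the desktop starts to stutter.)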

Congratulations! Of the net-analyzer crowd, you've managed to find one I
have not spent time with........

>> Any specific kernel tweaks?

> Most of my tweaks for KDE revolved around tuning mysqld itself. But for
> sysctls improving workstation responsiveness as it relates to memory
> interactions with I/O, these are my go-tos:
>
> vm.dirty_background_bytes = 1048576
> vm.dirty_bytes = 10485760
> vm.swappiness = 0

Mine are:

cat /proc/sys/vm/dirty_bytes
0
cat /proc/sys/vm/dirty_background_bytes
0
cat /proc/sys/vm/swappiness
60

(I gather the zeroes mean the byte-based knobs are unset on my box, so
the ratio-based ones, vm.dirty_ratio and vm.dirty_background_ratio, are
what the kernel is actually using.)
>
> vm.dirty_background_bytes ensures that any data (e.g. from mmap or
> fwrite, not from swapping) waiting to be written to disk *starts*
> getting written to disk once you've got at least the configured amount
> (1MB) of data waiting. (If you've got a disk controller with
> battery-backed or flash-backed write cache, you might consider
> increasing this to some significant fraction of your write cache, e.g.
> if you've got a 1GB FBWC with 768MB of that dedicated to write cache,
> you might set this to 512MB or so, depending on your workload. I/O
> tuning is for those of us who enjoy the dark arts.)
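>
> (These can all be flipped at runtime if you want to experiment before
> committing to anything -- plain sysctl(8), nothing exotic:
>
> sysctl -w vm.dirty_background_bytes=1048576
> sysctl -w vm.dirty_bytes=10485760
>
> Note that writing a *_bytes knob zeroes its *_ratio counterpart, and
> vice versa; only one of each pair is in effect at a time.)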
>
> vm.dirty_bytes says that once you've got the configured amount (10MB)
> of data waiting to be written to disk, then no more asynchronous I/O
> is permitted until you have no more data waiting; all outstanding
> writes must be finished first. (My rule of thumb is to have this
> between 2 and 10 times the value of vm.dirty_background_bytes, though
> I'm really trying to avoid it being high enough that it could take
> more than 50ms to transfer to disk; that way, any stalls that do
> happen are almost imperceptible.)
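>
> (Back-of-the-envelope, with a made-up but plausible disk speed: a
> drive that streams 200MB/s moves 10MB in 10/200 = 50ms, so 10485760
> sits right at that threshold; a disk managing only 100MB/s would want
> something closer to 5MB.)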
>
> You want vm.dirty_background_bytes to be high enough that your hardware
> doesn't spend its time powered on if it doesn't have to be, and so that
> your hardware can transfer data in large, efficient, streamable chunks.
>
> You want vm.dirty_bytes sufficiently higher than your first number that
> your hardware has enough time to spin up and transfer data before you
> put the hammer down and say, "all right, nobody else gets to queue
> writes until all the waiting data has reached disk."
>
> You want vm.dirty_bytes *low* enough that when you *do* have to put
> that hammer down, it doesn't interfere with your perception of a
> responsive system. (And in a server context, you want it low enough
> that things can't time out--or be pushed into timing out--waiting for
> it. Think of your user attention as something that times out when
> things don't respond to you, and the same principle applies...)
>
> Now, vm.swappiness? That's a weighting factor for how quickly the
> kernel should try moving memory to swap to be able to speedily respond
> to new allocations. Me, I prefer the kernel not to preemptively move
> lesser-used data to swap, because that's going to be a few hundred
> megabytes worth of data all associated with one application, and it'll
> be a real drag when I switch back to the application I haven't used for
> half an hour. So I set vm.swappiness to 0, to tell the kernel to only
> move data to swap if it has no other alternative while trying to
> satisfy a new memory allocation request.
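>
> (To make whatever you settle on persist across reboots, drop the
> values into /etc/sysctl.conf and reload with sysctl -p; a file under
> /etc/sysctl.d/ works too, picked up with sysctl --system. For example:
>
> # /etc/sysctl.conf -- the values I use; tune to your own hardware
> vm.dirty_background_bytes = 1048576
> vm.dirty_bytes = 10485760
> vm.swappiness = 0
>
> OpenRC's sysctl service applies these at boot, so nothing extra is
> needed there.)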

OK, OK, OK. I need to read a bit about these. Any references or docs,
or is this the result of parsing out what is least painful for a
workstation? I do not run any heavy databases on my workstation; they
are only there to hack on. I test db-centric stuff on domain servers,
sometimes with limited resources. I run lxde and I'm moving to lxqt for
workstations and humanoid (terminal) IO.

Do you set these differently for servers?

Nodes in a cluster?

I use OpenRC, just so you know. I also have a motherboard with IOMMU
that currently has questionable settings in the kernel config file. I
cannot find consensus on if/how IOMMU affects I/O with the SATA HD
devices versus memory-mapped peripherals.... in the context of 4.x
kernel options. I'm trying very hard here to avoid a deep dive on these
issues, so trendy strategies are most welcome, as workstation and
cluster node optimizations are all I'm really working on atm.

THANKS (as always)!

James
