On 10/10/15 9:54 PM, Mike Frysinger wrote:
> We create self.snapcache_lock to hold the lock, then assign it to
> self.snapshot_lock_object, and then operate on self.snapshot_lock_object.
> There's no need for this indirection, so operate on self.snapcache_lock
> directly instead.
> ---
>  catalyst/base/stagebase.py | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
|
This and the next patch look fine and simplify matters a lot. This
doesn't fix bug #519656, but it clears the way. When running multiple
instances of catalyst, if the second instance tries to acquire the lock
on the snapcache while the first one has it, it throws an exception
instead of waiting on the lock. It would be nice for it to wait and
time out if it doesn't get the lock within some acceptable time limit.
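A minimal sketch of the wait-with-timeout behavior I have in mind, assuming the lock stays a plain fcntl/flock file lock; the helper name and parameters here are hypothetical, not catalyst's actual API:

```python
import fcntl
import time

def acquire_lock_with_timeout(fd, timeout=60.0, poll_interval=0.5):
    """Take an exclusive flock on fd, polling until timeout.

    Instead of raising immediately when another catalyst instance
    holds the lock, retry the non-blocking attempt until either the
    lock is obtained or `timeout` seconds have elapsed.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Non-blocking attempt; raises BlockingIOError if held.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return
        except BlockingIOError:
            if time.monotonic() >= deadline:
                raise TimeoutError(
                    "could not acquire snapshot cache lock "
                    "within %.1f seconds" % timeout)
            time.sleep(poll_interval)
```

Polling with LOCK_NB rather than a bare blocking LOCK_EX keeps the timeout enforceable; a second instance would then wait its turn instead of dying.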
|
On systems with lots-o-cores I run multiple instances of catalyst to
improve overall throughput.
|
--
Anthony G. Basile, Ph.D.
Gentoo Linux Developer [Hardened]
E-Mail : blueness@g.o
GnuPG FP : 1FED FAD9 D82C 52A5 3BAB DC79 9384 FA6E F52D 4BBA
GnuPG ID : F52D4BBA