
From: "Fabian Groffen (grobian)" <grobian@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] portage r9999 - in main/branches/prefix: bin doc/dependency_resolution pym/_emerge pym/portage pym/portage/dbapi pym/portage/sets
Date: Sun, 27 Apr 2008 14:56:14
Message-Id: E1Jq8IU-00044h-Rz@stork.gentoo.org
Author: grobian
Date: 2008-04-27 14:56:09 +0000 (Sun, 27 Apr 2008)
New Revision: 9999

Modified:
main/branches/prefix/bin/misc-functions.sh
main/branches/prefix/doc/dependency_resolution/task_scheduling.docbook
main/branches/prefix/pym/_emerge/__init__.py
main/branches/prefix/pym/portage/__init__.py
main/branches/prefix/pym/portage/dbapi/porttree.py
main/branches/prefix/pym/portage/dbapi/vartree.py
main/branches/prefix/pym/portage/sets/__init__.py
Log:
Merged from trunk 9962:9994

| 9963 | Rename the "consistent" depgraph parameter to "complete" |
| zmedico | since what it really means is that the graph will be |
| | complete in the sense that no known dependencies are |
| | neglected. |

| 9964 | Update description of "complete" depgraph param. |
| zmedico | |

| 9965 | Bug #172812 - If any Uninstall tasks need to be executed in |
| zmedico | order to avoid a conflict, complete the graph with any |
| | dependencies that may have been initially neglected (to |
| | ensure that unsafe Uninstall tasks are properly identified |
| | and blocked from execution). |

| 9967 | remove unused function |
| genone | |

| 9968 | Add some more spinner.update() calls in possibly time |
| zmedico | consuming loops. |

| 9970 | as NEEDED files don't contain enough information for e.g. |
| genone | preserve-libs to work properly and we don't want to change |
| | the format of existing files, create another file including |
| | additional information |

| 9971 | remove debug output |
| genone | |

| 9972 | s/be only/only be/ |
| zmedico | |

| 9976 | Bug #219251 - Fix typo in PORTDIR_OVERLAY when searching for |
| zmedico | sets.conf files. Thanks to Manuel Nickschas |
| | <sputnick@×××××××××××.org> for fixing this. |

| 9977 | Refactor the way that depgraph.altlist(), _complete_graph(), |
| zmedico | and validate_blockers() interact with each other. This |
| | simplifies things by eliminating the need for recursive |
| | calls to validate_blockers(). |

| 9978 | add LibraryPackageMap replacement using NEEDED.2 files |
| genone | |

| 9979 | * Add a Blocker class to use instead of tuples. * Fix the |
| zmedico | Task constructor to properly traverse __slots__ of all |
| | inherited classes. |

| 9980 | Don't assume that altlist() will succeed inside |
| zmedico | display_problems(). |

| 9981 | Use digraphs to clean up blocker reference counting in the |
| zmedico | depgraph. |

| 9982 | Add a PackageVirtualDbapi.copy() method. |
| zmedico | |

| 9983 | Bug #172812 - When a package needs to be uninstalled in |
| zmedico | advance rather than through replacement, show the |
| | corresponding [blocks] entries in the displayed list. In |
| | order to show more structure in the --tree display, expand |
| | Package -> Uninstall edges into Package -> Blocker -> |
| | Uninstall edges. Also, create edges between a package's own |
| | blockers and its Uninstall task since its blockers become |
| | irrelevant as soon as it's uninstalled. |

| 9990 | Remove unnecessary "mydbapi" variable in depgraph.display(). |
| zmedico | |

| 9992 | Create a digraph.difference_update() method and use it to |
| zmedico | amortize the cost of removing nodes from the digraph.order |
| | list. |

| 9994 | Take the classes that initialize variables in __slots__ with |
| zmedico | keyword constructor arguments and make them all derive from |
| | a new SlotObject class. |
Merged from trunk 9995:9996

| 9996 | some minor code fixes |
| genone | |
Merged from trunk 9996:9998

| 9997 | actually use rpath for the internal lib check |
| genone | |

| 9998 | fix logic error |
| genone | |


Modified: main/branches/prefix/bin/misc-functions.sh
===================================================================
--- main/branches/prefix/bin/misc-functions.sh 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/bin/misc-functions.sh 2008-04-27 14:56:09 UTC (rev 9999)
@@ -160,13 +160,16 @@
fi

# Save NEEDED information after removing self-contained providers
- scanelf -qyRF '%p:%r %n' "${D}" | sed -e 's:^:/:' | { while IFS= read l; do
- obj=${l%%:*}
- rpath=${l#*:}; rpath=${rpath% *}
- needed=${l##* }
+ scanelf -qyRF '%a;%p;%S;%r;%n' "${D}" | { while IFS= read l; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ soname=${l%%;*}; l=${l#*;}
+ rpath=${l%%;*}; l=${l#*;}; [ "${rpath}" = " - " ] && rpath=""
+ needed=${l%%;*}; l=${l#*;}
if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
# object doesn't contain $ORIGIN in its runpath attribute
echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch:3};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.2
else
dir=$(dirname ${obj})
# replace $ORIGIN with the dirname of the current object for the lookup
@@ -174,10 +177,17 @@
sneeded=$(echo ${needed} | tr , ' ')
rneeded=""
for lib in ${sneeded}; do
- [ -e "${D}/${dir}/${lib}" ] || rneeded="${rneeded},${lib}"
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${D}/${path}/${lib}" ] && found=1
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
done
rneeded=${rneeded:1}
- [ -n "${rneeded}" ] && echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch:3};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.2
+ fi
fi
done }

@@ -249,31 +259,35 @@
# extension, not after it. Check for this, and *only* warn
# about it. Some packages do ship .so files on Darwin and make
# it work (ugly!).
- f=""
+ rm -f "${T}/mach-o.check"
find ${ED%/} -name "*.so" -or -name "*.so.*" | \
while read i ; do
- [[ $(file $i) == *"Mach-O"* ]] && f="${f} ${i#${D}}"
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
done
- if [[ -n ${f} ]] ; then
- f=${f# }
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
vecho -ne '\a\n'
eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
- eqawarn " ${f// /\n }"
+ eqawarn " ${f//\n/\n }"
fi
+ rm -f "${T}/mach-o.check"
+
# The naming for dynamic libraries is different on Darwin; the
# version component is before the extention, instead of after
# it, as with .sos. Again, make this a warning only.
- f=""
+ rm -f "${T}/mach-o.check"
find ${ED%/} -name "*.dylib.*" | \
while read i ; do
- f="${f} ${i#${D}}"
+ echo "${i#${D}}" >> "${T}/mach-o.check"
done
- if [[ -n ${f} ]] ; then
- f=${f# }
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
vecho -ne '\a\n'
eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
eqawarn " ${f// /\n }"
fi
+ rm -f "${T}/mach-o.check"
fi

# this should help to ensure that all (most?) shared libraries are executable
@@ -359,7 +373,7 @@
# use otool to do the magic. Since this is expensive, we do it
# together with the scan for broken installs.
rm -f "${T}"/.install_name_check_failed
- [[ ${CHOST} == *-darwin* ]] && find "${ED}" -type f | while read f ; do
+ [[ ${CHOST} == *-darwin* ]] && find "${ED}" -type f | while IFS= read f ; do
rm -f "${T}"/.NEEDED.tmp
otool -LX "${f}" \
| grep -v "Archive : " \

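For reference, each NEEDED.2 record written above is one semicolon-separated line, per the scanelf format string '%a;%p;%S;%r;%n': arch;obj;soname;rpath;needed, where rpath is a colon-separated search path list and needed is a comma-separated soname list. This mirrors what the LinkageMap.rebuild() code added to vartree.py below parses back in. A minimal sketch of a parser for one such line; parse_needed2_line is an illustrative helper, not a portage function:

def parse_needed2_line(line):
    # Field layout per the scanelf call above.
    fields = line.rstrip("\n").split(";")
    if len(fields) < 5:
        raise ValueError("malformed NEEDED.2 line: %r" % line)
    arch, obj, soname, rpath, needed = fields[:5]
    return {
        "arch": arch,
        "obj": obj,
        "soname": soname,
        "rpath": [p for p in rpath.split(":") if p],
        "needed": [n for n in needed.split(",") if n],
    }

# A plausible record for a binary linked against zlib:
print parse_needed2_line(
    "x86_64;/usr/bin/foo;;/usr/lib64/foo;libz.so.1,libc.so.6")
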
Modified: main/branches/prefix/doc/dependency_resolution/task_scheduling.docbook
===================================================================
--- main/branches/prefix/doc/dependency_resolution/task_scheduling.docbook 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/doc/dependency_resolution/task_scheduling.docbook 2008-04-27 14:56:09 UTC (rev 9999)
@@ -29,13 +29,14 @@
Installed packages that have been pulled into the current dependency
graph will not be uninstalled. Due to
<link linkend='dependency-resolution-package-modeling-dependency-neglection'>
- dependency neglection</link>, other checks may be necessary in order
+ dependency neglection</link> and special properties of packages
+ in the "system" set, other checks may be necessary in order
to protect inappropriate packages from being uninstalled.
</listitem>
<listitem>
An installed package that is matched by a dependency atom from the
"system" set will not be uninstalled in advance since it might not
- be safe. Such a package will be uninstalled through replacement.
+ be safe. Such a package will only be uninstalled through replacement.
</listitem>
<listitem>
An installed package that is matched by a dependency atom from the

Modified: main/branches/prefix/pym/_emerge/__init__.py
===================================================================
--- main/branches/prefix/pym/_emerge/__init__.py 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/pym/_emerge/__init__.py 2008-04-27 14:56:09 UTC (rev 9999)
@@ -374,9 +374,7 @@
# recurse: go into the dependencies
# deep: go into the dependencies of already merged packages
# empty: pretend nothing is merged
- # consistent: ensure that installation of new packages does not break
- # any deep dependencies of required sets (args, system, or
- # world).
+ # complete: completely account for all known dependencies
myparams = set(["recurse"])
if "--update" in myopts or \
"--newuse" in myopts or \
@@ -391,7 +389,7 @@
if "--deep" in myopts:
myparams.add("deep")
if "--complete-graph" in myopts:
- myparams.add("consistent")
+ myparams.add("complete")
return myparams

# search functionality
@@ -840,15 +838,26 @@
else:
yield flag

-class AbstractDepPriority(object):
- __slots__ = ("__weakref__", "buildtime", "runtime", "runtime_post")
+class SlotObject(object):
+ __slots__ = ("__weakref__")
+
def __init__(self, **kwargs):
- for myattr in chain(self.__slots__, AbstractDepPriority.__slots__):
- if myattr == "__weakref__":
+ classes = [self.__class__]
+ while classes:
+ c = classes.pop()
+ if c is SlotObject:
continue
- myvalue = kwargs.get(myattr, False)
- setattr(self, myattr, myvalue)
+ classes.extend(c.__bases__)
+ slots = getattr(c, "__slots__", None)
+ if not slots:
+ continue
+ for myattr in slots:
+ myvalue = kwargs.get(myattr, None)
+ setattr(self, myattr, myvalue)

+class AbstractDepPriority(SlotObject):
+ __slots__ = ("buildtime", "runtime", "runtime_post")
+
def __lt__(self, other):
return self.__int__() < other
@@ -940,6 +949,8 @@
def __int__(self):
return 0

+BlockerDepPriority.instance = BlockerDepPriority()
+
class UnmergeDepPriority(AbstractDepPriority):
__slots__ = ()
"""
@@ -1228,21 +1239,14 @@
shown_licenses.add(l)
return have_eapi_mask

-class Task(object):
- __slots__ = ("__weakref__", "_hash_key",)
+class Task(SlotObject):
+ __slots__ = ("_hash_key",)

- def __init__(self, **kwargs):
- for myattr in self.__slots__:
- if myattr == "__weakref__":
- continue
- myvalue = kwargs.get(myattr, None)
- setattr(self, myattr, myvalue)
-
def _get_hash_key(self):
- try:
- return self._hash_key
- except AttributeError:
+ hash_key = getattr(self, "_hash_key", None)
+ if hash_key is None:
raise NotImplementedError(self)
+ return hash_key

def __eq__(self, other):
return self._get_hash_key() == other
@@ -1268,6 +1272,16 @@
def __str__(self):
return str(self._get_hash_key())

+class Blocker(Task):
+ __slots__ = ("root", "atom", "satisfied")
+
+ def _get_hash_key(self):
+ hash_key = getattr(self, "_hash_key", None)
+ if hash_key is None:
+ self._hash_key = \
+ ("blocks", self.root, self.atom)
+ return self._hash_key
+
class Package(Task):
__slots__ = ("built", "cpv", "depth",
"installed", "metadata", "root", "onlydeps", "type_name",
@@ -1279,9 +1293,8 @@
self.cpv_slot = "%s:%s" % (self.cpv, self.metadata["SLOT"])

def _get_hash_key(self):
- try:
- return self._hash_key
- except AttributeError:
+ hash_key = getattr(self, "_hash_key", None)
+ if hash_key is None:
operation = "merge"
if self.onlydeps or self.installed:
operation = "nomerge"
@@ -1308,10 +1321,10 @@
return False

class Uninstall(Package):
+ __slots__ = ()
def _get_hash_key(self):
- try:
- return self._hash_key
- except AttributeError:
+ hash_key = getattr(self, "_hash_key", None)
+ if hash_key is None:
self._hash_key = \
(self.type_name, self.root, self.cpv, "uninstall")
return self._hash_key
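Because Task subclasses compare (and, as their use in sets implies, hash) by a lazily built tuple, a Blocker stays interchangeable with the old-style ("blocks", root, atom) tuples still used elsewhere in this changeset. A small illustration, assuming the classes exactly as defined above (the atom is hypothetical):

blocker = Blocker(atom="<sys-apps/portage-2.2", root="/")
key = ("blocks", "/", "<sys-apps/portage-2.2")

assert blocker == key             # __eq__ compares by the tuple key
assert blocker in set([key])      # so set/dict lookups keep working
print blocker                     # __str__ prints the tuple key
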
@@ -1343,16 +1356,11 @@
self.set = set
self.name = self.arg[len(SETPREFIX):]

-class Dependency(object):
- __slots__ = ("__weakref__", "atom", "blocker", "depth",
+class Dependency(SlotObject):
+ __slots__ = ("atom", "blocker", "depth",
"parent", "onlydeps", "priority", "root")
def __init__(self, **kwargs):
- for myattr in self.__slots__:
- if myattr == "__weakref__":
- continue
- myvalue = kwargs.get(myattr, None)
- setattr(self, myattr, myvalue)
-
+ SlotObject.__init__(self, **kwargs)
if self.priority is None:
self.priority = DepPriority()
if self.depth is None:
@@ -1515,6 +1523,15 @@
self._cp_map = {}
self._cpv_map = {}

+ def copy(self):
+ obj = PackageVirtualDbapi(self.settings)
+ obj._match_cache = self._match_cache.copy()
+ obj._cp_map = self._cp_map.copy()
+ for k, v in obj._cp_map.iteritems():
+ obj._cp_map[k] = v[:]
+ obj._cpv_map = self._cpv_map.copy()
+ return obj
+
def __contains__(self, item):
existing = self._cpv_map.get(item.cpv)
if existing is not None and \
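The copy() method above is mostly a shallow copy, but it re-copies each _cp_map value list so that later injections into the copy cannot mutate lists shared with the original. The difference, sketched on a plain dict (names are illustrative):

original = {"dev-lang/python": ["dev-lang/python-2.4.4"]}

naive = original.copy()                 # value lists are still shared
naive["dev-lang/python"].append("dev-lang/python-2.5.2")
assert len(original["dev-lang/python"]) == 2   # leaked into the original!

safe = original.copy()
for k, v in safe.iteritems():           # same idiom as copy() above
    safe[k] = v[:]
safe["dev-lang/python"].append("dev-lang/python-9999")
assert len(original["dev-lang/python"]) == 2   # original untouched
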
@@ -1727,14 +1744,17 @@
self._atom_arg_map = {}
# contains all nodes pulled in by self._set_atoms
self._set_nodes = set()
- self.blocker_digraph = digraph()
- self.blocker_parents = {}
- self._unresolved_blocker_parents = {}
+ # Contains only Blocker -> Uninstall edges
+ self._blocker_uninstalls = digraph()
+ # Contains only Package -> Blocker edges
+ self._blocker_parents = digraph()
+ # Contains only unsolvable Package -> Blocker edges
+ self._unsolvable_blockers = digraph()
self._slot_collision_info = set()
# Slot collision nodes are not allowed to block other packages since
# blocker validation is only able to account for one package per slot.
self._slot_collision_nodes = set()
- self._altlist_cache = {}
+ self._serialized_tasks_cache = None
self._pprovided_args = []
self._missing_args = []
self._masked_installed = []
@@ -1865,6 +1885,7 @@
def _create_graph(self, allow_unsatisfied=False):
dep_stack = self._dep_stack
while dep_stack:
+ self.spinner.update()
dep = dep_stack.pop()
if isinstance(dep, Package):
if not self._add_pkg_deps(dep,
@@ -1881,7 +1902,6 @@
nodeps = "--nodeps" in self.myopts
empty = "empty" in self.myparams
deep = "deep" in self.myparams
- consistent = "consistent" in self.myparams
update = "--update" in self.myopts and dep.depth <= 1
if dep.blocker:
if not buildpkgonly and \
@@ -1893,9 +1913,8 @@
return 1
# The blocker applies to the root where
# the parent is or will be installed.
- self.blocker_parents.setdefault(
- ("blocks", dep.parent.root, dep.atom), set()).add(
- dep.parent)
+ blocker = Blocker(atom=dep.atom, root=dep.parent.root)
+ self._blocker_parents.add(blocker, dep.parent)
return 1
dep_pkg, existing_node = self._select_package(dep.root, dep.atom,
onlydeps=dep.onlydeps)
@@ -1925,8 +1944,7 @@
# should have been masked.
raise
if not myarg:
- if consistent:
- self._ignored_deps.append(dep)
+ self._ignored_deps.append(dep)
return 1

if not self._add_pkg(dep_pkg, dep.parent,
@@ -2073,8 +2091,6 @@
return 1
elif pkg.installed and \
"deep" not in self.myparams:
- if "consistent" not in self.myparams:
- return 1
dep_stack = self._ignored_deps

self.spinner.update()
@@ -2585,12 +2601,11 @@
missing += 1
print "Missing binary for:",xs[2]

- if not self._complete_graph():
+ try:
+ self.altlist()
+ except self._unknown_internal_error:
return False, myfavorites

- if not self.validate_blockers():
- return False, myfavorites
-
# We're true here unless we are missing binaries.
return (not missing,myfavorites)

@@ -2967,7 +2982,7 @@
Since this method can consume enough time to disturb users, it is
currently only enabled by the --complete-graph option.
"""
- if "consistent" not in self.myparams:
+ if "complete" not in self.myparams:
# Skip this to avoid consuming enough time to disturb users.
return 1

@@ -3141,24 +3156,25 @@
blocker_cache.BlockerData(counter, blocker_atoms)
if blocker_atoms:
for myatom in blocker_atoms:
- blocker = ("blocks", myroot, myatom[1:])
- myparents = \
- self.blocker_parents.get(blocker, None)
- if not myparents:
- myparents = set()
- self.blocker_parents[blocker] = myparents
- myparents.add(node)
+ blocker = Blocker(atom=myatom[1:], root=myroot)
+ self._blocker_parents.add(blocker, node)
blocker_cache.flush()
del blocker_cache

- for blocker in self.blocker_parents.keys():
+ for blocker in self._blocker_parents.leaf_nodes():
+ self.spinner.update()
mytype, myroot, mydep = blocker
initial_db = self.trees[myroot]["vartree"].dbapi
final_db = self.mydbapi[myroot]
blocked_initial = initial_db.match(mydep)
blocked_final = final_db.match(mydep)
if not blocked_initial and not blocked_final:
- del self.blocker_parents[blocker]
+ parent_pkgs = self._blocker_parents.parent_nodes(blocker)
+ self._blocker_parents.remove(blocker)
+ # Discard any parents that don't have any more blockers.
+ for pkg in parent_pkgs:
+ if not self._blocker_parents.child_nodes(pkg):
+ self._blocker_parents.remove(pkg)
continue
blocked_slots_initial = {}
blocked_slots_final = {}
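Storing Package -> Blocker relationships in a digraph instead of a dict of sets lets the resolver prune with generic graph operations: blockers are leaf nodes, their parents are packages, and both get discarded once unreferenced. A minimal sketch using portage's digraph class, with placeholder node names standing in for Blocker and Package instances:

from portage import digraph

blocker_parents = digraph()
# add(node, parent) creates a parent -> node edge.
blocker_parents.add("blocker-A", "pkg-1")
blocker_parents.add("blocker-A", "pkg-2")
blocker_parents.add("blocker-B", "pkg-2")

# Discard a blocker that matched nothing, then any packages
# left without blockers (the same pruning as above).
blocker = "blocker-B"
parent_pkgs = blocker_parents.parent_nodes(blocker)
blocker_parents.remove(blocker)
for pkg in parent_pkgs:
    if not blocker_parents.child_nodes(pkg):
        blocker_parents.remove(pkg)

print blocker_parents.leaf_nodes()    # -> ['blocker-A']
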
@@ -3170,7 +3186,7 @@
blocked_slots_final[cpv] = \
"%s:%s" % (portage.dep_getkey(cpv),
final_db.aux_get(cpv, ["SLOT"])[0])
- for parent in list(self.blocker_parents[blocker]):
+ for parent in self._blocker_parents.parent_nodes(blocker):
ptype, proot, pcpv, pstatus = parent
pdbapi = self.trees[proot][self.pkg_tree_map[ptype]].dbapi
pslot = pdbapi.aux_get(pcpv, ["SLOT"])[0]
@@ -3248,30 +3264,20 @@
self._pkg_cache[uninst_task] = uninst_task
# Enforce correct merge order with a hard dep.
self.digraph.addnode(uninst_task, inst_task,
- priority=BlockerDepPriority())
+ priority=BlockerDepPriority.instance)
# Count references to this blocker so that it can be
# invalidated after nodes referencing it have been
# merged.
- self.blocker_digraph.addnode(uninst_task, blocker)
+ self._blocker_uninstalls.addnode(uninst_task, blocker)
if not unresolved_blocks and not depends_on_order:
- self.blocker_parents[blocker].remove(parent)
+ self._blocker_parents.remove_edge(blocker, parent)
+ if not self._blocker_parents.parent_nodes(blocker):
+ self._blocker_parents.remove(blocker)
+ if not self._blocker_parents.child_nodes(parent):
+ self._blocker_parents.remove(parent)
if unresolved_blocks:
- self._unresolved_blocker_parents.setdefault(
- blocker, set()).add(parent)
- if not self.blocker_parents[blocker]:
- del self.blocker_parents[blocker]
- # Validate blockers that depend on merge order.
- if not self.blocker_digraph.empty():
- self.altlist()
- if self._slot_collision_info:
- # The user is only notified of a slot collision if there are no
- # unresolvable blocks.
- for x in self.altlist():
- if x[0] == "blocks":
- self._slot_collision_info.clear()
- return True
- if not self._accept_collisions():
- return False
+ self._unsolvable_blockers.add(blocker, parent)
+
return True

def _accept_collisions(self):
@@ -3294,27 +3300,44 @@
mygraph.order.sort(cmp_merge_preference)

def altlist(self, reversed=False):
- if reversed in self._altlist_cache:
- return self._altlist_cache[reversed][:]
+
+ while self._serialized_tasks_cache is None:
+ self._resolve_conflicts()
+ try:
+ self._serialized_tasks_cache = self._serialize_tasks()
+ except self._serialize_tasks_retry:
+ pass
+
+ retlist = self._serialized_tasks_cache[:]
if reversed:
- retlist = self.altlist()
retlist.reverse()
- self._altlist_cache[reversed] = retlist[:]
- return retlist
+ return retlist
+
+ def _resolve_conflicts(self):
+ if not self._complete_graph():
+ raise self._unknown_internal_error()
+
+ if not self.validate_blockers():
+ raise self._unknown_internal_error()
+
+ def _serialize_tasks(self):
mygraph=self.digraph.copy()
# Prune "nomerge" root nodes if nothing depends on them, since
# otherwise they slow down merge order calculation. Don't remove
# non-root nodes since they help optimize merge order in some cases
# such as revdep-rebuild.
+ removed_nodes = set()
while True:
- removed_something = False
for node in mygraph.root_nodes():
if not isinstance(node, Package) or \
node.installed or node.onlydeps:
- mygraph.remove(node)
- removed_something = True
- if not removed_something:
+ removed_nodes.add(node)
+ if removed_nodes:
+ self.spinner.update()
+ mygraph.difference_update(removed_nodes)
+ if not removed_nodes:
break
+ removed_nodes.clear()
self._merge_order_bias(mygraph)
def cmp_circular_bias(n1, n2):
"""
@@ -3331,14 +3354,16 @@
elif n1_n2_medium:
return 1
return -1
- myblockers = self.blocker_digraph.copy()
+ myblocker_uninstalls = self._blocker_uninstalls.copy()
retlist=[]
# Contains any Uninstall tasks that have been ignored
# in order to avoid the circular deps code path. These
# correspond to blocker conflicts that could not be
# resolved.
ignored_uninstall_tasks = set()
- blocker_deps = None
+ have_uninstall_task = False
+ complete = "complete" in self.myparams
+ myblocker_parents = self._blocker_parents.copy()
asap_nodes = []
portage_node = None
def get_nodes(**kwargs):
@@ -3385,6 +3410,7 @@
# unresolved blockers or circular dependencies.

while not mygraph.empty():
+ self.spinner.update()
selected_nodes = None
ignore_priority = None
if prefer_asap and asap_nodes:
@@ -3510,13 +3536,13 @@
selected_nodes = list(selected_nodes)
selected_nodes.sort(cmp_circular_bias)

- if not selected_nodes and not myblockers.is_empty():
+ if not selected_nodes and not myblocker_uninstalls.is_empty():
# An Uninstall task needs to be executed in order to
# avoid conflict if possible.

min_parent_deps = None
uninst_task = None
- for task in myblockers.leaf_nodes():
+ for task in myblocker_uninstalls.leaf_nodes():
# Do some sanity checks so that system or world packages
# don't get uninstalled inappropriately here (only really
# necessary when --complete-graph has not been enabled).
@@ -3528,43 +3554,54 @@
inst_pkg = self._pkg_cache[
("installed", task.root, task.cpv, "nomerge")]

- # For packages in the system set, don't take
- # any chances. If the conflict can't be resolved
- # by a normal upgrade operation then require
- # user intervention.
- skip = False
- try:
- for atom in root_config.sets[
- "system"].iterAtomsForPackage(task):
- skip = True
- break
- except portage.exception.InvalidDependString:
- skip = True
- if skip:
+ if self.digraph.contains(inst_pkg):
continue

- # For packages in the world set, go ahead an uninstall
- # when necessary, as long as the atom will be satisfied
- # in the final state.
- graph_db = self.mydbapi[task.root]
- try:
- for atom in root_config.sets[
- "world"].iterAtomsForPackage(task):
- satisfied = False
- for cpv in graph_db.match(atom):
- if cpv == inst_pkg.cpv and \
- inst_pkg in graph_db:
- continue
- satisfied = True
- break
- if not satisfied:
+ if "/" == task.root:
+ # For packages in the system set, don't take
+ # any chances. If the conflict can't be resolved
+ # by a normal replacement operation then abort.
+ skip = False
+ try:
+ for atom in root_config.sets[
+ "system"].iterAtomsForPackage(task):
skip = True
break
- except portage.exception.InvalidDependString:
- skip = True
- if skip:
- continue
+ except portage.exception.InvalidDependString:
+ skip = True
+ if skip:
+ continue

+ # Note that the world check isn't always
+ # necessary since self._complete_graph() will
+ # add all packages from the system and world sets to the
+ # graph. This just allows unresolved conflicts to be
+ # detected as early as possible, which makes it possible
+ # to avoid calling self._complete_graph() when it is
+ # unnecessary due to blockers triggering an abortion.
+ if not complete:
+ # For packages in the world set, go ahead an uninstall
+ # when necessary, as long as the atom will be satisfied
+ # in the final state.
+ graph_db = self.mydbapi[task.root]
+ try:
+ for atom in root_config.sets[
+ "world"].iterAtomsForPackage(task):
+ satisfied = False
+ for cpv in graph_db.match(atom):
+ if cpv == inst_pkg.cpv and \
+ inst_pkg in graph_db:
+ continue
+ satisfied = True
+ break
+ if not satisfied:
+ skip = True
+ break
+ except portage.exception.InvalidDependString:
+ skip = True
+ if skip:
+ continue
+
# Check the deps of parent nodes to ensure that
# the chosen task produces a leaf node. Maybe
# this can be optimized some more to make the
@@ -3590,7 +3627,7 @@
# to avoid the circular deps code path, but the
# blocker will still be counted as an unresolved
# conflict.
- for node in myblockers.leaf_nodes():
+ for node in myblocker_uninstalls.leaf_nodes():
try:
mygraph.remove(node)
except KeyError:
@@ -3650,12 +3687,16 @@
prefer_asap = True
accept_root_node = False

+ mygraph.difference_update(selected_nodes)
+
for node in selected_nodes:

# Handle interactions between blockers
# and uninstallation tasks.
+ solved_blockers = set()
uninst_task = None
if isinstance(node, Uninstall):
+ have_uninstall_task = True
uninst_task = node
else:
vardb = self.trees[node.root]["vartree"].dbapi
@@ -3672,33 +3713,45 @@
pass
if uninst_task is not None and \
uninst_task not in ignored_uninstall_tasks and \
- myblockers.contains(uninst_task):
- myblockers.remove(uninst_task)
- for blocker in myblockers.root_nodes():
- if myblockers.child_nodes(blocker):
- continue
- myblockers.remove(blocker)
- unresolved = \
- self._unresolved_blocker_parents.get(blocker)
- if unresolved:
- self.blocker_parents[blocker] = unresolved
- else:
- del self.blocker_parents[blocker]
+ myblocker_uninstalls.contains(uninst_task):
+ blocker_nodes = myblocker_uninstalls.parent_nodes(uninst_task)
+ myblocker_uninstalls.remove(uninst_task)
+ # Discard any blockers that this Uninstall solves.
+ for blocker in blocker_nodes:
+ if not myblocker_uninstalls.child_nodes(blocker):
+ myblocker_uninstalls.remove(blocker)
+ solved_blockers.add(blocker)

if node[-1] != "nomerge":
- retlist.append(list(node))
- mygraph.remove(node)
+ retlist.append(node)

- if not reversed:
- """Blocker validation does not work with reverse mode,
- so self.altlist() should first be called with reverse disabled
- so that blockers are properly validated."""
- self.blocker_digraph = myblockers
+ if isinstance(node, Uninstall):
+ # Include satisfied blockers in the merge list so
+ # that the user can see why the package had to be
+ # uninstalled in advance rather than through
+ # replacement.
+ for blocker in solved_blockers:
+ retlist.append(Blocker(atom=blocker.atom,
+ root=blocker.root, satisfied=True))

- """ Add any unresolved blocks so that they can be displayed."""
- for blocker in self.blocker_parents:
- retlist.append(list(blocker))
- self._altlist_cache[reversed] = retlist[:]
+ unsolvable_blockers = set(self._unsolvable_blockers.leaf_nodes())
+ for node in myblocker_uninstalls.root_nodes():
+ unsolvable_blockers.add(node)
+
+ for blocker in unsolvable_blockers:
+ retlist.append(blocker)
+
+ # If any Uninstall tasks need to be executed in order
+ # to avoid a conflict, complete the graph with any
+ # dependencies that may have been initially
+ # neglected (to ensure that unsafe Uninstall tasks
+ # are properly identified and blocked from execution).
+ if have_uninstall_task and \
+ not complete and \
+ not unsolvable_blockers:
+ self.myparams.add("complete")
+ raise self._serialize_tasks_retry("")
+
return retlist

def display(self, mylist, favorites=[], verbosity=None):
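The altlist() rewrite above turns conflict handling into a retry loop: _serialize_tasks() may discover mid-serialization that the graph is too incomplete to trust an Uninstall decision, flip the "complete" parameter on, and raise _serialize_tasks_retry so the whole computation restarts. The bare control-flow pattern, with stand-in names and a stub that fails once before succeeding:

class RetrySerialization(Exception):
    """Raised when the graph was amended and must be re-serialized."""

def compute_task_list():
    outcomes = [RetrySerialization, None]   # fail once, then succeed

    def serialize_tasks():
        outcome = outcomes.pop(0)
        if outcome is not None:
            raise outcome()                 # e.g. "complete" was just enabled
        return ["task-1", "task-2"]

    tasks = None
    while tasks is None:
        # A real resolver would re-validate the (possibly extended)
        # graph here before each serialization attempt.
        try:
            tasks = serialize_tasks()
        except RetrySerialization:
            continue
    return tasks[:]                         # callers get a copy, like altlist()

print compute_task_list()
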
@@ -3787,15 +3840,53 @@

tree_nodes = []
display_list = []
- mygraph = self.digraph
+ mygraph = self.digraph.copy()
+
+ # If there are any Uninstall instances, add the corresponding
+ # blockers to the digraph (useful for --tree display).
+ for uninstall in self._blocker_uninstalls.leaf_nodes():
+ uninstall_parents = \
+ self._blocker_uninstalls.parent_nodes(uninstall)
+ if not uninstall_parents:
+ continue
+
+ # Remove the corresponding "nomerge" node and substitute
+ # the Uninstall node.
+ inst_pkg = self._pkg_cache[
+ ("installed", uninstall.root, uninstall.cpv, "nomerge")]
+ try:
+ mygraph.remove(inst_pkg)
+ except KeyError:
+ pass
+
+ try:
+ inst_pkg_blockers = self._blocker_parents.child_nodes(inst_pkg)
+ except KeyError:
+ inst_pkg_blockers = []
+
+ # Break the Package -> Uninstall edges.
+ mygraph.remove(uninstall)
+
+ # Resolution of a package's blockers
+ # depend on it's own uninstallation.
+ for blocker in inst_pkg_blockers:
+ mygraph.add(uninstall, blocker)
+
+ # Expand Package -> Uninstall edges into
+ # Package -> Blocker -> Uninstall edges.
+ for blocker in uninstall_parents:
+ mygraph.add(uninstall, blocker)
+ for parent in self._blocker_parents.parent_nodes(blocker):
+ if parent != inst_pkg:
+ mygraph.add(blocker, parent)
+
i = 0
depth = 0
shown_edges = set()
for x in mylist:
- if "blocks" == x[0]:
- display_list.append((x, 0, True))
- continue
- graph_key = tuple(x)
+ graph_key = x
+ if isinstance(graph_key, list):
+ graph_key = tuple(graph_key)
if "--tree" in self.myopts:
depth = len(tree_nodes)
while depth and graph_key not in \
@@ -3821,7 +3912,7 @@
selected_parent = None
# First, try to avoid a direct cycle.
for node in parent_nodes:
- if not isinstance(node, Package):
+ if not isinstance(node, (Blocker, Package)):
continue
if node not in traversed_nodes and \
node not in child_nodes:
@@ -3833,7 +3924,7 @@
if not selected_parent:
# A direct cycle is unavoidable.
for node in parent_nodes:
- if not isinstance(node, Package):
+ if not isinstance(node, (Blocker, Package)):
continue
if node not in traversed_nodes:
edge = (current_node, node)
@@ -3845,7 +3936,7 @@
shown_edges.add((current_node, selected_parent))
traversed_nodes.add(selected_parent)
add_parents(selected_parent, False)
- display_list.append((list(current_node),
+ display_list.append((current_node,
len(tree_nodes), ordered))
tree_nodes.append(current_node)
tree_nodes = []
@@ -3864,8 +3955,6 @@
# being filled in.
del mylist[i]
continue
- if "blocks" == graph_key[0]:
- continue
if ordered and graph_key[-1] != "nomerge":
last_merge_depth = depth
continue
@@ -3892,6 +3981,7 @@
pkgsettings = self.pkgsettings[myroot]

fetch=" "
+ indent = " " * depth

if x[0]=="blocks":
addl=""+red("B")+" "+fetch+" "
@@ -3902,8 +3992,8 @@
if "--columns" in self.myopts and "--quiet" in self.myopts:
addl = addl + " " + red(resolved)
else:
- addl = "[blocks " + addl + "] " + red(resolved)
- block_parents = self.blocker_parents[tuple(x)]
+ addl = "[blocks " + addl + "] " + indent + red(resolved)
+ block_parents = self._blocker_parents.parent_nodes(tuple(x))
block_parents = set([pnode[2] for pnode in block_parents])
block_parents = ", ".join(block_parents)
if resolved!=x[2]:
@@ -3911,7 +4001,10 @@
(pkg_key, block_parents)
else:
addl += bad(" (is blocking %s)") % block_parents
- blockers.append(addl)
+ if isinstance(x, Blocker) and x.satisfied:
+ p.append(addl)
+ else:
+ blockers.append(addl)
else:
pkg = self._pkg_cache[tuple(x)]
metadata = pkg.metadata
@@ -3919,14 +4012,6 @@
pkg_merge = ordered and pkg_status == "merge"
if not pkg_merge and pkg_status == "merge":
pkg_status = "nomerge"
- if pkg_status == "uninstall":
- mydbapi = vardb
- elif pkg in self._slot_collision_nodes or pkg.onlydeps:
- # The metadata isn't cached due to a slot collision or
- # --onlydeps.
- mydbapi = self.trees[myroot][self.pkg_tree_map[pkg_type]].dbapi
- else:
- mydbapi = self.mydbapi[myroot] # contains cached metadata
ebuild_path = None
if pkg_type == "binary":
repo_name = self.roots[myroot].settings.get("PORTAGE_BINHOST")
@@ -3947,12 +4032,11 @@
pkg_use = metadata["USE"].split()
try:
restrict = flatten(use_reduce(paren_reduce(
- mydbapi.aux_get(pkg_key, ["RESTRICT"])[0]),
- uselist=pkg_use))
+ pkg.metadata["RESTRICT"]), uselist=pkg_use))
except portage.exception.InvalidDependString, e:
if not pkg.installed:
- restrict = mydbapi.aux_get(pkg_key, ["RESTRICT"])[0]
- show_invalid_depstring_notice(x, restrict, str(e))
+ show_invalid_depstring_notice(x,
+ pkg.metadata["RESTRICT"], str(e))
del e
return 1
restrict = []
@@ -3981,10 +4065,7 @@
elif installed_versions and \
portage.cpv_getkey(installed_versions[0]) == \
portage.cpv_getkey(pkg_key):
- mynewslot = mydbapi.aux_get(pkg_key, ["SLOT"])[0]
- slot_atom = "%s:%s" % \
- (portage.cpv_getkey(pkg_key), mynewslot)
- myinslotlist = vardb.match(slot_atom)
+ myinslotlist = vardb.match(pkg.slot_atom)
# If this is the first install of a new-style virtual, we
# need to filter out old-style virtual matches.
if myinslotlist and \
@@ -4030,10 +4111,10 @@
if True:
# USE flag display
cur_iuse = list(filter_iuse_defaults(
- mydbapi.aux_get(pkg_key, ["IUSE"])[0].split()))
+ pkg.metadata["IUSE"].split()))

forced_flags = set()
- pkgsettings.setcpv(pkg_key, mydb=mydbapi) # for package.use.{mask,force}
+ pkgsettings.setcpv(pkg.cpv, mydb=pkg.metadata) # for package.use.{mask,force}
forced_flags.update(pkgsettings.useforce)
forced_flags.update(pkgsettings.usemask)

@@ -4202,8 +4283,6 @@
oldlp = mywidth - 30
newlp = oldlp - 30

- indent = " " * depth
-
# Convert myoldbest from a list to a string.
if not myoldbest:
myoldbest = ""
@@ -4366,8 +4445,19 @@
to ensure that the user is notified of problems with the graph.
"""

- self._show_slot_collision_notice()
+ task_list = self._serialized_tasks_cache

+ # Any blockers must be appended to the tail of the list,
+ # so we only need to check the last item.
+ have_blocker_conflict = bool(task_list and \
+ (isinstance(task_list[-1], Blocker) and \
+ not task_list[-1].satisfied))
+
+ # The user is only notified of a slot conflict if
+ # there are no unresolvable blocker conflicts.
+ if not have_blocker_conflict:
+ self._show_slot_collision_notice()
+
# TODO: Add generic support for "set problem" handlers so that
# the below warnings aren't special cases for world only.

@@ -4594,6 +4684,22 @@
fakedb[myroot].cpv_inject(pkg)
self.spinner.update()

+ class _unknown_internal_error(portage.exception.PortageException):
+ """
+ Used by the depgraph internally to terminate graph creation.
+ The specific reason for the failure should have been dumped
+ to stderr, unfortunately, the exact reason for the failure
+ may not be known.
+ """
+
+ class _serialize_tasks_retry(portage.exception.PortageException):
+ """
+ This is raised by the _serialize_tasks() method when it needs to
+ be called again for some reason. The only case that it's currently
+ used for is when neglected dependencies need to be added to the
+ graph in order to avoid making a potentially unsafe decision.
+ """
+
class _dep_check_composite_db(portage.dbapi):
"""
A dbapi-like interface that is optimized for use in dep_check() calls.
@@ -4959,7 +5065,8 @@
world_set = root_config.sets["world"]
if "--resume" not in self.myopts:
mymergelist = mylist
- mtimedb["resume"]["mergelist"]=mymergelist[:]
+ mtimedb["resume"]["mergelist"] = [list(x) for x in mymergelist \
+ if isinstance(x, (Package, Uninstall))]
mtimedb.commit()

myfeat = self.settings.features[:]
@@ -7518,6 +7625,8 @@
return retval
mergecount=0
for x in mydepgraph.altlist():
+ if isinstance(x, Blocker) and x.satisfied:
+ continue
if x[0] != "blocks" and x[3] != "nomerge":
mergecount+=1
#check for blocking dependencies
@@ -7635,6 +7744,8 @@
pkglist = [pkg for pkg in pkglist if pkg[0] != "blocks"]
else:
for x in pkglist:
+ if isinstance(x, Blocker) and x.satisfied:
+ continue
if x[0] != "blocks":
continue
retval = mydepgraph.display(mydepgraph.altlist(

Modified: main/branches/prefix/pym/portage/__init__.py
===================================================================
--- main/branches/prefix/pym/portage/__init__.py 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/pym/portage/__init__.py 2008-04-27 14:56:09 UTC (rev 9999)
@@ -394,6 +394,48 @@
del self.nodes[node]
self.order.remove(node)

+ def difference_update(self, t):
+ """
+ Remove all given nodes from node_set. This is more efficient
+ than multiple calls to the remove() method.
+ """
+ if isinstance(t, (list, tuple)) or \
+ not hasattr(t, "__contains__"):
+ t = frozenset(t)
+ order = []
+ for node in self.order:
+ if node not in t:
+ order.append(node)
+ continue
+ for parent in self.nodes[node][1]:
+ del self.nodes[parent][0][node]
+ for child in self.nodes[node][0]:
+ del self.nodes[child][1][node]
+ del self.nodes[node]
+ self.order = order
+
+ def remove_edge(self, child, parent):
+ """
+ Remove edge in the direction from child to parent. Note that it is
+ possible for a remaining edge to exist in the opposite direction.
+ Any endpoint vertices that become isolated will remain in the graph.
+ """
+
+ # Nothing should be modified when a KeyError is raised.
+ for k in parent, child:
+ if k not in self.nodes:
+ raise KeyError(k)
+
+ # Make sure the edge exists.
+ if child not in self.nodes[parent][0]:
+ raise KeyError(child)
+ if parent not in self.nodes[child][1]:
+ raise KeyError(parent)
+
+ # Remove the edge.
+ del self.nodes[child][1][parent]
+ del self.nodes[parent][0][child]
+
def contains(self, node):
"""Checks if the digraph contains mynode"""
return node in self.nodes

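difference_update() above exists because digraph.order is a plain list: removing n nodes one at a time costs one O(len(order)) list.remove() each, while a single filtering pass handles the whole batch. The core trick, shown on a bare list with placeholder node names:

order = ["a", "b", "c", "d", "e"]
doomed = frozenset(["b", "d"])

# One pass instead of len(doomed) separate list.remove() calls;
# edge cleanup happens per doomed node, as in difference_update().
order = [node for node in order if node not in doomed]
assert order == ["a", "c", "e"]
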
Modified: main/branches/prefix/pym/portage/dbapi/porttree.py
===================================================================
--- main/branches/prefix/pym/portage/dbapi/porttree.py 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/pym/portage/dbapi/porttree.py 2008-04-27 14:56:09 UTC (rev 9999)
@@ -521,14 +521,6 @@
l.sort()
return l

- def p_list(self,mycp):
- d={}
- for oroot in self.porttrees:
- for x in listdir(oroot+"/"+mycp,EmptyOnError=1,ignorecvs=1):
- if x[-7:]==".ebuild":
- d[x[:-7]] = None
- return d.keys()
-
def cp_list(self, mycp, use_cache=1, mytree=None):
if self.frozen and mytree is None:
cachelist = self.xcache["cp-list"].get(mycp)

Modified: main/branches/prefix/pym/portage/dbapi/vartree.py
===================================================================
--- main/branches/prefix/pym/portage/dbapi/vartree.py 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/pym/portage/dbapi/vartree.py 2008-04-27 14:56:09 UTC (rev 9999)
@@ -125,6 +125,76 @@
rValue[self._data[cps][0]] = self._data[cps][2]
return rValue

+class LinkageMap(object):
+ def __init__(self, vardbapi):
+ self._dbapi = vardbapi
+ self._libs = {}
+ self._obj_properties = {}
+ self._defpath = getlibpaths()
+
+ def rebuild(self):
+ libs = {}
+ obj_properties = {}
+ for cpv in self._dbapi.cpv_all():
+ lines = grabfile(self._dbapi.getpath(cpv, filename="NEEDED.2"))
+ for l in lines:
+ fields = l.strip("\n").split(";")
+ if len(fields) < 5:
+ print "Error", fields
+ # insufficient field length
+ continue
+ arch = fields[0]
+ obj = fields[1]
+ soname = fields[2]
+ path = fields[3].replace("${ORIGIN}", os.path.dirname(obj)).replace("$ORIGIN", os.path.dirname(obj)).split(":")
+ needed = fields[4].split(",")
+ if soname:
+ libs.setdefault(soname, {arch: {"providers": [], "consumers": []}})
+ libs[soname].setdefault(arch, {"providers": [], "consumers": []})
+ libs[soname][arch]["providers"].append(obj)
+ for x in needed:
+ libs.setdefault(x, {arch: {"providers": [], "consumers": []}})
+ libs[x].setdefault(arch, {"providers": [], "consumers": []})
+ libs[x][arch]["consumers"].append(obj)
+ obj_properties[obj] = (arch, path, needed, soname)
+
+ self._libs = libs
+ self._obj_properties = obj_properties
+
+ def findProviders(self, obj):
+ obj = os.path.realpath(obj)
+ rValue = {}
+ if obj not in self._obj_properties:
+ raise KeyError("%s not in object list" % obj)
+ arch, path, needed, soname = self._obj_properties[obj]
+ path.extend(self._defpath)
+ path = [os.path.realpath(x) for x in path]
+ for x in needed:
+ rValue[x] = set()
+ if x not in self._libs or arch not in self._libs[x]:
+ continue
+ for y in self._libs[x][arch]["providers"]:
+ if x[0] == os.sep and os.path.realpath(x) == os.path.realpath(y):
+ rValue[x].add(y)
+ elif os.path.realpath(os.path.dirname(y)) in path:
+ rValue[x].add(y)
+ return rValue
+
+ def findConsumers(self, obj):
+ obj = os.path.realpath(obj)
+ rValue = set()
+ for soname in self._libs:
+ for arch in self._libs[soname]:
+ if obj in self._libs[soname][arch]["providers"]:
+ for x in self._libs[soname][arch]["consumers"]:
+ path = self._obj_properties[x][1]
+ path = [os.path.realpath(y) for y in path+self._defpath]
+ if soname[0] == os.sep and os.path.realpath(soname) == os.path.realpath(obj):
+ rValue.add(x)
+ elif os.path.realpath(os.path.dirname(obj)) in path:
+ rValue.add(x)
+ return rValue
+
class LibraryPackageMap(object):
""" This class provides a library->consumer mapping generated from VDB data """
def __init__(self, filename, vardbapi):
@@ -1649,7 +1719,12 @@

for lib in list(preserve_libs):
if not has_external_consumers(lib, old_contents, preserve_libs):
- preserve_libs.remove(lib)
+ preserve_libs.remove(lib)
+ # only preserve the lib if there is no other copy in the search path
+ for path in getlibpaths():
+ fullname = os.path.join(path, lib)
+ if fullname not in old_contents and os.path.exists(fullname) and lib in preserve_libs:
+ preserve_libs.remove(lib)

# get the real paths for the libs
preserve_paths = [x for x in old_contents if os.path.basename(x) in preserve_libs]

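LinkageMap indexes every installed ELF object by soname and architecture, so preserve-libs can ask which objects provide a library and which consume it. A usage sketch, assuming the usual global portage trees are initialized and that /usr/lib/libz.so.1 is a registered object (findProviders() raises KeyError otherwise; the path is illustrative):

import portage
from portage.dbapi.vartree import LinkageMap

vardb = portage.db[portage.settings["ROOT"]]["vartree"].dbapi
linkmap = LinkageMap(vardb)
linkmap.rebuild()    # parses NEEDED.2 for every installed package

# soname -> set of objects that can satisfy it, for each NEEDED
# entry of the given object:
print linkmap.findProviders("/usr/lib/libz.so.1")
# every object whose NEEDED entries (plus search path) resolve to it:
print linkmap.findConsumers("/usr/lib/libz.so.1")
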
Modified: main/branches/prefix/pym/portage/sets/__init__.py
===================================================================
--- main/branches/prefix/pym/portage/sets/__init__.py 2008-04-27 09:19:20 UTC (rev 9998)
+++ main/branches/prefix/pym/portage/sets/__init__.py 2008-04-27 14:56:09 UTC (rev 9999)
@@ -102,7 +102,7 @@
def load_default_config(settings, trees):
setconfigpaths = [os.path.join(GLOBAL_CONFIG_PATH, "sets.conf")]
setconfigpaths.append(os.path.join(settings["PORTDIR"], "sets.conf"))
- setconfigpaths += [os.path.join(x, "sets.conf") for x in settings["PORDIR_OVERLAY"].split()]
+ setconfigpaths += [os.path.join(x, "sets.conf") for x in settings["PORTDIR_OVERLAY"].split()]
setconfigpaths.append(os.path.join(settings["PORTAGE_CONFIGROOT"],
USER_CONFIG_PATH.lstrip(os.path.sep), "sets.conf"))
return SetConfig(setconfigpaths, settings, trees)

1303 --
1304 gentoo-commits@l.g.o mailing list