Gentoo Archives: gentoo-doc-cvs

From: Xavier Neys <neysx@×××××××××××.org>
To: gentoo-doc-cvs@l.g.o
Subject: [gentoo-doc-cvs] cvs commit: l-posix1.xml
Date: Wed, 03 Aug 2005 10:36:50
Message-Id: 200508031036.j73Aa0vD021343@robin.gentoo.org
1 neysx 05/08/03 10:36:19
2
3 Added: xml/htdocs/doc/en/articles l-posix1.xml l-posix2.xml
4 l-posix3.xml
5 Log:
6 #100538 xmlified posix articles
7
8 Revision Changes Path
9 1.1 xml/htdocs/doc/en/articles/l-posix1.xml
10
11 file : http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix1.xml?rev=1.1&content-type=text/x-cvsweb-markup&cvsroot=gentoo
12 plain: http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix1.xml?rev=1.1&content-type=text/plain&cvsroot=gentoo
13
14 Index: l-posix1.xml
15 ===================================================================
16 <?xml version='1.0' encoding="UTF-8"?>
17 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/articles/l-posix1.xml,v 1.1 2005/08/03 10:36:19 neysx Exp $ -->
18 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
19
20 <guide link="/doc/en/articles/l-posix1.xml">
21 <title>POSIX threads explained, part 1</title>
22
23 <author title="Author">
24 <mail link="drobbins@g.o">Daniel Robbins</mail>
25 </author>
26 <author title="Editor">
27 <mail link="rane@××××××.pl">Łukasz Damentko</mail>
28 </author>
29
30 <abstract>
31 POSIX (Portable Operating System Interface) threads are a great way to increase
32 the responsiveness and performance of your code. In this series, Daniel Robbins
33 shows you exactly how to use threads in your code. A lot of behind-the-scenes
34 details are covered, so by the end of this series you'll really be ready to
35 create your own multithreaded programs.
36 </abstract>
37
38 <!-- The original version of this article was published on IBM developerWorks,
39 and is property of Westtech Information Services. This document is an updated
40 version of the original article, and contains various improvements made by the
41 Gentoo Linux Documentation team -->
42
43 <version>1.0</version>
44 <date>2005-07-27</date>
45
46 <chapter>
47 <title>A simple and nimble tool for memory sharing</title>
48 <section>
49 <title>Threads are fun</title>
50 <body>
51
52 <note>
53 The original version of this article was published on IBM developerWorks, and is
54 property of Westtech Information Services. This document is an updated version
55 of the original article, and contains various improvements made by the Gentoo
56 Linux Documentation team.
57 </note>
58
59 <p>
60 Knowing how to properly use threads should be part of every good programmer's
61 repertoire. Threads are similar to processes. Threads, like processes, are
62 time-sliced by the kernel. On uniprocessor systems the kernel uses time slicing
63 to simulate simultaneous execution of threads in much the same way it uses time
64 slicing with processes. And, on multiprocessor systems, threads are actually
65 able to run simultaneously, just like two or more processes can.
66 </p>
67
68 <p>
69 So why is multithreading preferable to multiple independent processes for most
70 cooperative tasks? Well, threads share the same memory space. Independent
71 threads can access the same variables in memory. So all of your program's
72 threads can read or write the declared global integers. If you've ever
73 programmed any non-trivial code that uses fork(), you'll recognize the
74 importance of this tool. Why? While fork() allows you to create multiple
75 processes, it also creates the following communication problem: how to get
76 multiple processes, each with their own independent memory space, to
77 communicate. There is no one simple answer to this problem. While there are many
78 different kinds of local IPC (inter-process communication), they all suffer from
79 two important drawbacks:
80 </p>
81
82 <ul>
83 <li>
84 They impose some form of additional kernel overhead, lowering performance.
85 </li>
86 <li>
87 In almost all situations, IPC is not a "natural" extension of your code. It
88 often dramatically increases the complexity of your program.
89 </li>
90 </ul>
91
92 <p>
93 Double bummer: overhead and complication aren't good things. If you've ever had
94 to make massive modifications to one of your programs so that it supports IPC,
95 you'll really appreciate the simple memory-sharing approach that threads
96 provide. POSIX threads don't need to make expensive and complicated
97 long-distance calls because all our threads happen to live in the same house.
98 With a little synchronization, all your threads can read and modify your
99 program's existing data structures. You don't have to pump the data through a
100 file descriptor or squeeze it into a tight, shared memory space. For this reason
101 alone you should consider the one process/multithread model rather than the
102 multiprocess/single-thread model.
103 </p>
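
<p>
To make the contrast concrete, here is a minimal sketch (not part of the
original article) of what happens when a fork()ed child modifies a global
variable: the child only changes its own private copy of memory, so the parent
never sees the new value. With threads, both would be reading and writing the
very same variable.
</p>

<pre caption="fork() children do not share memory (illustrative sketch)">
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;sys/wait.h&gt;
#include &lt;unistd.h&gt;

int counter = 0;

int main(void) {
  pid_t pid = fork();
  if ( pid &lt; 0 ) {
    perror("fork");
    exit(1);
  }
  if ( pid == 0 ) {
    <comment>/* child: this write lands in the child's private copy of memory */</comment>
    counter = counter + 1;
    exit(0);
  }
  wait(NULL);
  <comment>/* the parent still prints 0; the child's increment never reached us */</comment>
  printf("counter in parent: %d\n", counter);
  return 0;
}
</pre>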
104
105 </body>
106 </section>
107 <section>
108 <title>Threads are nimble</title>
109 <body>
110
111 <p>
112 But there's more. Threads also happen to be extremely nimble. Compared to a
113 standard fork(), they carry a lot less overhead. The kernel does not need to
114 make a new independent copy of the process memory space, file descriptors, etc.
115 That saves a lot of CPU time, making thread creation ten to a hundred times
116 faster than new process creation. Because of this, you can use a whole bunch of
117 threads and not worry too much about the CPU and memory overhead incurred. You
118 don't have a big CPU hit the way you do with fork(). This means you can
119 generally create threads whenever it makes sense in your program.
120 </p>
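
<p>
If you would like to get a rough feel for that difference on your own machine,
the following sketch (not part of the original article; the iteration count is
an arbitrary choice) times a loop of pthread_create()/pthread_join() pairs
against a loop of fork()/waitpid() pairs.
</p>

<pre caption="Rough timing of thread creation vs. process creation (sketch)">
#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;sys/time.h&gt;
#include &lt;sys/wait.h&gt;
#include &lt;unistd.h&gt;

#define COUNT 1000

void *noop(void *arg) { return NULL; }

double elapsed(struct timeval *a, struct timeval *b) {
  return (b->tv_sec - a->tv_sec) + (b->tv_usec - a->tv_usec) / 1e6;
}

int main(void) {
  struct timeval start, stop;
  int i;

  gettimeofday(&amp;start, NULL);
  for ( i=0; i&lt;COUNT; i++ ) {
    pthread_t t;
    pthread_create(&amp;t, NULL, noop, NULL);
    pthread_join(t, NULL);
  }
  gettimeofday(&amp;stop, NULL);
  printf("%d thread create/join pairs: %.3f seconds\n", COUNT, elapsed(&amp;start, &amp;stop));

  gettimeofday(&amp;start, NULL);
  for ( i=0; i&lt;COUNT; i++ ) {
    pid_t pid = fork();
    if ( pid == 0 )
      _exit(0);
    waitpid(pid, NULL, 0);
  }
  gettimeofday(&amp;stop, NULL);
  printf("%d fork/wait pairs:          %.3f seconds\n", COUNT, elapsed(&amp;start, &amp;stop));

  return 0;
}
</pre>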
121
122 <p>
123 Of course, just like processes, threads will take advantage of multiple CPUs.
124 This is a really great feature if your software is designed to be used on a
125 multiprocessor machine (if the software is open source, it will probably end up
126 running on quite a few of these). The performance of certain kinds of threaded
127 programs (CPU-intensive ones in particular) will scale almost linearly with the
128 number of processors in the system. If you're writing a program that is very
129 CPU-intensive, you'll definitely want to find ways to use multiple threads in
130 your code. Once you're adept at writing threaded code, you'll also be able to
131 approach coding challenges in new and creative ways without a lot of IPC red
132 tape and miscellaneous mumbo-jumbo. All these benefits work synergistically to
133 make multithreaded programming fun, fast, and flexible.
134 </p>
135
136 </body>
137 </section>
138 <section>
139 <title>I think I'm a clone now</title>
140 <body>
141
142 <p>
143 If you've been in the Linux programming world for a while, you may know about
144 the __clone() system call. __clone() is similar to fork(), but allows you to do
145 lots of things that threads can do. For example, with __clone() you can
146 selectively share parts of your parent's execution context (memory space, file
147 descriptors, etc.) with a new child process. That's a good thing. But there is
148 also a not-so-good thing about __clone(). As the __clone() man page states:
149 </p>
150
151 <pre caption="__clone() man page excerpt">
152 "The __clone call is Linux-specific and should not be used in programs
153 intended to be portable. For programming threaded applications (multiple
154 threads of control in the same memory space), it is better to use a library
155 implementing the POSIX 1003.1c thread API, such as the Linux-Threads
156 library. See pthread_create(3thr)."
157 </pre>
158
159 <p>
160 So, while __clone() offers many of the benefits of threads, it is not portable.
161 That doesn't mean you shouldn't use it in your code. But you should weigh this
162 fact when you are considering using __clone() in your software. Fortunately, as
163 the __clone() man page states, there's a better alternative: POSIX threads. When
164 you want to write portable multithreaded code, code that works under Solaris,
165 FreeBSD, Linux, and more, POSIX threads are the way to go.
166 </p>
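
<p>
To make the comparison with fork() a bit more concrete, here is a minimal,
Linux-only sketch of a direct clone() call (through the glibc wrapper). It is
not from the original article, and the flag selection and stack handling shown
here are illustrative assumptions; see the clone(2) man page for the details.
</p>

<pre caption="Minimal clone() sketch (Linux-specific, illustrative only)">
#define _GNU_SOURCE
#include &lt;sched.h&gt;
#include &lt;signal.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;sys/wait.h&gt;
#include &lt;unistd.h&gt;

int shared = 0;

int child_fn(void *arg) {
  <comment>/* visible to the parent because CLONE_VM shares the memory space */</comment>
  shared = 42;
  return 0;
}

int main(void) {
  const int stack_size = 64 * 1024;
  char *stack = malloc(stack_size);
  pid_t pid;

  if ( stack == NULL ) {
    perror("malloc");
    exit(1);
  }
  <comment>/* the stack grows downward on most architectures, so pass its top */</comment>
  pid = clone(child_fn, stack + stack_size,
              CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
  if ( pid == -1 ) {
    perror("clone");
    exit(1);
  }
  waitpid(pid, NULL, 0);
  printf("shared = %d\n", shared);
  free(stack);
  return 0;
}
</pre>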
167
168 </body>
169 </section>
170 <section>
171 <title>Beginning threads</title>
172 <body>
173
174 <p>
175 Here's a simple example program that uses POSIX threads:
176 </p>
177
178 <pre caption="Sample program using POSIX threads">
#include &lt;pthread.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdio.h&gt;   <comment>/* for printf() */</comment>

void *thread_function(void *arg) {
  int i;
  for ( i=0; i&lt;20; i++ ) {
    printf("Thread says hi!\n");
    sleep(1);
  }
  return NULL;
}

int main(void) {

  pthread_t mythread;

  if ( pthread_create( &amp;mythread, NULL, thread_function, NULL) ) {
    printf("error creating thread.");
    abort();
  }

  if ( pthread_join ( mythread, NULL ) ) {
    printf("error joining thread.");
    abort();
  }

  exit(0);

}
209 </pre>
210
211 <p>
212 To compile this program, simply save it as thread1.c and type:
213 </p>
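
<p>
The exact command varies by compiler and platform, but a typical GNU/Linux
invocation simply links against the pthread library:
</p>

<pre caption="Compiling thread1.c (typical invocation)">
$ <i>gcc thread1.c -o thread1 -lpthread</i>
</pre>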
214
215
216
217
218 1.1 xml/htdocs/doc/en/articles/l-posix2.xml
219
220 file : http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix2.xml?rev=1.1&content-type=text/x-cvsweb-markup&cvsroot=gentoo
221 plain: http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix2.xml?rev=1.1&content-type=text/plain&cvsroot=gentoo
222
223 Index: l-posix2.xml
224 ===================================================================
225 <?xml version='1.0' encoding="UTF-8"?>
226 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/articles/l-posix2.xml,v 1.1 2005/08/03 10:36:19 neysx Exp $ -->
227 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
228
229 <guide link="/doc/en/articles/l-posix2.xml">
230 <title>POSIX threads explained, part 2</title>
231
232 <author title="Author">
233 <mail link="drobbins@g.o">Daniel Robbins</mail>
234 </author>
235 <author title="Editor">
236 <mail link="rane@××××××.pl">Łukasz Damentko</mail>
237 </author>
238
239 <abstract>
240 POSIX threads are a great way to increase the responsiveness and performance of
241 your code. In this second article of a three-part series, Daniel Robbins shows
242 you how to protect the integrity of shared data structures in your threaded code
243 by using nifty little things called mutexes.
244 </abstract>
245
246 <!-- The original version of this article was published on IBM developerWorks,
247 and is property of Westtech Information Services. This document is an updated
248 version of the original article, and contains various improvements made by the
249 Gentoo Linux Documentation team -->
250
251 <version>1.0</version>
252 <date>2005-07-27</date>
253
254 <chapter>
255 <title>The little things called mutexes</title>
256 <section id="thread3c">
257 <title>Mutex me!</title>
258 <body>
259
260 <note>
261 The original version of this article was published on IBM developerWorks, and is
262 property of Westtech Information Services. This document is an updated version
263 of the original article, and contains various improvements made by the Gentoo
264 Linux Documentation team.
265 </note>
266
267 <p>
268 In my <uri link="/doc/en/articles/l-posix1.xml">previous article</uri>, I talked
269 about threaded code that did unusual and unexpected things. Two threads each
270 incremented a global variable twenty times. The variable was supposed to end up
271 with a value of 40, but ended up with a value of 21 instead. What happened? The
272 problem occurred because one thread repeatedly "cancelled out" the increment
273 performed by the other thread. Let's take a look at some corrected code that
274 uses a <b>mutex</b> to solve the problem:
275 </p>
276
277 <pre caption="thread3.c">
#include &lt;pthread.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdio.h&gt;

int myglobal;
pthread_mutex_t mymutex=PTHREAD_MUTEX_INITIALIZER;

void *thread_function(void *arg) {
  int i,j;
  for ( i=0; i&lt;20; i++ ) {
    pthread_mutex_lock(&amp;mymutex);
    j=myglobal;
    j=j+1;
    printf(".");
    fflush(stdout);
    sleep(1);
    myglobal=j;
    pthread_mutex_unlock(&amp;mymutex);
  }
  return NULL;
}

int main(void) {

  pthread_t mythread;
  int i;

  if ( pthread_create( &amp;mythread, NULL, thread_function, NULL) ) {
    printf("error creating thread.");
    abort();
  }

  for ( i=0; i&lt;20; i++) {
    pthread_mutex_lock(&amp;mymutex);
    myglobal=myglobal+1;
    pthread_mutex_unlock(&amp;mymutex);
    printf("o");
    fflush(stdout);
    sleep(1);
  }

  if ( pthread_join ( mythread, NULL ) ) {
    printf("error joining thread.");
    abort();
  }

  printf("\nmyglobal equals %d\n",myglobal);

  exit(0);

}
330 </pre>
331
332 </body>
333 </section>
334 <section>
335 <title>Comprehension time</title>
336 <body>
337
338 <p>
339 If you compare this code to the version in my <uri
340 link="/doc/en/articles/l-posix1.xml">previous article</uri>, you'll notice the
341 addition of the calls pthread_mutex_lock() and pthread_mutex_unlock(). These
342 calls perform a much-needed function in threaded programs. They provide a means
343 of mutual exclusion (hence the name). No two threads can have the same mutex
344 locked at the same time.
345 </p>
346
347 <p>
348 This is how mutexes work. If thread "a" tries to lock a mutex while thread "b"
349 has the same mutex locked, thread "a" goes to sleep. As soon as thread "b"
350 releases the mutex (via a pthread_mutex_unlock() call), thread "a" will be able
351 to lock the mutex (in other words, it will return from the pthread_mutex_lock()
352 call with the mutex locked). Likewise, if thread "c" tries to lock the mutex
353 while thread "a" is holding it, thread "c" will also be put to sleep
354 temporarily. All threads that go to sleep from calling pthread_mutex_lock() on
355 an already-locked mutex will "queue up" for access to that mutex.
356 </p>
357
358 <p>
359 pthread_mutex_lock() and pthread_mutex_unlock() are normally used to protect
360 data structures. That is, you make sure that only one thread at a time can
361 access a certain data structure by locking and unlocking it. As you may have
362 guessed, if a thread tries to lock a mutex that nobody else has locked, the POSIX
363 threads library grants the lock right away, without putting the thread to sleep.
364 </p>
365
366 <figure link="/images/docs/l-posix-mutex.gif" caption="For your enjoyment, four
367 znurts re-enact a scene from recent pthread_mutex_lock() calls"/>
368
369 <p>
370 The thread in this image that has the mutex locked gets to access the complex
371 data structure without worrying about having other threads mess with it at the
372 same time. The data structure is in effect "frozen" until the mutex is unlocked.
373 It's as if the pthread_mutex_lock() and pthread_mutex_unlock() calls are "under
374 construction" signs that surround a particular piece of shared data that's being
375 modified or read. The calls act as a warning to other threads to go to sleep and
376 wait their turn for the mutex lock. Of course, this is only true if you surround
377 every read and write to a particular data structure with calls to
378 pthread_mutex_lock() and pthread_mutex_unlock().
379 </p>
380
381 </body>
382 </section>
383 <section>
384 <title>Why mutex at all?</title>
385 <body>
386
387 <p>
388 Sounds interesting, but why exactly do we want to put our threads to sleep?
389 After all, isn't the main advantage of threads their ability to work
390 independently and in many cases simultaneously? Yes, that's completely true.
391 However, every non-trivial threads program will require at least some use of
392 mutexes. Let's refer to our example program again to understand why.
393 </p>
394
395 <p>
396 If you take a look at thread_function(), you'll notice that the mutex is locked
397 at the beginning of the loop and released at the very end. In this example
398 program, mymutex is used to protect the value of myglobal. If you look carefully
399 at thread_function() you'll notice that the increment code copies myglobal to a
400 local variable, increments the local variable, sleeps for one second, and only
401 then copies the local value back to myglobal. Without the mutex,
402 thread_function() will overwrite the incremented value when it wakes up if our
403 main thread increments myglobal during thread_function()'s one-second nap. Using
404 a mutex ensures that this doesn't happen. (In case you're wondering, I added the
405 one-second delay to trigger a flawed result. There is no real reason for
406 thread_function() to go to sleep for one second before writing the local value
407 back to myglobal.) Our new program, which uses a mutex, produces the desired result:
408 </p>
409
410 <pre caption="Output of program using mutex">
$ <i>./thread3</i>
o..o..o.o..o..o.o.o.o.o..o..o..o.ooooooo
myglobal equals 40
414 </pre>
415
416 <p>
417 To further explore this extremely important concept, let's take a look at the
418 increment code from our program:
419 </p>
420
421 <pre caption="Incremented code">
422 thread_function() increment code:
423 j=myglobal;
424
425
426
427 1.1 xml/htdocs/doc/en/articles/l-posix3.xml
428
429 file : http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix3.xml?rev=1.1&content-type=text/x-cvsweb-markup&cvsroot=gentoo
430 plain: http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/l-posix3.xml?rev=1.1&content-type=text/plain&cvsroot=gentoo
431
432 Index: l-posix3.xml
433 ===================================================================
434 <?xml version='1.0' encoding="UTF-8"?>
435 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/articles/l-posix3.xml,v 1.1 2005/08/03 10:36:19 neysx Exp $ -->
436 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
437
438 <guide link="/doc/en/articles/l-posix3.xml">
439 <title>POSIX threads explained, part 3</title>
440
441 <author title="Author">
442 <mail link="drobbins@g.o">Daniel Robbins</mail>
443 </author>
444 <author title="Editor">
445 <mail link="rane@××××××.pl">Łukasz Damentko</mail>
446 </author>
447
448 <abstract>
449 In this article, the last of a three-part series on POSIX threads, Daniel takes
450 a good look at how to use condition variables. Condition variables are POSIX
451 thread structures that allow you to "wake up" threads when certain conditions
452 are met. You can think of them as a thread-safe form of signalling. Daniel wraps
453 up the article by using all that you've learned so far to implement a
454 multi-threaded work crew application.
455 </abstract>
456
457 <!-- The original version of this article was published on IBM developerWorks,
458 and is property of Westtech Information Services. This document is an updated
459 version of the original article, and contains various improvements made by the
460 Gentoo Linux Documentation team -->
461
462 <version>1.0</version>
463 <date>2005-07-28</date>
464
465 <chapter>
466 <title>Improve efficiency with condition variables</title>
467 <section>
468 <title>Condition variables explained</title>
469 <body>
470
471 <note>
472 The original version of this article was published on IBM developerWorks, and is
473 property of Westtech Information Services. This document is an updated version
474 of the original article, and contains various improvements made by the Gentoo
475 Linux Documentation team.
476 </note>
477
478 <p>
479 I ended my <uri link="/doc/en/articles/l-posix2.xml">previous article</uri> by
480 describing a particular dilemma: how does a thread deal with a situation where
481 it is waiting for a specific condition to become true? It could repeatedly lock
482 and unlock a mutex, each time checking a shared data structure for a certain
483 value. But this is a waste of time and resources, and this form of busy polling
484 is extremely inefficient. The best way to do this is to use the
485 pthread_cond_wait() call to wait on a particular condition to become true.
486 </p>
487
488 <p>
489 It's important to understand what pthread_cond_wait() does -- it's the heart of
490 the POSIX threads signalling system, and also the hardest part to understand.
491 </p>
492
493 <p>
494 First, let's consider a scenario where a thread has locked a mutex, in order to
495 take a look at a linked list, and the list happens to be empty. This particular
496 thread can't do anything -- it's designed to remove a node from the list, and
497 there are no nodes available. So, this is what it does.
498 </p>
499
500 <p>
501 While still holding the mutex lock, our thread will call
502 pthread_cond_wait(&amp;mycond,&amp;mymutex). The pthread_cond_wait() call is
503 rather complex, so we'll step through each of its operations one at a time.
504 </p>
505
506 <p>
507 The first thing pthread_cond_wait() does is simultaneously unlock the mutex
508 mymutex (so that other threads can modify the linked list) and wait on the
509 condition mycond (so that pthread_cond_wait() will wake up when it is
510 "signalled" by another thread). Now that the mutex is unlocked, other threads
511 can access and modify the linked list, possibly adding items.
512 </p>
513
514 <p>
515 At this point, the pthread_cond_wait() call has not yet returned. Unlocking the
516 mutex happens immediately, but waiting on the condition mycond is normally a
517 blocking operation, meaning that our thread will go to sleep, consuming no CPU
518 cycles until it is woken up. This is exactly what we want to happen. Our thread
519 is sleeping, waiting for a particular condition to become true, without
520 performing any kind of busy polling that would waste CPU time. From our thread's
521 perspective, it's simply waiting for the pthread_cond_wait() call to return.
522 </p>
523
524 <p>
525 Now, to continue the explanation, let's say that another thread (call it "thread
526 2") locks mymutex and adds an item to our linked list. Immediately after
527 unlocking the mutex, thread 2 calls the function
528 pthread_cond_broadcast(&amp;mycond). By doing so, thread 2 will cause all
529 threads waiting on the mycond condition variable to immediately wake up. This
530 means that our first thread (which is in the middle of a pthread_cond_wait()
531 call) will now wake up.
532 </p>
533
534 <p>
535 Now, let's take a look at what happens to our first thread. After thread 2
536 called pthread_cond_broadcast(&amp;mycond), you might think that thread 1's
537 pthread_cond_wait() will immediately return. Not so! Instead,
538 pthread_cond_wait() will perform one last operation: relock mymutex. Once
539 pthread_cond_wait() has the lock, it will then return and allow thread 1 to
540 continue execution. At that point, it can immediately check the list for any
541 interesting changes.
542 </p>
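
<p>
Putting those steps together, the waiting side and the waking side of this
arrangement follow a very regular pattern. The sketch below is not part of the
original text; it reuses the mymutex/mycond names from the discussion and the
queue_put()/queue_get() routines listed later in this article. Note that the
wait is wrapped in a loop, because another thread may empty the list again
before we get to it, and pthread_cond_wait() may also wake up spuriously.
</p>

<pre caption="Typical wait/wake pattern with a condition variable (sketch)">
pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  mycond  = PTHREAD_COND_INITIALIZER;
queue myqueue;

<comment>/* consumer: wait until the list has something for us */</comment>
node *mynode;
pthread_mutex_lock(&amp;mymutex);
while ( (mynode = queue_get(&amp;myqueue)) == NULL ) {
  pthread_cond_wait(&amp;mycond, &amp;mymutex);
}
pthread_mutex_unlock(&amp;mymutex);
<comment>/* ... work on mynode ... */</comment>

<comment>/* producer: add a node, then wake up any waiters */</comment>
pthread_mutex_lock(&amp;mymutex);
queue_put(&amp;myqueue, mynode);
pthread_mutex_unlock(&amp;mymutex);
pthread_cond_broadcast(&amp;mycond);
</pre>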
543
544 </body>
545 </section>
546 <section>
547 <title>Stop and review!</title>
548 <body>
549
550 <!-- These bits do not make any sense to me, commented out
551
552 <pre caption="queue.h">
553 pthread_cond_t mycond;
554 </pre>
555
556 <pre caption="control.h">
557 pthread_cond_t mycond;
558
559 pthread_cond_init(&amp;mycond,NULL);
560
561 pthread_cond_destroy(&amp;mycond);
562
563 pthread_cond_wait(&amp;mycond, &amp;mymutex);
564
565 pthread_cond_broadcast(&amp;mycond);
566
567 pthread_cond_signal(&amp;mycond);
568 </pre>
569 -->
570 <pre caption="queue.h">
/* queue.h
<comment>** Copyright 2000 Daniel Robbins, Gentoo Technologies, Inc.
** Author: Daniel Robbins
** Date: 16 Jun 2000</comment>
*/

typedef struct node {
  struct node *next;
} node;

typedef struct queue {
  node *head, *tail;
} queue;

void queue_init(queue *myroot);
void queue_put(queue *myroot, node *mynode);
node *queue_get(queue *myroot);
585 </pre>
586
587 <pre caption="queue.c">
/* queue.c
<comment>** Copyright 2000 Daniel Robbins, Gentoo Technologies, Inc.
** Author: Daniel Robbins
** Date: 16 Jun 2000
**
** This set of queue functions was originally thread-aware. I
** redesigned the code to make this set of queue routines
** thread-ignorant (just a generic, boring yet very fast set of queue
** routines). Why the change? Because it makes more sense to have
** the thread support as an optional add-on. Consider a situation
** where you want to add 5 nodes to the queue. With the
** thread-enabled version, each call to queue_put() would
** automatically lock and unlock the queue mutex 5 times -- that's a
** lot of unnecessary overhead. However, by moving the thread stuff
** out of the queue routines, the caller can lock the mutex once at
** the beginning, then insert 5 items, and then unlock at the end.
** Moving the lock/unlock code out of the queue functions allows for
** optimizations that aren't possible otherwise. It also makes this
** code useful for non-threaded applications.
**
** We can easily thread-enable this data structure by using the
** data_control type defined in control.c and control.h.</comment> */

#include &lt;stdio.h&gt;
#include "queue.h"

void queue_init(queue *myroot) {
  myroot->head=NULL;
  myroot->tail=NULL;
}

void queue_put(queue *myroot,node *mynode) {
  mynode->next=NULL;
  if (myroot->tail!=NULL)
    myroot->tail->next=mynode;
  myroot->tail=mynode;
  if (myroot->head==NULL)
    myroot->head=mynode;
}

node *queue_get(queue *myroot) {
  //get from root
  node *mynode;
  mynode=myroot->head;
  if (myroot->head!=NULL)
    myroot->head=myroot->head->next;
  return mynode;
}
632 </pre>
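
<p>
As a quick illustration of how these routines are meant to be used (again, a
sketch that is not part of the original listing, and the mynumber type is just
a hypothetical payload), the caller embeds a node at the start of its own
structure and casts when putting and getting items.
</p>

<pre caption="Non-threaded usage sketch of the queue routines">
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include "queue.h"

<comment>/* hypothetical payload: the embedded node comes first, so a (node *)
** and a (mynumber *) point to the same address */</comment>
typedef struct mynumber {
  node mynode;
  int value;
} mynumber;

int main(void) {
  queue myqueue;
  mynumber *item;
  int i;

  queue_init(&amp;myqueue);
  for ( i=0; i&lt;5; i++ ) {
    item = malloc(sizeof(mynumber));
    item->value = i;
    queue_put(&amp;myqueue, (node *) item);
  }
  while ( (item = (mynumber *) queue_get(&amp;myqueue)) != NULL ) {
    printf("%d\n", item->value);
    free(item);
  }
  return 0;
}
</pre>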
633
634
635
636 --
637 gentoo-doc-cvs@g.o mailing list