| 1 | @c -*-texinfo-*- |
| 2 | @c This is part of the GNU Guile Reference Manual. |
| 3 | @c Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2007, 2009, 2010, 2012, 2013 |
| 4 | @c Free Software Foundation, Inc. |
| 5 | @c See the file guile.texi for copying conditions. |
| 6 | |
| 7 | @node Scheduling |
| 8 | @section Threads, Mutexes, Asyncs and Dynamic Roots |
| 9 | |
| 10 | @menu |
| 11 | * Arbiters:: Synchronization primitives. |
| 12 | * Asyncs:: Asynchronous procedure invocation. |
| 13 | * Threads:: Multiple threads of execution. |
| 14 | * Mutexes and Condition Variables:: Synchronization primitives. |
| 15 | * Blocking:: How to block properly in guile mode. |
| 16 | * Critical Sections:: Avoiding concurrency and reentries. |
| 17 | * Fluids and Dynamic States:: Thread-local variables, etc. |
| 18 | * Parameters:: Dynamic scoping in Scheme. |
| 19 | * Futures:: Fine-grain parallelism. |
| 20 | * Parallel Forms:: Parallel execution of forms. |
| 21 | @end menu |
| 22 | |
| 23 | |
| 24 | @node Arbiters |
| 25 | @subsection Arbiters |
| 26 | @cindex arbiters |
| 27 | |
| 28 | Arbiters are synchronization objects, they can be used by threads to |
| 29 | control access to a shared resource. An arbiter can be locked to |
| 30 | indicate a resource is in use, and unlocked when done. |
| 31 | |
| 32 | An arbiter is like a light-weight mutex (@pxref{Mutexes and Condition |
| 33 | Variables}). It uses less memory and may be faster, but there's no |
| 34 | way for a thread to block waiting on an arbiter, it can only test and |
| 35 | get the status returned. |
| 36 | |
| 37 | @deffn {Scheme Procedure} make-arbiter name |
| 38 | @deffnx {C Function} scm_make_arbiter (name) |
| 39 | Return an object of type arbiter and name @var{name}. Its |
| 40 | state is initially unlocked. Arbiters are a way to achieve |
| 41 | process synchronization. |
| 42 | @end deffn |
| 43 | |
| 44 | @deffn {Scheme Procedure} try-arbiter arb |
| 45 | @deffnx {C Function} scm_try_arbiter (arb) |
| 46 | If @var{arb} is unlocked, then lock it and return @code{#t}. |
| 47 | If @var{arb} is already locked, then do nothing and return |
| 48 | @code{#f}. |
| 49 | @end deffn |
| 50 | |
| 51 | @deffn {Scheme Procedure} release-arbiter arb |
| 52 | @deffnx {C Function} scm_release_arbiter (arb) |
| 53 | If @var{arb} is locked, then unlock it and return @code{#t}. If |
| 54 | @var{arb} is already unlocked, then do nothing and return @code{#f}. |
| 55 | |
| 56 | Typical usage is for the thread which locked an arbiter to later |
| 57 | release it, but that's not required, any thread can release it. |
| 58 | @end deffn |
| 59 | |
| 60 | |
| 61 | @node Asyncs |
| 62 | @subsection Asyncs |
| 63 | |
| 64 | @cindex asyncs |
| 65 | @cindex user asyncs |
| 66 | @cindex system asyncs |
| 67 | |
| 68 | Asyncs are a means of deferring the execution of Scheme code until it is |
| 69 | safe to do so. |
| 70 | |
| 71 | Guile provides two kinds of asyncs that share the basic concept but are |
| 72 | otherwise quite different: system asyncs and user asyncs. System asyncs |
| 73 | are integrated into the core of Guile and are executed automatically |
| 74 | when the system is in a state to allow the execution of Scheme code. |
| 75 | For example, it is not possible to execute Scheme code in a POSIX signal |
| 76 | handler, but such a signal handler can queue a system async to be |
| 77 | executed in the near future, when it is safe to do so. |
| 78 | |
| 79 | System asyncs can also be queued for threads other than the current one. |
| 80 | This way, you can cause threads to asynchronously execute arbitrary |
| 81 | code. |
| 82 | |
| 83 | User asyncs offer a convenient means of queuing procedures for future |
| 84 | execution and triggering this execution. They will not be executed |
| 85 | automatically. |
| 86 | |
| 87 | @menu |
| 88 | * System asyncs:: |
| 89 | * User asyncs:: |
| 90 | @end menu |
| 91 | |
| 92 | @node System asyncs |
| 93 | @subsubsection System asyncs |
| 94 | |
| 95 | To cause the future asynchronous execution of a procedure in a given |
| 96 | thread, use @code{system-async-mark}. |
| 97 | |
| 98 | Automatic invocation of system asyncs can be temporarily disabled by |
| 99 | calling @code{call-with-blocked-asyncs}. This function works by |
| 100 | temporarily increasing the @emph{async blocking level} of the current |
| 101 | thread while a given procedure is running. The blocking level starts |
| 102 | out at zero, and whenever a safe point is reached, a blocking level |
| 103 | greater than zero will prevent the execution of queued asyncs. |
| 104 | |
| 105 | Analogously, the procedure @code{call-with-unblocked-asyncs} will |
| 106 | temporarily decrease the blocking level of the current thread. You |
| 107 | can use it when you want to disable asyncs by default and only allow |
| 108 | them temporarily. |
| 109 | |
| 110 | In addition to the C versions of @code{call-with-blocked-asyncs} and |
| 111 | @code{call-with-unblocked-asyncs}, C code can use |
| 112 | @code{scm_dynwind_block_asyncs} and @code{scm_dynwind_unblock_asyncs} |
| 113 | inside a @dfn{dynamic context} (@pxref{Dynamic Wind}) to block or |
| 114 | unblock system asyncs temporarily. |
| 115 | |
| 116 | @deffn {Scheme Procedure} system-async-mark proc [thread] |
| 117 | @deffnx {C Function} scm_system_async_mark (proc) |
| 118 | @deffnx {C Function} scm_system_async_mark_for_thread (proc, thread) |
| 119 | Mark @var{proc} (a procedure with zero arguments) for future execution |
| 120 | in @var{thread}. When @var{proc} has already been marked for |
| 121 | @var{thread} but has not been executed yet, this call has no effect. |
| 122 | When @var{thread} is omitted, the thread that called |
| 123 | @code{system-async-mark} is used. |
| 124 | |
| 125 | This procedure is not safe to be called from signal handlers. Use |
| 126 | @code{scm_sigaction} or @code{scm_sigaction_for_thread} to install |
| 127 | signal handlers. |
| 128 | @end deffn |
| 129 | |
| 130 | @deffn {Scheme Procedure} call-with-blocked-asyncs proc |
| 131 | @deffnx {C Function} scm_call_with_blocked_asyncs (proc) |
| 132 | Call @var{proc} and block the execution of system asyncs by one level |
| 133 | for the current thread while it is running. Return the value returned |
| 134 | by @var{proc}. For the first two variants, call @var{proc} with no |
| 135 | arguments; for the third, call it with @var{data}. |
| 136 | @end deffn |
| 137 | |
| 138 | @deftypefn {C Function} {void *} scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data) |
| 139 | The same but with a C function @var{proc} instead of a Scheme thunk. |
| 140 | @end deftypefn |
| 141 | |
| 142 | @deffn {Scheme Procedure} call-with-unblocked-asyncs proc |
| 143 | @deffnx {C Function} scm_call_with_unblocked_asyncs (proc) |
| 144 | Call @var{proc} and unblock the execution of system asyncs by one |
| 145 | level for the current thread while it is running. Return the value |
| 146 | returned by @var{proc}. For the first two variants, call @var{proc} |
| 147 | with no arguments; for the third, call it with @var{data}. |
| 148 | @end deffn |
| 149 | |
| 150 | @deftypefn {C Function} {void *} scm_c_call_with_unblocked_asyncs (void *(*proc) (void *data), void *data) |
| 151 | The same but with a C function @var{proc} instead of a Scheme thunk. |
| 152 | @end deftypefn |
| 153 | |
| 154 | @deftypefn {C Function} void scm_dynwind_block_asyncs () |
| 155 | During the current dynwind context, increase the blocking of asyncs by |
| 156 | one level. This function must be used inside a pair of calls to |
| 157 | @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic |
| 158 | Wind}). |
| 159 | @end deftypefn |
| 160 | |
| 161 | @deftypefn {C Function} void scm_dynwind_unblock_asyncs () |
| 162 | During the current dynwind context, decrease the blocking of asyncs by |
| 163 | one level. This function must be used inside a pair of calls to |
| 164 | @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic |
| 165 | Wind}). |
| 166 | @end deftypefn |
| 167 | |
| 168 | @node User asyncs |
| 169 | @subsubsection User asyncs |
| 170 | |
| 171 | A user async is a pair of a thunk (a parameterless procedure) and a |
| 172 | mark. Setting the mark on a user async will cause the thunk to be |
| 173 | executed when the user async is passed to @code{run-asyncs}. Setting |
| 174 | the mark more than once is satisfied by one execution of the thunk. |
| 175 | |
| 176 | User asyncs are created with @code{async}. They are marked with |
| 177 | @code{async-mark}. |
| 178 | |
| 179 | @deffn {Scheme Procedure} async thunk |
| 180 | @deffnx {C Function} scm_async (thunk) |
| 181 | Create a new user async for the procedure @var{thunk}. |
| 182 | @end deffn |
| 183 | |
| 184 | @deffn {Scheme Procedure} async-mark a |
| 185 | @deffnx {C Function} scm_async_mark (a) |
| 186 | Mark the user async @var{a} for future execution. |
| 187 | @end deffn |
| 188 | |
| 189 | @deffn {Scheme Procedure} run-asyncs list_of_a |
| 190 | @deffnx {C Function} scm_run_asyncs (list_of_a) |
| 191 | Execute all thunks from the marked asyncs of the list @var{list_of_a}. |
| 192 | @end deffn |
| 193 | |
| 194 | @node Threads |
| 195 | @subsection Threads |
| 196 | @cindex threads |
| 197 | @cindex Guile threads |
| 198 | @cindex POSIX threads |
| 199 | |
| 200 | Guile supports POSIX threads, unless it was configured with |
| 201 | @code{--without-threads} or the host lacks POSIX thread support. When |
| 202 | thread support is available, the @code{threads} feature is provided |
| 203 | (@pxref{Feature Manipulation, @code{provided?}}). |
| 204 | |
| 205 | The procedures below manipulate Guile threads, which are wrappers around |
| 206 | the system's POSIX threads. For application-level parallelism, using |
| 207 | higher-level constructs, such as futures, is recommended |
| 208 | (@pxref{Futures}). |
| 209 | |
| 210 | @deffn {Scheme Procedure} all-threads |
| 211 | @deffnx {C Function} scm_all_threads () |
| 212 | Return a list of all threads. |
| 213 | @end deffn |
| 214 | |
| 215 | @deffn {Scheme Procedure} current-thread |
| 216 | @deffnx {C Function} scm_current_thread () |
| 217 | Return the thread that called this function. |
| 218 | @end deffn |
| 219 | |
| 220 | @c begin (texi-doc-string "guile" "call-with-new-thread") |
| 221 | @deffn {Scheme Procedure} call-with-new-thread thunk [handler] |
| 222 | Call @code{thunk} in a new thread and with a new dynamic state, |
| 223 | returning the new thread. The procedure @var{thunk} is called via |
| 224 | @code{with-continuation-barrier}. |
| 225 | |
| 226 | When @var{handler} is specified, then @var{thunk} is called from |
| 227 | within a @code{catch} with tag @code{#t} that has @var{handler} as its |
| 228 | handler. This catch is established inside the continuation barrier. |
| 229 | |
| 230 | Once @var{thunk} or @var{handler} returns, the return value is made |
| 231 | the @emph{exit value} of the thread and the thread is terminated. |
| 232 | @end deffn |
| 233 | |
| 234 | @deftypefn {C Function} SCM scm_spawn_thread (scm_t_catch_body body, void *body_data, scm_t_catch_handler handler, void *handler_data) |
| 235 | Call @var{body} in a new thread, passing it @var{body_data}, returning |
| 236 | the new thread. The function @var{body} is called via |
| 237 | @code{scm_c_with_continuation_barrier}. |
| 238 | |
| 239 | When @var{handler} is non-@code{NULL}, @var{body} is called via |
| 240 | @code{scm_internal_catch} with tag @code{SCM_BOOL_T} that has |
| 241 | @var{handler} and @var{handler_data} as the handler and its data. This |
| 242 | catch is established inside the continuation barrier. |
| 243 | |
| 244 | Once @var{body} or @var{handler} returns, the return value is made the |
| 245 | @emph{exit value} of the thread and the thread is terminated. |
| 246 | @end deftypefn |
| 247 | |
| 248 | @deffn {Scheme Procedure} thread? obj |
| 249 | @deffnx {C Function} scm_thread_p (obj) |
| 250 | Return @code{#t} ff @var{obj} is a thread; otherwise, return |
| 251 | @code{#f}. |
| 252 | @end deffn |
| 253 | |
| 254 | @c begin (texi-doc-string "guile" "join-thread") |
| 255 | @deffn {Scheme Procedure} join-thread thread [timeout [timeoutval]] |
| 256 | @deffnx {C Function} scm_join_thread (thread) |
| 257 | @deffnx {C Function} scm_join_thread_timed (thread, timeout, timeoutval) |
| 258 | Wait for @var{thread} to terminate and return its exit value. Threads |
| 259 | that have not been created with @code{call-with-new-thread} or |
| 260 | @code{scm_spawn_thread} have an exit value of @code{#f}. When |
| 261 | @var{timeout} is given, it specifies a point in time where the waiting |
| 262 | should be aborted. It can be either an integer as returned by |
| 263 | @code{current-time} or a pair as returned by @code{gettimeofday}. |
| 264 | When the waiting is aborted, @var{timeoutval} is returned (if it is |
| 265 | specified; @code{#f} is returned otherwise). |
| 266 | @end deffn |
| 267 | |
| 268 | @deffn {Scheme Procedure} thread-exited? thread |
| 269 | @deffnx {C Function} scm_thread_exited_p (thread) |
| 270 | Return @code{#t} if @var{thread} has exited, or @code{#f} otherwise. |
| 271 | @end deffn |
| 272 | |
| 273 | @c begin (texi-doc-string "guile" "yield") |
| 274 | @deffn {Scheme Procedure} yield |
| 275 | If one or more threads are waiting to execute, calling yield forces an |
| 276 | immediate context switch to one of them. Otherwise, yield has no effect. |
| 277 | @end deffn |
| 278 | |
| 279 | @deffn {Scheme Procedure} cancel-thread thread |
| 280 | @deffnx {C Function} scm_cancel_thread (thread) |
| 281 | Asynchronously notify @var{thread} to exit. Immediately after |
| 282 | receiving this notification, @var{thread} will call its cleanup handler |
| 283 | (if one has been set) and then terminate, aborting any evaluation that |
| 284 | is in progress. |
| 285 | |
| 286 | Because Guile threads are isomorphic with POSIX threads, @var{thread} |
| 287 | will not receive its cancellation signal until it reaches a cancellation |
| 288 | point. See your operating system's POSIX threading documentation for |
| 289 | more information on cancellation points; note that in Guile, unlike |
| 290 | native POSIX threads, a thread can receive a cancellation notification |
| 291 | while attempting to lock a mutex. |
| 292 | @end deffn |
| 293 | |
| 294 | @deffn {Scheme Procedure} set-thread-cleanup! thread proc |
| 295 | @deffnx {C Function} scm_set_thread_cleanup_x (thread, proc) |
| 296 | Set @var{proc} as the cleanup handler for the thread @var{thread}. |
| 297 | @var{proc}, which must be a thunk, will be called when @var{thread} |
| 298 | exits, either normally or by being canceled. Thread cleanup handlers |
| 299 | can be used to perform useful tasks like releasing resources, such as |
| 300 | locked mutexes, when thread exit cannot be predicted. |
| 301 | |
| 302 | The return value of @var{proc} will be set as the @emph{exit value} of |
| 303 | @var{thread}. |
| 304 | |
| 305 | To remove a cleanup handler, pass @code{#f} for @var{proc}. |
| 306 | @end deffn |
| 307 | |
| 308 | @deffn {Scheme Procedure} thread-cleanup thread |
| 309 | @deffnx {C Function} scm_thread_cleanup (thread) |
| 310 | Return the cleanup handler currently installed for the thread |
| 311 | @var{thread}. If no cleanup handler is currently installed, |
| 312 | thread-cleanup returns @code{#f}. |
| 313 | @end deffn |
| 314 | |
| 315 | Higher level thread procedures are available by loading the |
| 316 | @code{(ice-9 threads)} module. These provide standardized |
| 317 | thread creation. |
| 318 | |
| 319 | @deffn macro make-thread proc arg @dots{} |
| 320 | Apply @var{proc} to @var{arg} @dots{} in a new thread formed by |
| 321 | @code{call-with-new-thread} using a default error handler that display |
| 322 | the error to the current error port. The @var{arg} @dots{} |
| 323 | expressions are evaluated in the new thread. |
| 324 | @end deffn |
| 325 | |
| 326 | @deffn macro begin-thread expr1 expr2 @dots{} |
| 327 | Evaluate forms @var{expr1} @var{expr2} @dots{} in a new thread formed by |
| 328 | @code{call-with-new-thread} using a default error handler that display |
| 329 | the error to the current error port. |
| 330 | @end deffn |
| 331 | |
| 332 | @node Mutexes and Condition Variables |
| 333 | @subsection Mutexes and Condition Variables |
| 334 | @cindex mutex |
| 335 | @cindex condition variable |
| 336 | |
| 337 | A mutex is a thread synchronization object, it can be used by threads |
| 338 | to control access to a shared resource. A mutex can be locked to |
| 339 | indicate a resource is in use, and other threads can then block on the |
| 340 | mutex to wait for the resource (or can just test and do something else |
| 341 | if not available). ``Mutex'' is short for ``mutual exclusion''. |
| 342 | |
| 343 | There are two types of mutexes in Guile, ``standard'' and |
| 344 | ``recursive''. They're created by @code{make-mutex} and |
| 345 | @code{make-recursive-mutex} respectively, the operation functions are |
| 346 | then common to both. |
| 347 | |
| 348 | Note that for both types of mutex there's no protection against a |
| 349 | ``deadly embrace''. For instance if one thread has locked mutex A and |
| 350 | is waiting on mutex B, but another thread owns B and is waiting on A, |
| 351 | then an endless wait will occur (in the current implementation). |
| 352 | Acquiring requisite mutexes in a fixed order (like always A before B) |
| 353 | in all threads is one way to avoid such problems. |
| 354 | |
| 355 | @sp 1 |
| 356 | @deffn {Scheme Procedure} make-mutex flag @dots{} |
| 357 | @deffnx {C Function} scm_make_mutex () |
| 358 | @deffnx {C Function} scm_make_mutex_with_flags (SCM flags) |
| 359 | Return a new mutex. It is initially unlocked. If @var{flag} @dots{} is |
| 360 | specified, it must be a list of symbols specifying configuration flags |
| 361 | for the newly-created mutex. The supported flags are: |
| 362 | @table @code |
| 363 | @item unchecked-unlock |
| 364 | Unless this flag is present, a call to `unlock-mutex' on the returned |
| 365 | mutex when it is already unlocked will cause an error to be signalled. |
| 366 | |
| 367 | @item allow-external-unlock |
| 368 | Allow the returned mutex to be unlocked by the calling thread even if |
| 369 | it was originally locked by a different thread. |
| 370 | |
| 371 | @item recursive |
| 372 | The returned mutex will be recursive. |
| 373 | |
| 374 | @end table |
| 375 | @end deffn |
| 376 | |
| 377 | @deffn {Scheme Procedure} mutex? obj |
| 378 | @deffnx {C Function} scm_mutex_p (obj) |
| 379 | Return @code{#t} if @var{obj} is a mutex; otherwise, return |
| 380 | @code{#f}. |
| 381 | @end deffn |
| 382 | |
| 383 | @deffn {Scheme Procedure} make-recursive-mutex |
| 384 | @deffnx {C Function} scm_make_recursive_mutex () |
| 385 | Create a new recursive mutex. It is initially unlocked. Calling this |
| 386 | function is equivalent to calling `make-mutex' and specifying the |
| 387 | @code{recursive} flag. |
| 388 | @end deffn |
| 389 | |
| 390 | @deffn {Scheme Procedure} lock-mutex mutex [timeout [owner]] |
| 391 | @deffnx {C Function} scm_lock_mutex (mutex) |
| 392 | @deffnx {C Function} scm_lock_mutex_timed (mutex, timeout, owner) |
| 393 | Lock @var{mutex}. If the mutex is already locked, then block and |
| 394 | return only when @var{mutex} has been acquired. |
| 395 | |
| 396 | When @var{timeout} is given, it specifies a point in time where the |
| 397 | waiting should be aborted. It can be either an integer as returned |
| 398 | by @code{current-time} or a pair as returned by @code{gettimeofday}. |
| 399 | When the waiting is aborted, @code{#f} is returned. |
| 400 | |
| 401 | When @var{owner} is given, it specifies an owner for @var{mutex} other |
| 402 | than the calling thread. @var{owner} may also be @code{#f}, |
| 403 | indicating that the mutex should be locked but left unowned. |
| 404 | |
| 405 | For standard mutexes (@code{make-mutex}), and error is signalled if |
| 406 | the thread has itself already locked @var{mutex}. |
| 407 | |
| 408 | For a recursive mutex (@code{make-recursive-mutex}), if the thread has |
| 409 | itself already locked @var{mutex}, then a further @code{lock-mutex} |
| 410 | call increments the lock count. An additional @code{unlock-mutex} |
| 411 | will be required to finally release. |
| 412 | |
| 413 | If @var{mutex} was locked by a thread that exited before unlocking it, |
| 414 | the next attempt to lock @var{mutex} will succeed, but |
| 415 | @code{abandoned-mutex-error} will be signalled. |
| 416 | |
| 417 | When a system async (@pxref{System asyncs}) is activated for a thread |
| 418 | blocked in @code{lock-mutex}, the wait is interrupted and the async is |
| 419 | executed. When the async returns, the wait resumes. |
| 420 | @end deffn |
| 421 | |
| 422 | @deftypefn {C Function} void scm_dynwind_lock_mutex (SCM mutex) |
| 423 | Arrange for @var{mutex} to be locked whenever the current dynwind |
| 424 | context is entered and to be unlocked when it is exited. |
| 425 | @end deftypefn |
| 426 | |
| 427 | @deffn {Scheme Procedure} try-mutex mx |
| 428 | @deffnx {C Function} scm_try_mutex (mx) |
| 429 | Try to lock @var{mutex} as per @code{lock-mutex}. If @var{mutex} can |
| 430 | be acquired immediately then this is done and the return is @code{#t}. |
| 431 | If @var{mutex} is locked by some other thread then nothing is done and |
| 432 | the return is @code{#f}. |
| 433 | @end deffn |
| 434 | |
| 435 | @deffn {Scheme Procedure} unlock-mutex mutex [condvar [timeout]] |
| 436 | @deffnx {C Function} scm_unlock_mutex (mutex) |
| 437 | @deffnx {C Function} scm_unlock_mutex_timed (mutex, condvar, timeout) |
| 438 | Unlock @var{mutex}. An error is signalled if @var{mutex} is not locked |
| 439 | and was not created with the @code{unchecked-unlock} flag set, or if |
| 440 | @var{mutex} is locked by a thread other than the calling thread and was |
| 441 | not created with the @code{allow-external-unlock} flag set. |
| 442 | |
| 443 | If @var{condvar} is given, it specifies a condition variable upon |
| 444 | which the calling thread will wait to be signalled before returning. |
| 445 | (This behavior is very similar to that of |
| 446 | @code{wait-condition-variable}, except that the mutex is left in an |
| 447 | unlocked state when the function returns.) |
| 448 | |
| 449 | When @var{timeout} is also given and not false, it specifies a point in |
| 450 | time where the waiting should be aborted. It can be either an integer |
| 451 | as returned by @code{current-time} or a pair as returned by |
| 452 | @code{gettimeofday}. When the waiting is aborted, @code{#f} is |
| 453 | returned. Otherwise the function returns @code{#t}. |
| 454 | @end deffn |
| 455 | |
| 456 | @deffn {Scheme Procedure} mutex-owner mutex |
| 457 | @deffnx {C Function} scm_mutex_owner (mutex) |
| 458 | Return the current owner of @var{mutex}, in the form of a thread or |
| 459 | @code{#f} (indicating no owner). Note that a mutex may be unowned but |
| 460 | still locked. |
| 461 | @end deffn |
| 462 | |
| 463 | @deffn {Scheme Procedure} mutex-level mutex |
| 464 | @deffnx {C Function} scm_mutex_level (mutex) |
| 465 | Return the current lock level of @var{mutex}. If @var{mutex} is |
| 466 | currently unlocked, this value will be 0; otherwise, it will be the |
| 467 | number of times @var{mutex} has been recursively locked by its current |
| 468 | owner. |
| 469 | @end deffn |
| 470 | |
| 471 | @deffn {Scheme Procedure} mutex-locked? mutex |
| 472 | @deffnx {C Function} scm_mutex_locked_p (mutex) |
| 473 | Return @code{#t} if @var{mutex} is locked, regardless of ownership; |
| 474 | otherwise, return @code{#f}. |
| 475 | @end deffn |
| 476 | |
| 477 | @deffn {Scheme Procedure} make-condition-variable |
| 478 | @deffnx {C Function} scm_make_condition_variable () |
| 479 | Return a new condition variable. |
| 480 | @end deffn |
| 481 | |
| 482 | @deffn {Scheme Procedure} condition-variable? obj |
| 483 | @deffnx {C Function} scm_condition_variable_p (obj) |
| 484 | Return @code{#t} if @var{obj} is a condition variable; otherwise, |
| 485 | return @code{#f}. |
| 486 | @end deffn |
| 487 | |
| 488 | @deffn {Scheme Procedure} wait-condition-variable condvar mutex [time] |
| 489 | @deffnx {C Function} scm_wait_condition_variable (condvar, mutex, time) |
| 490 | Wait until @var{condvar} has been signalled. While waiting, |
| 491 | @var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and |
| 492 | is locked again when this function returns. When @var{time} is given, |
| 493 | it specifies a point in time where the waiting should be aborted. It |
| 494 | can be either a integer as returned by @code{current-time} or a pair |
| 495 | as returned by @code{gettimeofday}. When the waiting is aborted, |
| 496 | @code{#f} is returned. When the condition variable has in fact been |
| 497 | signalled, @code{#t} is returned. The mutex is re-locked in any case |
| 498 | before @code{wait-condition-variable} returns. |
| 499 | |
| 500 | When a system async is activated for a thread that is blocked in a |
| 501 | call to @code{wait-condition-variable}, the waiting is interrupted, |
| 502 | the mutex is locked, and the async is executed. When the async |
| 503 | returns, the mutex is unlocked again and the waiting is resumed. When |
| 504 | the thread block while re-acquiring the mutex, execution of asyncs is |
| 505 | blocked. |
| 506 | @end deffn |
| 507 | |
| 508 | @deffn {Scheme Procedure} signal-condition-variable condvar |
| 509 | @deffnx {C Function} scm_signal_condition_variable (condvar) |
| 510 | Wake up one thread that is waiting for @var{condvar}. |
| 511 | @end deffn |
| 512 | |
| 513 | @deffn {Scheme Procedure} broadcast-condition-variable condvar |
| 514 | @deffnx {C Function} scm_broadcast_condition_variable (condvar) |
| 515 | Wake up all threads that are waiting for @var{condvar}. |
| 516 | @end deffn |
| 517 | |
| 518 | @sp 1 |
| 519 | The following are higher level operations on mutexes. These are |
| 520 | available from |
| 521 | |
| 522 | @example |
| 523 | (use-modules (ice-9 threads)) |
| 524 | @end example |
| 525 | |
| 526 | @deffn macro with-mutex mutex body1 body2 @dots{} |
| 527 | Lock @var{mutex}, evaluate the body @var{body1} @var{body2} @dots{}, |
| 528 | then unlock @var{mutex}. The return value is that returned by the last |
| 529 | body form. |
| 530 | |
| 531 | The lock, body and unlock form the branches of a @code{dynamic-wind} |
| 532 | (@pxref{Dynamic Wind}), so @var{mutex} is automatically unlocked if an |
| 533 | error or new continuation exits the body, and is re-locked if |
| 534 | the body is re-entered by a captured continuation. |
| 535 | @end deffn |
| 536 | |
| 537 | @deffn macro monitor body1 body2 @dots{} |
| 538 | Evaluate the body form @var{body1} @var{body2} @dots{} with a mutex |
| 539 | locked so only one thread can execute that code at any one time. The |
| 540 | return value is the return from the last body form. |
| 541 | |
| 542 | Each @code{monitor} form has its own private mutex and the locking and |
| 543 | evaluation is as per @code{with-mutex} above. A standard mutex |
| 544 | (@code{make-mutex}) is used, which means the body must not |
| 545 | recursively re-enter the @code{monitor} form. |
| 546 | |
| 547 | The term ``monitor'' comes from operating system theory, where it |
| 548 | means a particular bit of code managing access to some resource and |
| 549 | which only ever executes on behalf of one process at any one time. |
| 550 | @end deffn |
| 551 | |
| 552 | |
| 553 | @node Blocking |
| 554 | @subsection Blocking in Guile Mode |
| 555 | |
| 556 | Up to Guile version 1.8, a thread blocked in guile mode would prevent |
| 557 | the garbage collector from running. Thus threads had to explicitly |
| 558 | leave guile mode with @code{scm_without_guile ()} before making a |
| 559 | potentially blocking call such as a mutex lock, a @code{select ()} |
| 560 | system call, etc. The following functions could be used to temporarily |
| 561 | leave guile mode or to perform some common blocking operations in a |
| 562 | supported way. |
| 563 | |
| 564 | Starting from Guile 2.0, blocked threads no longer hinder garbage |
| 565 | collection. Thus, the functions below are not needed anymore. They can |
| 566 | still be used to inform the GC that a thread is about to block, giving |
| 567 | it a (small) optimization opportunity for ``stop the world'' garbage |
| 568 | collections, should they occur while the thread is blocked. |
| 569 | |
| 570 | @deftypefn {C Function} {void *} scm_without_guile (void *(*func) (void *), void *data) |
| 571 | Leave guile mode, call @var{func} on @var{data}, enter guile mode and |
| 572 | return the result of calling @var{func}. |
| 573 | |
| 574 | While a thread has left guile mode, it must not call any libguile |
| 575 | functions except @code{scm_with_guile} or @code{scm_without_guile} and |
| 576 | must not use any libguile macros. Also, local variables of type |
| 577 | @code{SCM} that are allocated while not in guile mode are not |
| 578 | protected from the garbage collector. |
| 579 | |
| 580 | When used from non-guile mode, calling @code{scm_without_guile} is |
| 581 | still allowed: it simply calls @var{func}. In that way, you can leave |
| 582 | guile mode without having to know whether the current thread is in |
| 583 | guile mode or not. |
| 584 | @end deftypefn |
| 585 | |
| 586 | @deftypefn {C Function} int scm_pthread_mutex_lock (pthread_mutex_t *mutex) |
| 587 | Like @code{pthread_mutex_lock}, but leaves guile mode while waiting for |
| 588 | the mutex. |
| 589 | @end deftypefn |
| 590 | |
| 591 | @deftypefn {C Function} int scm_pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex) |
| 592 | @deftypefnx {C Function} int scm_pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime) |
| 593 | Like @code{pthread_cond_wait} and @code{pthread_cond_timedwait}, but |
| 594 | leaves guile mode while waiting for the condition variable. |
| 595 | @end deftypefn |
| 596 | |
| 597 | @deftypefn {C Function} int scm_std_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout) |
| 598 | Like @code{select} but leaves guile mode while waiting. Also, the |
| 599 | delivery of a system async causes this function to be interrupted with |
| 600 | error code @code{EINTR}. |
| 601 | @end deftypefn |
| 602 | |
| 603 | @deftypefn {C Function} {unsigned int} scm_std_sleep ({unsigned int} seconds) |
| 604 | Like @code{sleep}, but leaves guile mode while sleeping. Also, the |
| 605 | delivery of a system async causes this function to be interrupted. |
| 606 | @end deftypefn |
| 607 | |
| 608 | @deftypefn {C Function} {unsigned long} scm_std_usleep ({unsigned long} usecs) |
| 609 | Like @code{usleep}, but leaves guile mode while sleeping. Also, the |
| 610 | delivery of a system async causes this function to be interrupted. |
| 611 | @end deftypefn |
| 612 | |
| 613 | |
| 614 | @node Critical Sections |
| 615 | @subsection Critical Sections |
| 616 | |
| 617 | @deffn {C Macro} SCM_CRITICAL_SECTION_START |
| 618 | @deffnx {C Macro} SCM_CRITICAL_SECTION_END |
| 619 | These two macros can be used to delimit a critical section. |
| 620 | Syntactically, they are both statements and need to be followed |
| 621 | immediately by a semicolon. |
| 622 | |
| 623 | Executing @code{SCM_CRITICAL_SECTION_START} will lock a recursive |
| 624 | mutex and block the executing of system asyncs. Executing |
| 625 | @code{SCM_CRITICAL_SECTION_END} will unblock the execution of system |
| 626 | asyncs and unlock the mutex. Thus, the code that executes between |
| 627 | these two macros can only be executed in one thread at any one time |
| 628 | and no system asyncs will run. However, because the mutex is a |
| 629 | recursive one, the code might still be reentered by the same thread. |
| 630 | You must either allow for this or avoid it, both by careful coding. |
| 631 | |
| 632 | On the other hand, critical sections delimited with these macros can |
| 633 | be nested since the mutex is recursive. |
| 634 | |
| 635 | You must make sure that for each @code{SCM_CRITICAL_SECTION_START}, |
| 636 | the corresponding @code{SCM_CRITICAL_SECTION_END} is always executed. |
| 637 | This means that no non-local exit (such as a signalled error) might |
| 638 | happen, for example. |
| 639 | @end deffn |
| 640 | |
| 641 | @deftypefn {C Function} void scm_dynwind_critical_section (SCM mutex) |
| 642 | Call @code{scm_dynwind_lock_mutex} on @var{mutex} and call |
| 643 | @code{scm_dynwind_block_asyncs}. When @var{mutex} is false, a recursive |
| 644 | mutex provided by Guile is used instead. |
| 645 | |
| 646 | The effect of a call to @code{scm_dynwind_critical_section} is that |
| 647 | the current dynwind context (@pxref{Dynamic Wind}) turns into a |
| 648 | critical section. Because of the locked mutex, no second thread can |
| 649 | enter it concurrently and because of the blocked asyncs, no system |
| 650 | async can reenter it from the current thread. |
| 651 | |
| 652 | When the current thread reenters the critical section anyway, the kind |
| 653 | of @var{mutex} determines what happens: When @var{mutex} is recursive, |
| 654 | the reentry is allowed. When it is a normal mutex, an error is |
| 655 | signalled. |
| 656 | @end deftypefn |
| 657 | |
| 658 | |
| 659 | @node Fluids and Dynamic States |
| 660 | @subsection Fluids and Dynamic States |
| 661 | |
| 662 | @cindex fluids |
| 663 | |
| 664 | A @emph{fluid} is an object that can store one value per @emph{dynamic |
| 665 | state}. Each thread has a current dynamic state, and when accessing a |
| 666 | fluid, this current dynamic state is used to provide the actual value. |
| 667 | In this way, fluids can be used for thread local storage, but they are |
| 668 | in fact more flexible: dynamic states are objects of their own and can |
| 669 | be made current for more than one thread at the same time, or only be |
| 670 | made current temporarily, for example. |
| 671 | |
| 672 | Fluids can also be used to simulate the desirable effects of |
| 673 | dynamically scoped variables. Dynamically scoped variables are useful |
| 674 | when you want to set a variable to a value during some dynamic extent |
| 675 | in the execution of your program and have them revert to their |
| 676 | original value when the control flow is outside of this dynamic |
| 677 | extent. See the description of @code{with-fluids} below for details. |
| 678 | |
| 679 | New fluids are created with @code{make-fluid} and @code{fluid?} is |
| 680 | used for testing whether an object is actually a fluid. The values |
| 681 | stored in a fluid can be accessed with @code{fluid-ref} and |
| 682 | @code{fluid-set!}. |
| 683 | |
| 684 | @deffn {Scheme Procedure} make-fluid [dflt] |
| 685 | @deffnx {C Function} scm_make_fluid () |
| 686 | @deffnx {C Function} scm_make_fluid_with_default (dflt) |
| 687 | Return a newly created fluid, whose initial value is @var{dflt}, or |
| 688 | @code{#f} if @var{dflt} is not given. |
| 689 | Fluids are objects that can hold one |
| 690 | value per dynamic state. That is, modifications to this value are |
| 691 | only visible to code that executes with the same dynamic state as |
| 692 | the modifying code. When a new dynamic state is constructed, it |
| 693 | inherits the values from its parent. Because each thread normally executes |
| 694 | with its own dynamic state, you can use fluids for thread local storage. |
| 695 | @end deffn |
| 696 | |
| 697 | @deffn {Scheme Procedure} make-unbound-fluid |
| 698 | @deffnx {C Function} scm_make_unbound_fluid () |
| 699 | Return a new fluid that is initially unbound (instead of being |
| 700 | implicitly bound to some definite value). |
| 701 | @end deffn |
| 702 | |
| 703 | @deffn {Scheme Procedure} fluid? obj |
| 704 | @deffnx {C Function} scm_fluid_p (obj) |
| 705 | Return @code{#t} if @var{obj} is a fluid; otherwise, return |
| 706 | @code{#f}. |
| 707 | @end deffn |
| 708 | |
| 709 | @deffn {Scheme Procedure} fluid-ref fluid |
| 710 | @deffnx {C Function} scm_fluid_ref (fluid) |
| 711 | Return the value associated with @var{fluid} in the current |
| 712 | dynamic root. If @var{fluid} has not been set, then return |
| 713 | its default value. Calling @code{fluid-ref} on an unbound fluid produces |
| 714 | a runtime error. |
| 715 | @end deffn |
| 716 | |
| 717 | @deffn {Scheme Procedure} fluid-set! fluid value |
| 718 | @deffnx {C Function} scm_fluid_set_x (fluid, value) |
| 719 | Set the value associated with @var{fluid} in the current dynamic root. |
| 720 | @end deffn |
| 721 | |
| 722 | @deffn {Scheme Procedure} fluid-unset! fluid |
| 723 | @deffnx {C Function} scm_fluid_unset_x (fluid) |
| 724 | Disassociate the given fluid from any value, making it unbound. |
| 725 | @end deffn |
| 726 | |
| 727 | @deffn {Scheme Procedure} fluid-bound? fluid |
| 728 | @deffnx {C Function} scm_fluid_bound_p (fluid) |
| 729 | Returns @code{#t} if the given fluid is bound to a value, otherwise |
| 730 | @code{#f}. |
| 731 | @end deffn |
| 732 | |
| 733 | @code{with-fluids*} temporarily changes the values of one or more fluids, |
| 734 | so that the given procedure and each procedure called by it access the |
| 735 | given values. After the procedure returns, the old values are restored. |
| 736 | |
| 737 | @deffn {Scheme Procedure} with-fluid* fluid value thunk |
| 738 | @deffnx {C Function} scm_with_fluid (fluid, value, thunk) |
| 739 | Set @var{fluid} to @var{value} temporarily, and call @var{thunk}. |
| 740 | @var{thunk} must be a procedure with no argument. |
| 741 | @end deffn |
| 742 | |
| 743 | @deffn {Scheme Procedure} with-fluids* fluids values thunk |
| 744 | @deffnx {C Function} scm_with_fluids (fluids, values, thunk) |
| 745 | Set @var{fluids} to @var{values} temporary, and call @var{thunk}. |
| 746 | @var{fluids} must be a list of fluids and @var{values} must be the |
| 747 | same number of their values to be applied. Each substitution is done |
| 748 | in the order given. @var{thunk} must be a procedure with no argument. |
| 749 | It is called inside a @code{dynamic-wind} and the fluids are |
| 750 | set/restored when control enter or leaves the established dynamic |
| 751 | extent. |
| 752 | @end deffn |
| 753 | |
| 754 | @deffn {Scheme Macro} with-fluids ((fluid value) @dots{}) body1 body2 @dots{} |
| 755 | Execute body @var{body1} @var{body2} @dots{} while each @var{fluid} is |
| 756 | set to the corresponding @var{value}. Both @var{fluid} and @var{value} |
| 757 | are evaluated and @var{fluid} must yield a fluid. The body is executed |
| 758 | inside a @code{dynamic-wind} and the fluids are set/restored when |
| 759 | control enter or leaves the established dynamic extent. |
| 760 | @end deffn |
| 761 | |
| 762 | @deftypefn {C Function} SCM scm_c_with_fluids (SCM fluids, SCM vals, SCM (*cproc)(void *), void *data) |
| 763 | @deftypefnx {C Function} SCM scm_c_with_fluid (SCM fluid, SCM val, SCM (*cproc)(void *), void *data) |
| 764 | The function @code{scm_c_with_fluids} is like @code{scm_with_fluids} |
| 765 | except that it takes a C function to call instead of a Scheme thunk. |
| 766 | |
| 767 | The function @code{scm_c_with_fluid} is similar but only allows one |
| 768 | fluid to be set instead of a list. |
| 769 | @end deftypefn |
| 770 | |
| 771 | @deftypefn {C Function} void scm_dynwind_fluid (SCM fluid, SCM val) |
| 772 | This function must be used inside a pair of calls to |
| 773 | @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic |
| 774 | Wind}). During the dynwind context, the fluid @var{fluid} is set to |
| 775 | @var{val}. |
| 776 | |
| 777 | More precisely, the value of the fluid is swapped with a `backup' |
| 778 | value whenever the dynwind context is entered or left. The backup |
| 779 | value is initialized with the @var{val} argument. |
| 780 | @end deftypefn |
| 781 | |
| 782 | @deffn {Scheme Procedure} make-dynamic-state [parent] |
| 783 | @deffnx {C Function} scm_make_dynamic_state (parent) |
| 784 | Return a copy of the dynamic state object @var{parent} |
| 785 | or of the current dynamic state when @var{parent} is omitted. |
| 786 | @end deffn |
| 787 | |
| 788 | @deffn {Scheme Procedure} dynamic-state? obj |
| 789 | @deffnx {C Function} scm_dynamic_state_p (obj) |
| 790 | Return @code{#t} if @var{obj} is a dynamic state object; |
| 791 | return @code{#f} otherwise. |
| 792 | @end deffn |
| 793 | |
| 794 | @deftypefn {C Procedure} int scm_is_dynamic_state (SCM obj) |
| 795 | Return non-zero if @var{obj} is a dynamic state object; |
| 796 | return zero otherwise. |
| 797 | @end deftypefn |
| 798 | |
| 799 | @deffn {Scheme Procedure} current-dynamic-state |
| 800 | @deffnx {C Function} scm_current_dynamic_state () |
| 801 | Return the current dynamic state object. |
| 802 | @end deffn |
| 803 | |
| 804 | @deffn {Scheme Procedure} set-current-dynamic-state state |
| 805 | @deffnx {C Function} scm_set_current_dynamic_state (state) |
| 806 | Set the current dynamic state object to @var{state} |
| 807 | and return the previous current dynamic state object. |
| 808 | @end deffn |
| 809 | |
| 810 | @deffn {Scheme Procedure} with-dynamic-state state proc |
| 811 | @deffnx {C Function} scm_with_dynamic_state (state, proc) |
| 812 | Call @var{proc} while @var{state} is the current dynamic |
| 813 | state object. |
| 814 | @end deffn |
| 815 | |
| 816 | @deftypefn {C Procedure} void scm_dynwind_current_dynamic_state (SCM state) |
| 817 | Set the current dynamic state to @var{state} for the current dynwind |
| 818 | context. |
| 819 | @end deftypefn |
| 820 | |
| 821 | @deftypefn {C Procedure} {void *} scm_c_with_dynamic_state (SCM state, void *(*func)(void *), void *data) |
| 822 | Like @code{scm_with_dynamic_state}, but call @var{func} with |
| 823 | @var{data}. |
| 824 | @end deftypefn |
| 825 | |
| 826 | @node Parameters |
| 827 | @subsection Parameters |
| 828 | |
| 829 | @cindex SRFI-39 |
| 830 | @cindex parameter object |
| 831 | @tindex Parameter |
| 832 | |
| 833 | A parameter object is a procedure. Calling it with no arguments returns |
| 834 | its value. Calling it with one argument sets the value. |
| 835 | |
| 836 | @example |
| 837 | (define my-param (make-parameter 123)) |
| 838 | (my-param) @result{} 123 |
| 839 | (my-param 456) |
| 840 | (my-param) @result{} 456 |
| 841 | @end example |
| 842 | |
| 843 | The @code{parameterize} special form establishes new locations for |
| 844 | parameters, those new locations having effect within the dynamic scope |
| 845 | of the @code{parameterize} body. Leaving restores the previous |
| 846 | locations. Re-entering (through a saved continuation) will again use |
| 847 | the new locations. |
| 848 | |
| 849 | @example |
| 850 | (parameterize ((my-param 789)) |
| 851 | (my-param)) @result{} 789 |
| 852 | (my-param) @result{} 456 |
| 853 | @end example |
| 854 | |
| 855 | Parameters are like dynamically bound variables in other Lisp dialects. |
| 856 | They allow an application to establish parameter settings (as the name |
| 857 | suggests) just for the execution of a particular bit of code, restoring |
| 858 | when done. Examples of such parameters might be case-sensitivity for a |
| 859 | search, or a prompt for user input. |
| 860 | |
| 861 | Global variables are not as good as parameter objects for this sort of |
| 862 | thing. Changes to them are visible to all threads, but in Guile |
| 863 | parameter object locations are per-thread, thereby truly limiting the |
| 864 | effect of @code{parameterize} to just its dynamic execution. |
| 865 | |
| 866 | Passing arguments to functions is thread-safe, but that soon becomes |
| 867 | tedious when there's more than a few or when they need to pass down |
| 868 | through several layers of calls before reaching the point they should |
| 869 | affect. And introducing a new setting to existing code is often easier |
| 870 | with a parameter object than adding arguments. |
| 871 | |
| 872 | @deffn {Scheme Procedure} make-parameter init [converter] |
| 873 | Return a new parameter object, with initial value @var{init}. |
| 874 | |
| 875 | If a @var{converter} is given, then a call @code{(@var{converter} |
| 876 | val)} is made for each value set, its return is the value stored. |
| 877 | Such a call is made for the @var{init} initial value too. |
| 878 | |
| 879 | A @var{converter} allows values to be validated, or put into a |
| 880 | canonical form. For example, |
| 881 | |
| 882 | @example |
| 883 | (define my-param (make-parameter 123 |
| 884 | (lambda (val) |
| 885 | (if (not (number? val)) |
| 886 | (error "must be a number")) |
| 887 | (inexact->exact val)))) |
| 888 | (my-param 0.75) |
| 889 | (my-param) @result{} 3/4 |
| 890 | @end example |
| 891 | @end deffn |
| 892 | |
| 893 | @deffn {library syntax} parameterize ((param value) @dots{}) body1 body2 @dots{} |
| 894 | Establish a new dynamic scope with the given @var{param}s bound to new |
| 895 | locations and set to the given @var{value}s. @var{body1} @var{body2} |
| 896 | @dots{} is evaluated in that environment. The value returned is that of |
| 897 | last body form. |
| 898 | |
| 899 | Each @var{param} is an expression which is evaluated to get the |
| 900 | parameter object. Often this will just be the name of a variable |
| 901 | holding the object, but it can be anything that evaluates to a |
| 902 | parameter. |
| 903 | |
| 904 | The @var{param} expressions and @var{value} expressions are all |
| 905 | evaluated before establishing the new dynamic bindings, and they're |
| 906 | evaluated in an unspecified order. |
| 907 | |
| 908 | For example, |
| 909 | |
| 910 | @example |
| 911 | (define prompt (make-parameter "Type something: ")) |
| 912 | (define (get-input) |
| 913 | (display (prompt)) |
| 914 | ...) |
| 915 | |
| 916 | (parameterize ((prompt "Type a number: ")) |
| 917 | (get-input) |
| 918 | ...) |
| 919 | @end example |
| 920 | @end deffn |
| 921 | |
| 922 | Parameter objects are implemented using fluids (@pxref{Fluids and |
| 923 | Dynamic States}), so each dynamic state has its own parameter |
| 924 | locations. That includes the separate locations when outside any |
| 925 | @code{parameterize} form. When a parameter is created it gets a |
| 926 | separate initial location in each dynamic state, all initialized to the |
| 927 | given @var{init} value. |
| 928 | |
| 929 | New code should probably just use parameters instead of fluids, because |
| 930 | the interface is better. But for migrating old code or otherwise |
| 931 | providing interoperability, Guile provides the @code{fluid->parameter} |
| 932 | procedure: |
| 933 | |
| 934 | @deffn {Scheme Procedure} fluid->parameter fluid [conv] |
| 935 | Make a parameter that wraps a fluid. |
| 936 | |
| 937 | The value of the parameter will be the same as the value of the fluid. |
| 938 | If the parameter is rebound in some dynamic extent, perhaps via |
| 939 | @code{parameterize}, the new value will be run through the optional |
| 940 | @var{conv} procedure, as with any parameter. Note that unlike |
| 941 | @code{make-parameter}, @var{conv} is not applied to the initial value. |
| 942 | @end deffn |
| 943 | |
| 944 | As alluded to above, because each thread usually has a separate dynamic |
| 945 | state, each thread has its own locations behind parameter objects, and |
| 946 | changes in one thread are not visible to any other. When a new dynamic |
| 947 | state or thread is created, the values of parameters in the originating |
| 948 | context are copied, into new locations. |
| 949 | |
| 950 | @cindex SRFI-39 |
| 951 | Guile's parameters conform to SRFI-39 (@pxref{SRFI-39}). |
| 952 | |
| 953 | |
| 954 | @node Futures |
| 955 | @subsection Futures |
| 956 | @cindex futures |
| 957 | @cindex fine-grain parallelism |
| 958 | @cindex parallelism |
| 959 | |
| 960 | The @code{(ice-9 futures)} module provides @dfn{futures}, a construct |
| 961 | for fine-grain parallelism. A future is a wrapper around an expression |
| 962 | whose computation may occur in parallel with the code of the calling |
| 963 | thread, and possibly in parallel with other futures. Like promises, |
| 964 | futures are essentially proxies that can be queried to obtain the value |
| 965 | of the enclosed expression: |
| 966 | |
| 967 | @lisp |
| 968 | (touch (future (+ 2 3))) |
| 969 | @result{} 5 |
| 970 | @end lisp |
| 971 | |
| 972 | However, unlike promises, the expression associated with a future may be |
| 973 | evaluated on another CPU core, should one be available. This supports |
| 974 | @dfn{fine-grain parallelism}, because even relatively small computations |
| 975 | can be embedded in futures. Consider this sequential code: |
| 976 | |
| 977 | @lisp |
| 978 | (define (find-prime lst1 lst2) |
| 979 | (or (find prime? lst1) |
| 980 | (find prime? lst2))) |
| 981 | @end lisp |
| 982 | |
| 983 | The two arms of @code{or} are potentially computation-intensive. They |
| 984 | are independent of one another, yet, they are evaluated sequentially |
| 985 | when the first one returns @code{#f}. Using futures, one could rewrite |
| 986 | it like this: |
| 987 | |
| 988 | @lisp |
| 989 | (define (find-prime lst1 lst2) |
| 990 | (let ((f (future (find prime? lst2)))) |
| 991 | (or (find prime? lst1) |
| 992 | (touch f)))) |
| 993 | @end lisp |
| 994 | |
| 995 | This preserves the semantics of @code{find-prime}. On a multi-core |
| 996 | machine, though, the computation of @code{(find prime? lst2)} may be |
| 997 | done in parallel with that of the other @code{find} call, which can |
| 998 | reduce the execution time of @code{find-prime}. |
| 999 | |
| 1000 | Futures may be nested: a future can itself spawn and then @code{touch} |
| 1001 | other futures, leading to a directed acyclic graph of futures. Using |
| 1002 | this facility, a parallel @code{map} procedure can be defined along |
| 1003 | these lines: |
| 1004 | |
| 1005 | @lisp |
| 1006 | (use-modules (ice-9 futures) (ice-9 match)) |
| 1007 | |
| 1008 | (define (par-map proc lst) |
| 1009 | (match lst |
| 1010 | (() |
| 1011 | '()) |
| 1012 | ((head tail ...) |
| 1013 | (let ((tail (future (par-map proc tail))) |
| 1014 | (head (proc head))) |
| 1015 | (cons head (touch tail)))))) |
| 1016 | @end lisp |
| 1017 | |
| 1018 | Note that futures are intended for the evaluation of purely functional |
| 1019 | expressions. Expressions that have side-effects or rely on I/O may |
| 1020 | require additional care, such as explicit synchronization |
| 1021 | (@pxref{Mutexes and Condition Variables}). |
| 1022 | |
| 1023 | Guile's futures are implemented on top of POSIX threads |
| 1024 | (@pxref{Threads}). Internally, a fixed-size pool of threads is used to |
| 1025 | evaluate futures, such that offloading the evaluation of an expression |
| 1026 | to another thread doesn't incur thread creation costs. By default, the |
| 1027 | pool contains one thread per available CPU core, minus one, to account |
| 1028 | for the main thread. The number of available CPU cores is determined |
| 1029 | using @code{current-processor-count} (@pxref{Processes}). |
| 1030 | |
| 1031 | When a thread touches a future that has not completed yet, it processes |
| 1032 | any pending future while waiting for it to complete, or just waits if |
| 1033 | there are no pending futures. When @code{touch} is called from within a |
| 1034 | future, the execution of the calling future is suspended, allowing its |
| 1035 | host thread to process other futures, and resumed when the touched |
| 1036 | future has completed. This suspend/resume is achieved by capturing the |
| 1037 | calling future's continuation, and later reinstating it (@pxref{Prompts, |
| 1038 | delimited continuations}). |
| 1039 | |
| 1040 | Note that @code{par-map} above is not tail-recursive. This could lead |
| 1041 | to stack overflows when @var{lst} is large compared to |
| 1042 | @code{(current-processor-count)}. To address that, @code{touch} uses |
| 1043 | the suspend mechanism described above to limit the number of nested |
| 1044 | futures executing on the same stack. Thus, the above code should never |
| 1045 | run into stack overflows. |
| 1046 | |
| 1047 | @deffn {Scheme Syntax} future exp |
| 1048 | Return a future for expression @var{exp}. This is equivalent to: |
| 1049 | |
| 1050 | @lisp |
| 1051 | (make-future (lambda () exp)) |
| 1052 | @end lisp |
| 1053 | @end deffn |
| 1054 | |
| 1055 | @deffn {Scheme Procedure} make-future thunk |
| 1056 | Return a future for @var{thunk}, a zero-argument procedure. |
| 1057 | |
| 1058 | This procedure returns immediately. Execution of @var{thunk} may begin |
| 1059 | in parallel with the calling thread's computations, if idle CPU cores |
| 1060 | are available, or it may start when @code{touch} is invoked on the |
| 1061 | returned future. |
| 1062 | |
| 1063 | If the execution of @var{thunk} throws an exception, that exception will |
| 1064 | be re-thrown when @code{touch} is invoked on the returned future. |
| 1065 | @end deffn |
| 1066 | |
| 1067 | @deffn {Scheme Procedure} future? obj |
| 1068 | Return @code{#t} if @var{obj} is a future. |
| 1069 | @end deffn |
| 1070 | |
| 1071 | @deffn {Scheme Procedure} touch f |
| 1072 | Return the result of the expression embedded in future @var{f}. |
| 1073 | |
| 1074 | If the result was already computed in parallel, @code{touch} returns |
| 1075 | instantaneously. Otherwise, it waits for the computation to complete, |
| 1076 | if it already started, or initiates it. In the former case, the calling |
| 1077 | thread may process other futures in the meantime. |
| 1078 | @end deffn |
| 1079 | |
| 1080 | |
| 1081 | @node Parallel Forms |
| 1082 | @subsection Parallel forms |
| 1083 | @cindex parallel forms |
| 1084 | |
| 1085 | The functions described in this section are available from |
| 1086 | |
| 1087 | @example |
| 1088 | (use-modules (ice-9 threads)) |
| 1089 | @end example |
| 1090 | |
| 1091 | They provide high-level parallel constructs. The following functions |
| 1092 | are implemented in terms of futures (@pxref{Futures}). Thus they are |
| 1093 | relatively cheap as they re-use existing threads, and portable, since |
| 1094 | they automatically use one thread per available CPU core. |
| 1095 | |
| 1096 | @deffn syntax parallel expr @dots{} |
| 1097 | Evaluate each @var{expr} expression in parallel, each in its own thread. |
| 1098 | Return the results of @var{n} expressions as a set of @var{n} multiple |
| 1099 | values (@pxref{Multiple Values}). |
| 1100 | @end deffn |
| 1101 | |
| 1102 | @deffn syntax letpar ((var expr) @dots{}) body1 body2 @dots{} |
| 1103 | Evaluate each @var{expr} in parallel, each in its own thread, then bind |
| 1104 | the results to the corresponding @var{var} variables, and then evaluate |
| 1105 | @var{body1} @var{body2} @enddots{} |
| 1106 | |
| 1107 | @code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the |
| 1108 | expressions for the bindings are evaluated in parallel. |
| 1109 | @end deffn |
| 1110 | |
| 1111 | @deffn {Scheme Procedure} par-map proc lst1 lst2 @dots{} |
| 1112 | @deffnx {Scheme Procedure} par-for-each proc lst1 lst2 @dots{} |
| 1113 | Call @var{proc} on the elements of the given lists. @code{par-map} |
| 1114 | returns a list comprising the return values from @var{proc}. |
| 1115 | @code{par-for-each} returns an unspecified value, but waits for all |
| 1116 | calls to complete. |
| 1117 | |
| 1118 | The @var{proc} calls are @code{(@var{proc} @var{elem1} @var{elem2} |
| 1119 | @dots{})}, where each @var{elem} is from the corresponding @var{lst} . |
| 1120 | Each @var{lst} must be the same length. The calls are potentially made |
| 1121 | in parallel, depending on the number of CPU cores available. |
| 1122 | |
| 1123 | These functions are like @code{map} and @code{for-each} (@pxref{List |
| 1124 | Mapping}), but make their @var{proc} calls in parallel. |
| 1125 | @end deffn |
| 1126 | |
| 1127 | Unlike those above, the functions described below take a number of |
| 1128 | threads as an argument. This makes them inherently non-portable since |
| 1129 | the specified number of threads may differ from the number of available |
| 1130 | CPU cores as returned by @code{current-processor-count} |
| 1131 | (@pxref{Processes}). In addition, these functions create the specified |
| 1132 | number of threads when they are called and terminate them upon |
| 1133 | completion, which makes them quite expensive. |
| 1134 | |
| 1135 | Therefore, they should be avoided. |
| 1136 | |
| 1137 | @deffn {Scheme Procedure} n-par-map n proc lst1 lst2 @dots{} |
| 1138 | @deffnx {Scheme Procedure} n-par-for-each n proc lst1 lst2 @dots{} |
| 1139 | Call @var{proc} on the elements of the given lists, in the same way as |
| 1140 | @code{par-map} and @code{par-for-each} above, but use no more than |
| 1141 | @var{n} threads at any one time. The order in which calls are |
| 1142 | initiated within that threads limit is unspecified. |
| 1143 | |
| 1144 | These functions are good for controlling resource consumption if |
| 1145 | @var{proc} calls might be costly, or if there are many to be made. On |
| 1146 | a dual-CPU system for instance @math{@var{n}=4} might be enough to |
| 1147 | keep the CPUs utilized, and not consume too much memory. |
| 1148 | @end deffn |
| 1149 | |
| 1150 | @deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 lst2 @dots{} |
| 1151 | Apply @var{pproc} to the elements of the given lists, and apply |
| 1152 | @var{sproc} to each result returned by @var{pproc}. The final return |
| 1153 | value is unspecified, but all calls will have been completed before |
| 1154 | returning. |
| 1155 | |
| 1156 | The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{} |
| 1157 | @var{elemN}))}, where each @var{elem} is from the corresponding |
| 1158 | @var{lst}. Each @var{lst} must have the same number of elements. |
| 1159 | |
| 1160 | The @var{pproc} calls are made in parallel, in separate threads. No more |
| 1161 | than @var{n} threads are used at any one time. The order in which |
| 1162 | @var{pproc} calls are initiated within that limit is unspecified. |
| 1163 | |
| 1164 | The @var{sproc} calls are made serially, in list element order, one at |
| 1165 | a time. @var{pproc} calls on later elements may execute in parallel |
| 1166 | with the @var{sproc} calls. Exactly which thread makes each |
| 1167 | @var{sproc} call is unspecified. |
| 1168 | |
| 1169 | This function is designed for individual calculations that can be done |
| 1170 | in parallel, but with results needing to be handled serially, for |
| 1171 | instance to write them to a file. The @var{n} limit on threads |
| 1172 | controls system resource usage when there are many calculations or |
| 1173 | when they might be costly. |
| 1174 | |
| 1175 | It will be seen that @code{n-for-each-par-map} is like a combination |
| 1176 | of @code{n-par-map} and @code{for-each}, |
| 1177 | |
| 1178 | @example |
| 1179 | (for-each sproc (n-par-map n pproc lst1 ... lstN)) |
| 1180 | @end example |
| 1181 | |
| 1182 | @noindent |
| 1183 | But the actual implementation is more efficient since each @var{sproc} |
| 1184 | call, in turn, can be initiated once the relevant @var{pproc} call has |
| 1185 | completed, it doesn't need to wait for all to finish. |
| 1186 | @end deffn |
| 1187 | |
| 1188 | |
| 1189 | |
| 1190 | @c Local Variables: |
| 1191 | @c TeX-master: "guile.texi" |
| 1192 | @c End: |