Intel(R) Threading Building Blocks Doxygen Documentation version 4.2.3
tbb::queuing_rw_mutex::scoped_lock Class Reference

The scoped locking pattern. More...

#include <queuing_rw_mutex.h>


Public Member Functions

 scoped_lock ()
 Construct lock that has not acquired a mutex.
 
 scoped_lock (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex.
 
 ~scoped_lock ()
 Release lock (if lock is held).
 
void acquire (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex.
 
bool try_acquire (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex if free (i.e., non-blocking).
 
void release ()
 Release lock.
 
bool upgrade_to_writer ()
 Upgrade reader to become a writer.
 
bool downgrade_to_reader ()
 Downgrade writer to become a reader.
 

Private Types

typedef unsigned char state_t
 

Private Member Functions

void initialize ()
 Initialize fields to mean "no lock held".
 
void acquire_internal_lock ()
 Acquire the internal lock.
 
bool try_acquire_internal_lock ()
 Try to acquire the internal lock.
 
void release_internal_lock ()
 Release the internal lock.
 
void wait_for_release_of_internal_lock ()
 Wait for internal lock to be released.
 
void unblock_or_wait_on_internal_lock (uintptr_t)
 A helper function.
 
- Private Member Functions inherited from tbb::internal::no_copy
 no_copy (const no_copy &)=delete
 
 no_copy ()=default
 

Private Attributes

queuing_rw_mutex * my_mutex
 The pointer to the mutex owned, or NULL if not holding a mutex.
 
scoped_lock *__TBB_atomic my_prev
 The pointer to the previous and next competitors for a mutex.
 
scoped_lock *__TBB_atomic my_next
 
atomic< state_t > my_state
 State of the request: reader, writer, active reader, other service states.
 
unsigned char __TBB_atomic my_going
 The local spin-wait variable.
 
unsigned char my_internal_lock
 A tiny internal lock.
 

Detailed Description

The scoped locking pattern.

It helps avoid the common problem of forgetting to release a lock. It also conveniently provides the "node" used for queuing locks.

Definition at line 53 of file queuing_rw_mutex.h.
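
For illustration, a minimal sketch of the pattern in use (the mutex, counter, and functions below are hypothetical application code, not part of this class):

#include "tbb/queuing_rw_mutex.h"

tbb::queuing_rw_mutex rw_mutex; // protects shared_value
int shared_value = 0;           // hypothetical shared state

void reader() {
    // Acquire for read; the destructor releases the lock on scope exit,
    // so early returns and exceptions cannot leak the lock.
    tbb::queuing_rw_mutex::scoped_lock lock(rw_mutex, /*write=*/false);
    int observed = shared_value;
    (void)observed;
}

void writer() {
    tbb::queuing_rw_mutex::scoped_lock lock(rw_mutex); // write defaults to true
    ++shared_value;
}

Because each scoped_lock doubles as the queue node, the lock object is normally a local variable on the stack of the thread that takes the lock.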

Member Typedef Documentation

◆ state_t

typedef unsigned char tbb::queuing_rw_mutex::scoped_lock::state_t
private

Definition at line 105 of file queuing_rw_mutex.h.

Constructor & Destructor Documentation

◆ scoped_lock() [1/2]

tbb::queuing_rw_mutex::scoped_lock::scoped_lock ( )
inline

Construct lock that has not acquired a mutex.

Equivalent to zero-initialization of *this.

Definition at line 69 of file queuing_rw_mutex.h.

69 { initialize(); }

References initialize().


◆ scoped_lock() [2/2]

tbb::queuing_rw_mutex::scoped_lock::scoped_lock ( queuing_rw_mutex & m,
bool  write = true 
)
inline

Acquire lock on given mutex.

Definition at line 72 of file queuing_rw_mutex.h.

72 {
73 initialize();
74 acquire(m,write);
75 }

References tbb::acquire, and initialize().
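
A short sketch contrasting the two constructors (demo and m are illustrative names; as in the earlier sketch, tbb/queuing_rw_mutex.h is assumed to be included):

void demo(tbb::queuing_rw_mutex& m) {
    {   // Acquire at construction; write defaults to true.
        tbb::queuing_rw_mutex::scoped_lock held(m);
    }   // released by the destructor

    {   // Or construct without a mutex and acquire explicitly later.
        tbb::queuing_rw_mutex::scoped_lock deferred;
        deferred.acquire(m, /*write=*/false);
        deferred.release();
    }
}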


◆ ~scoped_lock()

tbb::queuing_rw_mutex::scoped_lock::~scoped_lock ( )
inline

Release lock (if lock is held).

Definition at line 78 of file queuing_rw_mutex.h.

78 {
79 if( my_mutex ) release();
80 }

References my_mutex, and release().


Member Function Documentation

◆ acquire()

void tbb::queuing_rw_mutex::scoped_lock::acquire ( queuing_rw_mutex & m,
bool  write = true 
)

Acquire lock on given mutex.

A method to acquire queuing_rw_mutex lock.

Definition at line 140 of file queuing_rw_mutex.cpp.

141{
142 __TBB_ASSERT( !my_mutex, "scoped_lock is already holding a mutex");
143
144 // Must set all fields before the fetch_and_store, because once the
145 // fetch_and_store executes, *this becomes accessible to other threads.
146 my_mutex = &m;
147 __TBB_store_relaxed(my_prev, (scoped_lock*)0);
148 __TBB_store_relaxed(my_next, (scoped_lock*)0);
149 __TBB_store_relaxed(my_going, 0);
150 my_state = state_t(write ? STATE_WRITER : STATE_READER);
151 my_internal_lock = RELEASED;
152
153 queuing_rw_mutex::scoped_lock* pred = m.q_tail.fetch_and_store<tbb::release>(this);
154
155 if( write ) { // Acquiring for write
156
157 if( pred ) {
158 ITT_NOTIFY(sync_prepare, my_mutex);
159 pred = tricky_pointer(pred) & ~FLAG;
160 __TBB_ASSERT( !( uintptr_t(pred) & FLAG ), "use of corrupted pointer!" );
161#if TBB_USE_ASSERT
162 __TBB_control_consistency_helper(); // on "m.q_tail"
163 __TBB_ASSERT( !__TBB_load_relaxed(pred->my_next), "the predecessor has another successor!");
164#endif
165 __TBB_store_with_release(pred->my_next,this);
166 spin_wait_until_eq(my_going, 1U);
167 }
168
169 } else { // Acquiring for read
170#if DO_ITT_NOTIFY
171 bool sync_prepare_done = false;
172#endif
173 if( pred ) {
174 unsigned short pred_state;
175 __TBB_ASSERT( !__TBB_load_relaxed(my_prev), "the predecessor is already set" );
176 if( uintptr_t(pred) & FLAG ) {
177 /* this is only possible if pred is an upgrading reader and it signals us to wait */
178 pred_state = STATE_UPGRADE_WAITING;
179 pred = tricky_pointer(pred) & ~FLAG;
180 } else {
181 // Load pred->my_state now, because once pred->my_next becomes
182 // non-NULL, we must assume that *pred might be destroyed.
183 pred_state = pred->my_state.compare_and_swap<tbb::acquire>(STATE_READER_UNBLOCKNEXT, STATE_READER);
184 }
185 __TBB_store_relaxed(my_prev, pred);
186 __TBB_ASSERT( !( uintptr_t(pred) & FLAG ), "use of corrupted pointer!" );
187#if TBB_USE_ASSERT
188 __TBB_control_consistency_helper(); // on "m.q_tail"
189 __TBB_ASSERT( !__TBB_load_relaxed(pred->my_next), "the predecessor has another successor!");
190#endif
191 __TBB_store_with_release(pred->my_next,this);
192 if( pred_state != STATE_ACTIVEREADER ) {
193#if DO_ITT_NOTIFY
194 sync_prepare_done = true;
195 ITT_NOTIFY(sync_prepare, my_mutex);
196#endif
197 spin_wait_until_eq(my_going, 1U);
198 }
199 }
200
201 // The protected state must have been acquired here before it can be further released to any other reader(s):
202 unsigned short old_state = my_state.compare_and_swap<tbb::acquire>(STATE_ACTIVEREADER, STATE_READER);
203 if( old_state!=STATE_READER ) {
204#if DO_ITT_NOTIFY
205 if( !sync_prepare_done )
206 ITT_NOTIFY(sync_prepare, my_mutex);
207#endif
208 // Failed to become active reader -> need to unblock the next waiting reader first
209 __TBB_ASSERT( my_state==STATE_READER_UNBLOCKNEXT, "unexpected state" );
210 spin_wait_while_eq(my_next, (scoped_lock*)NULL);
211 /* my_state should be changed before unblocking the next otherwise it might finish
212 and another thread can get our old state and left blocked */
213 my_state = STATE_ACTIVEREADER;
214 __TBB_store_with_release(my_next->my_going,1);
215 }
216 }
217
218 ITT_NOTIFY(sync_acquired, my_mutex);
219
220 // Force acquire so that user's critical section receives correct values
221 // from processor that was previously in the user's critical section.
222 __TBB_load_with_acquire(my_going);
223}

References __TBB_ASSERT, __TBB_control_consistency_helper, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::acquire, tbb::FLAG, ITT_NOTIFY, my_next, my_state, tbb::queuing_rw_mutex::q_tail, tbb::release, tbb::RELEASED, tbb::internal::spin_wait_until_eq(), tbb::internal::spin_wait_while_eq(), tbb::STATE_ACTIVEREADER, tbb::STATE_READER, tbb::STATE_READER_UNBLOCKNEXT, tbb::STATE_UPGRADE_WAITING, and tbb::STATE_WRITER.
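
As a usage sketch (illustrative names only; tbb/queuing_rw_mutex.h is assumed to be included), one scoped_lock object can be reused for successive acquisitions; readers share the mutex, while writers are granted exclusive access in FIFO order:

void read_then_write(tbb::queuing_rw_mutex& m) {
    tbb::queuing_rw_mutex::scoped_lock lock;

    lock.acquire(m, /*write=*/false); // shared (reader) access
    // ... read the protected state ...
    lock.release();

    lock.acquire(m, /*write=*/true);  // exclusive (writer) access
    // ... modify the protected state ...
    lock.release();
}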


◆ acquire_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::acquire_internal_lock ( )
inline private

Acquire the internal lock.

Definition at line 55 of file queuing_rw_mutex.cpp.

56{
57 // Usually, we would use the test-test-and-set idiom here, with exponential backoff.
58 // But so far, experiments indicate there is no value in doing so here.
59 while( !try_acquire_internal_lock() ) {
60 __TBB_Pause(1);
61 }
62}

References __TBB_Pause.

◆ downgrade_to_reader()

bool tbb::queuing_rw_mutex::scoped_lock::downgrade_to_reader ( )

Downgrade writer to become a reader.

Definition at line 360 of file queuing_rw_mutex.cpp.

361{
362 if ( my_state == STATE_ACTIVEREADER ) return true; // Already a reader
363
364 ITT_NOTIFY(sync_releasing, my_mutex);
365 my_state = STATE_READER;
366 if( ! __TBB_load_relaxed(my_next) ) {
367 // the following load of q_tail must not be reordered with setting STATE_READER above
368 if( this==my_mutex->q_tail.load<full_fence>() ) {
369 unsigned short old_state = my_state.compare_and_swap<tbb::release>(STATE_ACTIVEREADER, STATE_READER);
370 if( old_state==STATE_READER )
371 return true; // Downgrade completed
372 }
373 /* wait for the next to register */
374 spin_wait_while_eq( my_next, (void*)NULL );
375 }
376 scoped_lock *const n = __TBB_load_with_acquire(my_next);
377 __TBB_ASSERT( n, "still no successor at this point!" );
378 if( n->my_state & STATE_COMBINED_WAITINGREADER )
379 __TBB_store_with_release(n->my_going,1);
380 else if( n->my_state==STATE_UPGRADE_WAITING )
381 // the next waiting for upgrade means this writer was upgraded before.
382 n->my_state = STATE_UPGRADE_LOSER;
383 my_state = STATE_ACTIVEREADER;
384 return true;
385}

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_with_release(), tbb::full_fence, ITT_NOTIFY, my_going, my_state, tbb::release, tbb::internal::spin_wait_while_eq(), tbb::STATE_ACTIVEREADER, tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_READER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_WAITING, and sync_releasing.
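
A sketch of a typical downgrade (illustrative names only): publish an update under the write lock, then keep reading it while letting other readers proceed:

void publish_then_read(tbb::queuing_rw_mutex& m) {
    tbb::queuing_rw_mutex::scoped_lock lock(m, /*write=*/true);
    // ... perform the update that required exclusive access ...
    lock.downgrade_to_reader(); // waiting readers may now enter
    // ... keep reading the state just written, under the read lock ...
    lock.release();
}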


◆ initialize()

void tbb::queuing_rw_mutex::scoped_lock::initialize ( )
inline private

Initialize fields to mean "no lock held".

Definition at line 55 of file queuing_rw_mutex.h.

55 {
56 my_mutex = NULL;
57 my_internal_lock = 0;
58 my_going = 0;
59#if TBB_USE_ASSERT
60 my_state = 0xFF; // Set to invalid state
61 poison_pointer(my_next);
62 poison_pointer(my_prev);
63#endif /* TBB_USE_ASSERT */
64 }

References my_going, my_internal_lock, my_mutex, my_next, my_prev, my_state, and tbb::internal::poison_pointer().

Referenced by scoped_lock().


◆ release()

void tbb::queuing_rw_mutex::scoped_lock::release ( )

Release lock.

A method to release queuing_rw_mutex lock.

Definition at line 254 of file queuing_rw_mutex.cpp.

255{
256 __TBB_ASSERT(my_mutex!=NULL, "no lock acquired");
257
258 ITT_NOTIFY(sync_releasing, my_mutex);
259
260 if( my_state == STATE_WRITER ) { // Acquired for write
261
262 // The logic below is the same as "writerUnlock", but elides
263 // "return" from the middle of the routine.
264 // In the statement below, acquire semantics of reading my_next is required
265 // so that following operations with fields of my_next are safe.
266 scoped_lock* n = __TBB_load_with_acquire(my_next);
267 if( !n ) {
268 if( this == my_mutex->q_tail.compare_and_swap<tbb::release>(NULL, this) ) {
269 // this was the only item in the queue, and the queue is now empty.
270 goto done;
271 }
272 spin_wait_while_eq( my_next, (scoped_lock*)NULL );
273 n = __TBB_load_with_acquire(my_next);
274 }
275 __TBB_store_relaxed(n->my_going, 2); // protect next queue node from being destroyed too early
276 if( n->my_state==STATE_UPGRADE_WAITING ) {
277 // the next waiting for upgrade means this writer was upgraded before.
278 acquire_internal_lock();
279 queuing_rw_mutex::scoped_lock* tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), NULL);
280 n->my_state = STATE_UPGRADE_LOSER;
281 __TBB_store_with_release(n->my_going,1);
282 unblock_or_wait_on_internal_lock(get_flag(tmp));
283 } else {
284 __TBB_ASSERT( n->my_state & (STATE_COMBINED_WAITINGREADER | STATE_WRITER), "unexpected state" );
285 __TBB_ASSERT( !( uintptr_t(__TBB_load_relaxed(n->my_prev)) & FLAG ), "use of corrupted pointer!" );
286 __TBB_store_relaxed(n->my_prev, (scoped_lock*)0);
287 __TBB_store_with_release(n->my_going,1);
288 }
289
290 } else { // Acquired for read
291
292 queuing_rw_mutex::scoped_lock *tmp = NULL;
293retry:
294 // Addition to the original paper: Mark my_prev as in use
295 queuing_rw_mutex::scoped_lock *pred = tricky_pointer::fetch_and_add<tbb::acquire>(&my_prev, FLAG);
296
297 if( pred ) {
298 if( !(pred->try_acquire_internal_lock()) )
299 {
300 // Failed to acquire the lock on pred. The predecessor either unlinks or upgrades.
301 // In the second case, it could or could not know my "in use" flag - need to check
302 tmp = tricky_pointer::compare_and_swap<tbb::release>(&my_prev, pred, tricky_pointer(pred) | FLAG );
303 if( !(uintptr_t(tmp) & FLAG) ) {
304 // Wait for the predecessor to change my_prev (e.g. during unlink)
305 spin_wait_while_eq( my_prev, tricky_pointer(pred)|FLAG );
306 // Now owner of pred is waiting for _us_ to release its lock
307 pred->release_internal_lock();
308 }
309 // else the "in use" flag is back -> the predecessor didn't get it and will release itself; nothing to do
310
311 tmp = NULL;
312 goto retry;
313 }
314 __TBB_ASSERT(pred && pred->my_internal_lock==ACQUIRED, "predecessor's lock is not acquired");
315 __TBB_store_relaxed(my_prev, pred);
316 acquire_internal_lock();
317
318 __TBB_store_with_release(pred->my_next,static_cast<scoped_lock *>(NULL));
319
320 if( !__TBB_load_relaxed(my_next) && this != my_mutex->q_tail.compare_and_swap<tbb::release>(pred, this) ) {
321 spin_wait_while_eq( my_next, (void*)NULL );
322 }
323 __TBB_ASSERT( !get_flag(__TBB_load_relaxed(my_next)), "use of corrupted pointer" );
324
325 // ensure acquire semantics of reading 'my_next'
326 if( scoped_lock *const l_next = __TBB_load_with_acquire(my_next) ) { // I->next != nil, TODO: rename to n after clearing up and adapting the n in the comment two lines below
327 // Equivalent to I->next->prev = I->prev but protected against (prev[n]&FLAG)!=0
328 tmp = tricky_pointer::fetch_and_store<tbb::release>(&(l_next->my_prev), pred);
329 // I->prev->next = I->next;
330 __TBB_ASSERT( __TBB_load_relaxed(my_prev)==pred, NULL );
331 __TBB_store_with_release(pred->my_next, my_next);
332 }
333 // Safe to release in the order opposite to acquiring which makes the code simpler
334 pred->release_internal_lock();
335
336 } else { // No predecessor when we looked
337 acquire_internal_lock(); // "exclusiveLock(&I->EL)"
338 scoped_lock* n = __TBB_load_with_acquire(my_next);
339 if( !n ) {
340 if( this != my_mutex->q_tail.compare_and_swap<tbb::release>(NULL, this) ) {
341 spin_wait_while_eq( my_next, (scoped_lock*)NULL );
342 n = __TBB_load_relaxed(my_next);
343 } else {
344 goto unlock_self;
345 }
346 }
347 __TBB_store_relaxed(n->my_going, 2); // protect next queue node from being destroyed too early
348 tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), NULL);
349 __TBB_store_with_release(n->my_going,1);
350 }
351unlock_self:
352 unblock_or_wait_on_internal_lock(get_flag(tmp));
353 }
354done:
355 spin_wait_while_eq( my_going, 2U );
356
357 initialize();
358}

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::ACQUIRED, tbb::FLAG, tbb::get_flag(), ITT_NOTIFY, my_going, my_internal_lock, my_next, my_prev, my_state, tbb::release, release_internal_lock(), tbb::internal::spin_wait_while_eq(), tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_WAITING, tbb::STATE_WRITER, sync_releasing, and try_acquire_internal_lock().

Referenced by ~scoped_lock().


◆ release_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::release_internal_lock ( )
inline private

Release the internal lock.

Definition at line 64 of file queuing_rw_mutex.cpp.

References tbb::internal::__TBB_store_with_release(), and tbb::RELEASED.

Referenced by release().


◆ try_acquire()

bool tbb::queuing_rw_mutex::scoped_lock::try_acquire ( queuing_rw_mutex & m,
bool  write = true 
)

Acquire lock on given mutex if free (i.e., non-blocking).

A method to acquire queuing_rw_mutex if it is free.

Definition at line 226 of file queuing_rw_mutex.cpp.

227{
228 __TBB_ASSERT( !my_mutex, "scoped_lock is already holding a mutex");
229
230 if( load<relaxed>(m.q_tail) )
231 return false; // Someone already took the lock
232
233 // Must set all fields before the fetch_and_store, because once the
234 // fetch_and_store executes, *this becomes accessible to other threads.
235 __TBB_store_relaxed(my_prev, (scoped_lock*)0);
236 __TBB_store_relaxed(my_next, (scoped_lock*)0);
237 __TBB_store_relaxed(my_going, 0); // TODO: remove dead assignment?
238 my_state = state_t(write ? STATE_WRITER : STATE_ACTIVEREADER);
239 my_internal_lock = RELEASED;
240
241 // The CAS must have release semantics, because we are
242 // "sending" the fields initialized above to other processors.
243 if( m.q_tail.compare_and_swap<tbb::release>(this, NULL) )
244 return false; // Someone already took the lock
245 // Force acquire so that user's critical section receives correct values
246 // from processor that was previously in the user's critical section.
247 __TBB_load_with_acquire(my_going);
248 my_mutex = &m;
249 ITT_NOTIFY(sync_acquired, my_mutex);
250 return true;
251}

References __TBB_ASSERT, tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), ITT_NOTIFY, tbb::queuing_rw_mutex::q_tail, tbb::release, tbb::RELEASED, tbb::STATE_ACTIVEREADER, and tbb::STATE_WRITER.
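
A non-blocking usage sketch (illustrative names only; tbb/queuing_rw_mutex.h is assumed to be included):

void opportunistic_update(tbb::queuing_rw_mutex& m) {
    tbb::queuing_rw_mutex::scoped_lock lock;
    if( lock.try_acquire(m, /*write=*/true) ) {
        // Got exclusive access without waiting.
        // ... update the protected state ...
        lock.release();
    } else {
        // The mutex was busy; do other work instead of blocking.
    }
}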


◆ try_acquire_internal_lock()

bool tbb::queuing_rw_mutex::scoped_lock::try_acquire_internal_lock ( )
inline private

Try to acquire the internal lock.

Returns true if lock was successfully acquired.

Definition at line 50 of file queuing_rw_mutex.cpp.

51{
52 return as_atomic(my_internal_lock).compare_and_swap<tbb::acquire>(ACQUIRED, RELEASED) == RELEASED;
53}

References tbb::acquire, tbb::ACQUIRED, tbb::internal::as_atomic(), my_internal_lock, and tbb::RELEASED.

Referenced by release().


◆ unblock_or_wait_on_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::unblock_or_wait_on_internal_lock ( uintptr_t  flag)
inline private

A helper function.

Definition at line 74 of file queuing_rw_mutex.cpp.

74 {
75 if( flag )
76 wait_for_release_of_internal_lock();
77 else
78 release_internal_lock();
79}

◆ upgrade_to_writer()

bool tbb::queuing_rw_mutex::scoped_lock::upgrade_to_writer ( )

Upgrade reader to become a writer.

Returns whether the upgrade happened without releasing and re-acquiring the lock.

Definition at line 387 of file queuing_rw_mutex.cpp.

388{
389 if ( my_state == STATE_WRITER ) return true; // Already a writer
390
391 queuing_rw_mutex::scoped_lock * tmp;
392 queuing_rw_mutex::scoped_lock * me = this;
393
394 ITT_NOTIFY(sync_releasing, my_mutex);
395 my_state = STATE_UPGRADE_REQUESTED;
396requested:
397 __TBB_ASSERT( !(uintptr_t(__TBB_load_relaxed(my_next)) & FLAG), "use of corrupted pointer!" );
398 acquire_internal_lock();
399 if( this != my_mutex->q_tail.compare_and_swap<tbb::release>(tricky_pointer(me)|FLAG, this) ) {
400 spin_wait_while_eq( my_next, (void*)NULL );
401 queuing_rw_mutex::scoped_lock * n;
402 n = tricky_pointer::fetch_and_add<tbb::acquire>(&my_next, FLAG);
403 unsigned short n_state = n->my_state;
404 /* the next reader can be blocked by our state. the best thing to do is to unblock it */
405 if( n_state & STATE_COMBINED_WAITINGREADER )
406 __TBB_store_with_release(n->my_going,1);
407 tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), this);
408 unblock_or_wait_on_internal_lock(get_flag(tmp));
409 if( n_state & (STATE_COMBINED_READER | STATE_UPGRADE_REQUESTED) ) {
410 // save n|FLAG for simplicity of following comparisons
411 tmp = tricky_pointer(n)|FLAG;
412 for( atomic_backoff b; __TBB_load_relaxed(my_next)==tmp; b.pause() ) {
413 if( my_state & STATE_COMBINED_UPGRADING ) {
414 if( __TBB_load_with_acquire(my_next)==tmp )
415 __TBB_store_relaxed(my_next, n);
416 goto waiting;
417 }
418 }
420 goto requested;
421 } else {
422 __TBB_ASSERT( n_state & (STATE_WRITER | STATE_UPGRADE_WAITING), "unexpected state");
423 __TBB_ASSERT( (tricky_pointer(n)|FLAG)==__TBB_load_relaxed(my_next), NULL );
424 __TBB_store_relaxed(my_next, n);
425 }
426 } else {
427 /* We are in the tail; whoever comes next is blocked by q_tail&FLAG */
428 release_internal_lock();
429 } // if( this != my_mutex->q_tail... )
430 my_state.compare_and_swap<tbb::acquire>(STATE_UPGRADE_WAITING, STATE_UPGRADE_REQUESTED);
431
432waiting:
433 __TBB_ASSERT( !( intptr_t(__TBB_load_relaxed(my_next)) & FLAG ), "use of corrupted pointer!" );
434 __TBB_ASSERT( my_state & STATE_COMBINED_UPGRADING, "wrong state at upgrade waiting_retry" );
435 __TBB_ASSERT( me==this, NULL );
436 ITT_NOTIFY(sync_prepare, my_mutex);
437 /* if no one was blocked by the "corrupted" q_tail, turn it back */
438 my_mutex->q_tail.compare_and_swap<tbb::release>( this, tricky_pointer(me)|FLAG );
439 queuing_rw_mutex::scoped_lock * pred;
440 pred = tricky_pointer::fetch_and_add<tbb::acquire>(&my_prev, FLAG);
441 if( pred ) {
442 bool success = pred->try_acquire_internal_lock();
443 pred->my_state.compare_and_swap<tbb::release>(STATE_UPGRADE_WAITING, STATE_UPGRADE_REQUESTED);
444 if( !success ) {
445 tmp = tricky_pointer::compare_and_swap<tbb::release>(&my_prev, pred, tricky_pointer(pred)|FLAG );
446 if( uintptr_t(tmp) & FLAG ) {
449 } else {
451 pred->release_internal_lock();
452 }
453 } else {
455 pred->release_internal_lock();
458 }
459 if( pred )
460 goto waiting;
461 } else {
462 // restore the corrupted my_prev field for possible further use (e.g. if downgrade back to reader)
463 __TBB_store_relaxed(my_prev, (scoped_lock*)0);
464 }
465 __TBB_ASSERT( !pred && !__TBB_load_relaxed(my_prev), NULL );
466
467 // additional lifetime issue prevention checks
468 // wait for the successor to finish working with my fields
469 wait_for_release_of_internal_lock();
470 // now wait for the predecessor to finish working with my fields
471 spin_wait_while_eq( my_going, 2U );
472
473 // Acquire critical section indirectly from previous owner or directly from predecessor (TODO: not clear).
474 __TBB_control_consistency_helper(); // on either "my_mutex->q_tail" or "my_going" (TODO: not clear)
475
476 bool result = ( my_state != STATE_UPGRADE_LOSER );
477 my_state = STATE_WRITER;
478 __TBB_store_relaxed(my_going, 1);
479
480 ITT_NOTIFY(sync_acquired, my_mutex);
481 return result;
482}

References __TBB_ASSERT, __TBB_control_consistency_helper, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::acquire, tbb::FLAG, tbb::get_flag(), ITT_NOTIFY, my_going, my_prev, my_state, tbb::release, tbb::internal::spin_wait_while_eq(), tbb::STATE_COMBINED_READER, tbb::STATE_COMBINED_UPGRADING, tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_REQUESTED, tbb::STATE_UPGRADE_WAITING, tbb::STATE_WRITER, and sync_releasing.
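
A sketch of the read-then-upgrade idiom (illustrative names only; tbb/queuing_rw_mutex.h is assumed to be included); the return value matters because the upgrade may have to release and re-acquire the lock:

void read_maybe_write(tbb::queuing_rw_mutex& m) {
    tbb::queuing_rw_mutex::scoped_lock lock(m, /*write=*/false);
    // ... examine the protected state under the read lock ...
    if( !lock.upgrade_to_writer() ) {
        // The lock was released while upgrading: another writer may have
        // intervened, so re-validate what was read before acting on it.
    }
    // ... exclusive access is now held; safe to modify ...
    lock.release();
}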


◆ wait_for_release_of_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::wait_for_release_of_internal_lock ( )
inline private

Wait for internal lock to be released.

Definition at line 69 of file queuing_rw_mutex.cpp.

References tbb::RELEASED, and tbb::internal::spin_wait_until_eq().


Member Data Documentation

◆ my_going

unsigned char __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_going
private

The local spin-wait variable.

Corresponds to "spin" in the pseudocode but inverted for the sake of zero-initialization

Definition at line 112 of file queuing_rw_mutex.h.

Referenced by downgrade_to_reader(), initialize(), release(), and upgrade_to_writer().

◆ my_internal_lock

unsigned char tbb::queuing_rw_mutex::scoped_lock::my_internal_lock
private

A tiny internal lock.

Definition at line 115 of file queuing_rw_mutex.h.

Referenced by initialize(), release(), and try_acquire_internal_lock().

◆ my_mutex

queuing_rw_mutex* tbb::queuing_rw_mutex::scoped_lock::my_mutex
private

The pointer to the mutex owned, or NULL if not holding a mutex.

Definition at line 100 of file queuing_rw_mutex.h.

Referenced by initialize(), and ~scoped_lock().

◆ my_next

scoped_lock* __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_next
private

Definition at line 103 of file queuing_rw_mutex.h.

Referenced by acquire(), initialize(), and release().

◆ my_prev

scoped_lock* __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_prev
private

The pointer to the previous and next competitors for a mutex.

Definition at line 103 of file queuing_rw_mutex.h.

Referenced by initialize(), release(), and upgrade_to_writer().

◆ my_state

atomic<state_t> tbb::queuing_rw_mutex::scoped_lock::my_state
private

State of the request: reader, writer, active reader, other service states.

Definition at line 108 of file queuing_rw_mutex.h.

Referenced by acquire(), downgrade_to_reader(), initialize(), release(), and upgrade_to_writer().


The documentation for this class was generated from the following files:

queuing_rw_mutex.h
queuing_rw_mutex.cpp

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.