X-Git-Url: http://pilppa.org/gitweb/gitweb.cgi?a=blobdiff_plain;f=Documentation%2Fatomic_ops.txt;h=4ef245010457fc9301f21f6e78f2b1fc16c1d014;hb=1b52467243c7167b3a267ddbcbb14d550f28eb4a;hp=d46306fea230cda4a21ee19da45337ab39898918;hpb=fb9fc395174138983a49f2da982ed14caabbe741;p=linux-2.6-omap-h63xx.git

diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index d46306fea23..4ef24501045 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -186,7 +186,8 @@ If the atomic value v is not equal to u, this function adds a to v, and
 returns non zero. If v is equal to u then it returns zero. This is done as
 an atomic operation.
 
-atomic_add_unless requires explicit memory barriers around the operation.
+atomic_add_unless requires explicit memory barriers around the operation
+unless it fails (returns 0).
 
 atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)
 
@@ -418,6 +419,20 @@ brothers:
 	 */
 	 smp_mb__after_clear_bit();
 
+There are two special bitops with lock barrier semantics (acquire/release,
+same as spinlocks). These operate in the same way as their non-_lock/unlock
+postfixed variants, except that they are to provide acquire/release semantics,
+respectively. This means they can be used for bit_spin_trylock and
+bit_spin_unlock type operations without specifying any more barriers.
+
+	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
+	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
+	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
+
+The __clear_bit_unlock version is non-atomic, however it still implements
+unlock barrier semantics. This can be useful if the lock itself is protecting
+the other bits in the word.
+
 Finally, there are non-atomic versions of the bitmask operations
 provided.  They are used in contexts where some other higher-level SMP
 locking scheme is being used to protect the bitmask, and thus less
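
A usage sketch of the atomic_add_unless() rule adjusted in the first hunk:
the operation only implies full memory barriers around the update when it
succeeds; a failing call (return value 0) implies no barrier. The struct
name obj, the field refcnt and the helper obj_try_get() below are
hypothetical, chosen only for illustration; atomic_t and atomic_add_unless()
come from <asm/atomic.h> on trees of this vintage.

	#include <asm/atomic.h>

	struct obj {
		atomic_t refcnt;	/* 0 means the object is being torn down */
	};

	/* Take a reference only if the object is still live.  This is the
	 * same pattern as atomic_inc_not_zero(), mentioned in the hunk. */
	static int obj_try_get(struct obj *p)
	{
		/*
		 * Returns non zero (and acts as a full barrier around the
		 * update) only when refcnt was not 0; when it fails and
		 * returns 0, no barrier is implied.
		 */
		return atomic_add_unless(&p->refcnt, 1, 0);
	}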
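Likewise, a sketch of the bit_spin_trylock/bit_spin_unlock style usage that
the second hunk describes for the lock bitops. The word flags, the bit
number LOCK_BIT and the two helpers are hypothetical; it is assumed that
<linux/bitops.h> pulls in the arch declarations of test_and_set_bit_lock()
and clear_bit_unlock(), and that cpu_relax() comes from <asm/processor.h>.

	#include <linux/bitops.h>
	#include <asm/processor.h>

	#define LOCK_BIT	0

	static unsigned long flags;	/* bit 0 is the lock, the rest is data */

	static void example_lock(void)
	{
		/*
		 * test_and_set_bit_lock() returns the old bit value and has
		 * acquire semantics, so the critical section cannot be
		 * reordered before the lock bit is seen set.
		 */
		while (test_and_set_bit_lock(LOCK_BIT, &flags))
			cpu_relax();
	}

	static void example_unlock(void)
	{
		/*
		 * clear_bit_unlock() has release semantics, so everything
		 * done while holding the lock is visible before the bit is
		 * cleared.  No extra smp_mb__* calls are needed.
		 */
		clear_bit_unlock(LOCK_BIT, &flags);
	}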