The CLH lock is a List-Based Queuing Lock that avoids network contention by having threads spin on locally accessible memory locations. The main properties of this mechanism are:
- guarantees FIFO ordering of lock acquisitions;
- spins on locally-accessible flag variables only;
- requires a small constant amount of space per lock;
- works equally well (requiring only O(1) network transactions per lock acquisition) on machines with and without coherent caches; and
- avoids the "handshake" runtime overhead between the lock holder and its successor during lock hand-off.
This algorithm was independently introduced by Craig and by Magnussen, Landin, and Hagersten in their respective papers.
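To make the queueing discipline above concrete, here is a minimal sketch of the core CLH protocol written against the standard library's atomics. It is for illustration only and is not this crate's implementation: node reclamation and the safe, transparent ownership hand-off that this crate provides are elided, and the raw node pointers are managed manually.

use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};

// One queue node per waiting thread; the successor spins on `locked`.
struct Node {
    locked: AtomicBool,
}

struct ClhLock {
    tail: AtomicPtr<Node>,
}

impl ClhLock {
    fn new() -> Self {
        // A dummy node that starts unlocked lets the first acquirer proceed immediately.
        let dummy = Box::into_raw(Box::new(Node { locked: AtomicBool::new(false) }));
        ClhLock { tail: AtomicPtr::new(dummy) }
    }

    // Enqueues `node` and spins on the predecessor's flag; returns the predecessor,
    // which the caller may recycle as its own node for the next acquisition.
    fn lock(&self, node: *mut Node) -> *mut Node {
        unsafe { (*node).locked.store(true, Ordering::Relaxed) };
        // Atomically append our node to the implicit queue and learn our predecessor.
        let pred = self.tail.swap(node, Ordering::AcqRel);
        // Spin on the predecessor's flag; only this thread ever reads it.
        while unsafe { (*pred).locked.load(Ordering::Acquire) } {
            std::hint::spin_loop();
        }
        pred
    }

    fn unlock(&self, node: *mut Node) {
        // Hand the lock off to the successor (if any) by clearing our own flag.
        unsafe { (*node).locked.store(false, Ordering::Release) };
    }
}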
It is worth noting that spinlocks are usually not what you want. The majority of use cases are well covered by OS-based mutexes such as std::sync::Mutex and parking_lot::Mutex. These implementations will notify the system that the waiting thread should be parked, freeing the processor to work on something else.
Spinlocks are only efficient in the few circumstances where the overhead of context switching or process rescheduling is greater than busy-waiting for very short periods. Spinlocks can be useful inside operating-system kernels, on embedded systems, or as a complement to other locking designs.
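For comparison, the parking behaviour described above is what the standard library's mutex already provides, and it is usually the right default; a minimal example:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0));
    let worker = Arc::clone(&data);
    // Under contention, the OS parks the waiting thread instead of burning CPU cycles.
    let handle = thread::spawn(move || *worker.lock().unwrap() += 1);
    handle.join().expect("thread::spawn failed");
    assert_eq!(*data.lock().unwrap(), 1);
}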
Run the following Cargo command in your project directory:
cargo add clhlock
Or add an entry under the [dependencies] section in your Cargo.toml:
# Cargo.toml
[dependencies]
# Available features: `yield`.
clhlock = { version = "0.2", features = ["yield"] }
This project's documentation is hosted at docs.rs, or you can build it locally with the following command:
RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features --open
Queue nodes are allocated on the heap, and their ownership is transparently moved from the lock-holding thread to its successor. Allocating the nodes directly on the stack is not possible, since the CLH lock protocol does not guarantee that a predecessor thread will still be live by the time a successor accesses its associated locking node. Locking operations require taking ownership of node handles that manage the heap allocations. Therefore, this crate requires linking with Rust's core alloc library.
This implementation operates under FIFO ordering. The raw locking APIs require taking ownership of node handles that manage the heap-allocated queue nodes. These node handles are represented by the raw::MutexNode type. This implementation is no_std compatible. See the raw module for more information.
use std::sync::Arc;
use std::thread;
// `spins::Mutex` simply spins during contention.
use clhlock::raw::{spins::Mutex, MutexNode};
fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // A queue node handle must be consumed.
        let node = MutexNode::new();
        *c_mutex.lock_with(node) = 10;
    })
    .join().expect("thread::spawn failed");

    // A queue node handle must be consumed.
    let node = MutexNode::new();
    assert_eq!(*mutex.lock_with(node), 10);
}
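The same API extends to many contending threads. Below is a sketch, using only the types shown above, where each thread creates its own MutexNode handle per acquisition and waiters are served in FIFO order:

use std::sync::Arc;
use std::thread;
// `spins::Mutex` simply spins during contention.
use clhlock::raw::{spins::Mutex, MutexNode};

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Each acquisition consumes its own queue node handle.
                let node = MutexNode::new();
                *counter.lock_with(node) += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().expect("thread::spawn failed");
    }
    // The final acquisition also consumes a fresh node handle.
    let node = MutexNode::new();
    assert_eq!(*counter.lock_with(node), 4);
}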
This crate does not provide any default features. Features that can be enabled are:
The yield feature requires linking to the standard library, so it is not suitable for no_std environments. By enabling the yield feature, instead of busy-waiting during lock acquisitions and releases, this will call std::thread::yield_now, which cooperatively gives up a timeslice to the OS scheduler. This may cause a context switch, so you may not want to enable this feature if your intention is to actually do optimistic spinning. The default implementation calls core::hint::spin_loop, which simply busy-waits. This feature is not no_std compatible.
This crate is guaranteed to compile on a Minimum Supported Rust Version (MSRV) of 1.65.0 and above. This version will not be changed without a minor version bump.
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
It is recommended to always use cargo-crev to verify the trustworthiness of each of your dependencies, including this one.