feat(api)!: add new thread_local locking design

pedromfedricci authored Apr 9, 2024
1 parent ecea470 commit 563e5f9
Showing 10 changed files with 959 additions and 1,175 deletions.
71 changes: 37 additions & 34 deletions README.md
@@ -95,62 +95,65 @@ fn main() {
}
```

## Barging MCS lock
## Thread local MCS queue nodes

This implementation will have non-waiting threads race for the lock against
the thread at the front of the waiting queue, which means it is an unfair
lock. This implementation is suitable for `no_std` environments, and the
locking APIs are compatible with the [lock_api] crate. See [`barging`] and
[`lock_api`] modules for more information.
Enables [`raw::Mutex`] locking APIs that operate over queue nodes that are
stored in thread-local storage. These locking APIs require a static
reference to a [`LocalMutexNode`] key. Keys must be generated by the
[`thread_local_node!`] macro. Thread local nodes are not `no_std` compatible
and can be enabled through the `thread_local` feature.

```rust
use std::sync::Arc;
use std::thread;

// Requires `barging` feature.
use mcslock::barging::spins::Mutex;
use mcslock::raw::spins::Mutex;

// Requires `thread_local` feature.
mcslock::thread_local_node!(static NODE);

fn main() {
let mutex = Arc::new(Mutex::new(0));
let c_mutex = Arc::clone(&mutex);

thread::spawn(move || {
*c_mutex.lock() = 10;
// Local node handles are provided by reference.
// The critical section must be defined as a closure.
c_mutex.lock_with_local(&NODE, |mut guard| *guard = 10);
})
.join().expect("thread::spawn failed");

assert_eq!(*mutex.try_lock().unwrap(), 10);
// Local node handles are provided by reference.
// The critical section must be defined as a closure.
assert_eq!(mutex.try_lock_with_local(&NODE, |g| *g.unwrap()), 10);
}
```
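The queue discipline these APIs rely on can be sketched as a minimal, self-contained MCS spinlock built from `AtomicPtr` queue nodes. This is an illustrative sketch only, not the crate's implementation; the `Node` and `McsLock` names are invented here:

```rust
use std::ptr;
use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};

// One queue node per acquiring thread; the lock itself is just a tail pointer.
struct Node {
    locked: AtomicBool,
    next: AtomicPtr<Node>,
}

impl Node {
    fn new() -> Self {
        Node { locked: AtomicBool::new(false), next: AtomicPtr::new(ptr::null_mut()) }
    }
}

struct McsLock {
    tail: AtomicPtr<Node>,
}

impl McsLock {
    const fn new() -> Self {
        McsLock { tail: AtomicPtr::new(ptr::null_mut()) }
    }

    fn lock(&self, node: &Node) {
        node.locked.store(true, Ordering::Relaxed);
        node.next.store(ptr::null_mut(), Ordering::Relaxed);
        let this = node as *const Node as *mut Node;
        // Swap ourselves in as the new tail of the waiter queue.
        let prev = self.tail.swap(this, Ordering::AcqRel);
        if !prev.is_null() {
            // Link behind the previous tail, then spin on our own flag only:
            // this per-node spinning is what makes MCS cache-friendly.
            unsafe { (*prev).next.store(this, Ordering::Release) };
            while node.locked.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
        }
    }

    fn unlock(&self, node: &Node) {
        let this = node as *const Node as *mut Node;
        let mut next = node.next.load(Ordering::Acquire);
        if next.is_null() {
            // No visible successor: try to swing the tail back to null.
            if self
                .tail
                .compare_exchange(this, ptr::null_mut(), Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
            // A successor is mid-enqueue; wait for it to link itself in.
            while next.is_null() {
                std::hint::spin_loop();
                next = node.next.load(Ordering::Acquire);
            }
        }
        // Hand the lock to the next waiter in FIFO order.
        unsafe { (*next).locked.store(false, Ordering::Release) };
    }
}
```

Because each waiter spins on its own node's flag rather than on a shared word, contention stays local to one cache line per thread, which is the property the crate's `MutexNode`/`LocalMutexNode` types exist to manage.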

## Thread local MCS lock
## Barging MCS lock

This implementation also operates in FIFO order. The locking APIs provided
by this module do not require user-side node allocation; critical
sections must be provided as closures, and at most one lock can be held at
any time within a thread. It is not `no_std` compatible and can be enabled
through the `thread_local` feature. See the [`thread_local`] module for more
information.
This implementation will have non-waiting threads race for the lock against
the thread at the front of the waiting queue, which means it is an unfair
lock. This implementation is suitable for `no_std` environments, and the
locking APIs are compatible with the [lock_api] crate. See [`barging`] and
[`lock_api`] modules for more information.

```rust
use std::sync::Arc;
use std::thread;

// Requires `thread_local` feature.
use mcslock::thread_local::spins::Mutex;
// Requires `barging` feature.
use mcslock::barging::spins::Mutex;

fn main() {
let mutex = Arc::new(Mutex::new(0));
let c_mutex = Arc::clone(&mutex);

thread::spawn(move || {
// The critical section must be defined as a closure.
c_mutex.lock_with(|mut guard| *guard = 10);
*c_mutex.lock() = 10;
})
.join().expect("thread::spawn failed");

// The critical section must be defined as a closure.
assert_eq!(mutex.try_lock_with(|guard| *guard.unwrap()), 10);
assert_eq!(*mutex.try_lock().unwrap(), 10);
}
```

@@ -168,7 +171,14 @@ of busy-waiting during lock acquisitions and releases, this will call
the OS scheduler. This may cause a context switch, so you may not want to
enable this feature if your intention is to actually do optimistic spinning.
The default implementation calls [`core::hint::spin_loop`], which simply
busy-waits. This feature is not `no_std` compatible.
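For context, optional features like the ones in this section are enabled from the dependent crate's manifest. A sketch; the version number is a placeholder, not a statement of the current release:

```toml
[dependencies]
mcslock = { version = "*", features = ["yield", "thread_local", "barging", "lock_api"] }
```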

### thread_local

The `thread_local` feature enables [`raw::Mutex`] locking APIs that operate
over queue nodes that are stored in thread-local storage. These locking APIs
require a static reference to a [`LocalMutexNode`] key. Keys must be generated
by the [`thread_local_node!`] macro. This feature is not `no_std` compatible.
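Conceptually, each generated key names one queue-node slot per thread. A self-contained sketch of that idea using std's `thread_local!` macro; the names, layout, and re-entrancy check here are assumptions for illustration, not the `thread_local_node!` macro's actual expansion:

```rust
use std::cell::Cell;

// Hypothetical stand-in for a queue node; the real type lives in `mcslock::raw`.
#[derive(Default)]
struct FakeNode {
    in_use: Cell<bool>,
}

// Roughly the shape of item a node-key macro could emit: one node instance
// per thread, reached through a static key.
thread_local! {
    static NODE: FakeNode = FakeNode::default();
}

fn with_node<R>(f: impl FnOnce(&FakeNode) -> R) -> R {
    NODE.with(|node| {
        // Reject re-entrant use of this thread's node, which is why at most
        // one such lock may be held at a time within a thread.
        assert!(!node.in_use.get(), "node already in use on this thread");
        node.in_use.set(true);
        let result = f(node);
        node.in_use.set(false);
        result
    })
}
```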

### barging

@@ -178,15 +188,6 @@ and it is suitable for `no_std` environments. This implementation is not
fair (does not guarantee FIFO), but can improve throughput when the lock
is heavily contended.
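By contrast with the FIFO queue discipline, a barging acquisition path can be sketched as a single test-and-set flag that every arriving thread races on, so a newcomer may seize a just-released lock ahead of longer-waiting threads. Illustrative only; `BargingSpinLock` is an invented name, not the crate's type:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// One shared flag: whoever wins the compare-exchange gets the lock,
// regardless of how long anyone else has been waiting (no FIFO guarantee).
struct BargingSpinLock {
    locked: AtomicBool,
}

impl BargingSpinLock {
    const fn new() -> Self {
        BargingSpinLock { locked: AtomicBool::new(false) }
    }

    fn lock(&self) {
        loop {
            // Spin on a plain load first to avoid hammering the cache line.
            while self.locked.load(Ordering::Relaxed) {
                std::hint::spin_loop();
            }
            if self
                .locked
                .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
        }
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```

The trade-off matches the text above: barging sacrifices fairness but can raise throughput under contention, since the releasing cache line's new owner is simply whichever thread gets there first.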

### thread_local

The `thread_local` feature provides locking APIs that do not require user-side
node allocation, but critical sections must be provided as closures. This
implementation handles the queue's nodes transparently, by storing them in
the thread local storage of the waiting threads. This locking implementation
will panic if more than one guard is alive within a single thread. Not
`no_std` compatible.

### lock_api

This feature implements the [`RawMutex`] trait from the [lock_api] crate for
@@ -245,11 +246,13 @@ each of your dependencies, including this one.
[cargo-crev]: https://github.com/crev-dev/cargo-crev

[`MutexNode`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.MutexNode.html
[`LocalMutexNode`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.LocalMutexNode.html
[`raw::Mutex`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.Mutex.html
[`barging::Mutex`]: https://docs.rs/mcslock/latest/mcslock/barging/struct.Mutex.html
[`raw`]: https://docs.rs/mcslock/latest/mcslock/raw/index.html
[`barging`]: https://docs.rs/mcslock/latest/mcslock/barging/index.html
[`lock_api`]: https://docs.rs/mcslock/latest/mcslock/lock_api/index.html
[`thread_local`]: https://docs.rs/mcslock/latest/mcslock/thread_local/index.html
[`thread_local_node!`]: https://docs.rs/mcslock/latest/mcslock/macro.thread_local_node.html
[`std::sync::Mutex`]: https://doc.rust-lang.org/std/sync/struct.Mutex.html
[`parking_lot::Mutex`]: https://docs.rs/parking_lot/latest/parking_lot/type.Mutex.html
[`RawMutex`]: https://docs.rs/lock_api/latest/lock_api/trait.RawMutex.html
14 changes: 11 additions & 3 deletions examples/thread_local.rs
@@ -2,8 +2,16 @@ use std::sync::mpsc::channel;
use std::sync::Arc;
use std::thread;

use mcslock::raw::spins::Mutex;

// Requires that the `thread_local` feature is enabled.
use mcslock::thread_local::spins::Mutex;
mcslock::thread_local_node! {
// * Allows multiple static definitions; they must be separated by semicolons.
// * Visibility is optional (private by default).
// * Requires the `static` keyword and an UPPER_SNAKE_CASE name.
pub static NODE;
static UNUSED_NODE;
}

const N: usize = 10;

@@ -27,7 +35,7 @@ fn main() {
// threads to ever fail while holding the lock.
//
// Data is exclusively accessed by the guard argument.
data.lock_with(|mut data| {
data.lock_with_local(&NODE, |mut data| {
*data += 1;
if *data == N {
tx.send(()).unwrap();
@@ -38,6 +46,6 @@ fn main() {
}
let _message = rx.recv().unwrap();

let count = data.lock_with(|guard| *guard);
let count = data.lock_with_local(&NODE, |guard| *guard);
assert_eq!(count, N);
}
16 changes: 7 additions & 9 deletions src/barging/mutex.rs
@@ -128,9 +128,7 @@ impl<T: ?Sized, R: Relax> Mutex<T, R> {
/// This function will block the local thread until it is available to acquire
/// the mutex. Upon returning, the thread is the only thread with the lock
/// held. An RAII guard is returned to allow scoped unlock of the lock. When
/// the guard goes out of scope, the mutex will be unlocked. To acquire an MCS
/// lock, a mutably borrowed queue node is also required, a record that keeps
/// a link for forming the queue; see [`MutexNode`].
/// the guard goes out of scope, the mutex will be unlocked.
///
/// This function will block if the lock is unavailable.
///
@@ -203,7 +201,8 @@ impl<T: ?Sized, R: Relax> Mutex<T, R> {
/// assert_eq!(mutex.lock_with(|guard| *guard), 10);
/// ```
///
/// Borrows of the guard or its data cannot escape the given closure.
/// Compile fail: borrows of the guard or its data cannot escape the given
/// closure:
///
/// ```compile_fail,E0515
/// use mcslock::barging::spins::Mutex;
@@ -225,9 +224,7 @@ impl<T: ?Sized, R> Mutex<T, R> {
///
/// If the lock could not be acquired at this time, then [`None`] is returned.
/// Otherwise, an RAII guard is returned. The lock will be unlocked when the
/// guard is dropped. To acquire an MCS lock, a mutably borrowed queue node is
/// also required, a record that keeps a link for forming the queue; see
/// [`MutexNode`].
/// guard is dropped.
///
/// This function does not block.
///
@@ -293,7 +290,7 @@ impl<T: ?Sized, R> Mutex<T, R> {
/// if let Some(mut guard) = guard {
/// *guard = 10;
/// } else {
/// println!("try_lock failed");
/// println!("try_lock_with failed");
/// }
/// });
/// })
@@ -302,7 +299,8 @@ impl<T: ?Sized, R> Mutex<T, R> {
/// assert_eq!(mutex.lock_with(|guard| *guard), 10);
/// ```
///
/// Borrows of the guard or its data cannot escape the given closure.
/// Compile fail: borrows of the guard or its data cannot escape the given
/// closure:
///
/// ```compile_fail,E0515
/// use mcslock::barging::spins::Mutex;
9 changes: 7 additions & 2 deletions src/cfg.rs
@@ -70,11 +70,16 @@ pub mod hint {
pub use loom::hint::spin_loop;
}

#[cfg(any(feature = "yield", test))]
pub mod thread {
#[cfg(not(all(loom, test)))]
#[cfg(all(any(feature = "yield", test), not(all(loom, test))))]
pub use std::thread::yield_now;

#[cfg(all(loom, test))]
pub use loom::thread::yield_now;

#[cfg(all(feature = "thread_local", not(all(loom, test))))]
pub use std::thread::LocalKey;

#[cfg(all(feature = "thread_local", loom, test))]
pub use loom::thread::LocalKey;
}
80 changes: 44 additions & 36 deletions src/lib.rs
@@ -60,63 +60,71 @@
//! assert_eq!(*mutex.try_lock(&mut node).unwrap(), 10);
//! ```
//!
//! ## Barging MCS lock
//! ## Thread local MCS queue nodes
//!
//! This implementation will have non-waiting threads race for the lock against
//! the thread at the front of the waiting queue, which means it is an unfair
//! lock. This implementation can be enabled through the `barging` feature; it
//! is suitable for `no_std` environments, and the locking APIs are compatible
//! with the `lock_api` crate. See [`mod@barging`] and [`mod@lock_api`] modules
//! for more information.
//! Enables [`raw::Mutex`] locking APIs that operate over queue nodes that are
//! stored in thread-local storage. These locking APIs require a static
//! reference to a [`LocalMutexNode`] key. Keys must be generated by the
//! [`thread_local_node!`] macro. Thread local nodes are not `no_std` compatible
//! and can be enabled through the `thread_local` feature.
//!
//! ```
//! # #[cfg(feature = "thread_local")]
//! # {
//! use std::sync::Arc;
//! use std::thread;
//!
//! use mcslock::barging::spins::Mutex;
//! use mcslock::raw::spins::Mutex;
//!
//! // Requires `thread_local` feature.
//! mcslock::thread_local_node!(static NODE);
//!
//! let mutex = Arc::new(Mutex::new(0));
//! let c_mutex = Arc::clone(&mutex);
//!
//! thread::spawn(move || {
//! *c_mutex.lock() = 10;
//! // Local node handles are provided by reference.
//! // The critical section must be defined as a closure.
//! c_mutex.lock_with_local(&NODE, |mut guard| *guard = 10);
//! })
//! .join().expect("thread::spawn failed");
//!
//! assert_eq!(*mutex.try_lock().unwrap(), 10);
//! // Local node handles are provided by reference.
//! // The critical section must be defined as a closure.
//! assert_eq!(mutex.try_lock_with_local(&NODE, |g| *g.unwrap()), 10);
//! # }
//! # #[cfg(not(feature = "thread_local"))]
//! # fn main() {}
//! ```
//!
//! ## Thread local MCS lock
//! ## Barging MCS lock
//!
//! This implementation also operates in FIFO order. The locking APIs provided
//! by this module do not require user-side node allocation; critical sections
//! must be provided as closures, and at most one lock can be held at any time
//! within a thread. It is not `no_std` compatible and can be enabled through
//! the `thread_local` feature. See the [`mod@thread_local`] module for more
//! information.
//! This implementation will have non-waiting threads race for the lock against
//! the thread at the front of the waiting queue, which means it is an unfair
//! lock. This implementation can be enabled through the `barging` feature; it
//! is suitable for `no_std` environments, and the locking APIs are compatible
//! with the `lock_api` crate. See [`mod@barging`] and [`mod@lock_api`] modules
//! for more information.
//!
//! ```
//! # #[cfg(feature = "thread_local")]
//! # #[cfg(feature = "barging")]
//! # {
//! use std::sync::Arc;
//! use std::thread;
//!
//! // Requires `thread_local` feature.
//! use mcslock::thread_local::spins::Mutex;
//! use mcslock::barging::spins::Mutex;
//!
//! let mutex = Arc::new(Mutex::new(0));
//! let c_mutex = Arc::clone(&mutex);
//!
//! thread::spawn(move || {
//! // The critical section must be defined as a closure.
//! c_mutex.lock_with(|mut guard| *guard = 10);
//! *c_mutex.lock() = 10;
//! })
//! .join().expect("thread::spawn failed");
//!
//! // The critical section must be defined as a closure.
//! assert_eq!(mutex.try_lock_with(|guard| *guard.unwrap()), 10);
//! assert_eq!(*mutex.try_lock().unwrap(), 10);
//! # }
//! # #[cfg(not(feature = "thread_local"))]
//! # #[cfg(not(feature = "barging"))]
//! # fn main() {}
//! ```
//!
@@ -134,7 +142,14 @@
//! the OS scheduler. This may cause a context switch, so you may not want to
//! enable this feature if your intention is to actually do optimistic spinning.
//! The default implementation calls [`core::hint::spin_loop`], which simply
//! busy-waits. This feature is not `no_std` compatible.
//!
//! ### thread_local
//!
//! The `thread_local` feature enables [`raw::Mutex`] locking APIs that operate
//! over queue nodes that are stored in thread-local storage. These locking APIs
//! require a static reference to a [`LocalMutexNode`] key. Keys must be generated
//! by the [`thread_local_node!`] macro. This feature is not `no_std` compatible.
//!
//! ### barging
//!
@@ -144,15 +159,6 @@
//! fair (does not guarantee FIFO), but can improve throughput when the lock
//! is heavily contended.
//!
//! ### thread_local
//!
//! The `thread_local` feature provides locking APIs that do not require user-side
//! node allocation, but critical sections must be provided as closures. This
//! implementation handles the queue's nodes transparently, by storing them in
//! the thread local storage of the waiting threads. This locking implementation
//! will panic if more than one guard is alive within a single thread. Not
//! `no_std` compatible.
//!
//! ### lock_api
//!
//! This feature implements the [`RawMutex`] trait from the [lock_api]
@@ -169,6 +175,8 @@
//! - libmcs: <https://github.com/topecongiro/libmcs>
//!
//! [`MutexNode`]: raw::MutexNode
//! [`LocalMutexNode`]: raw::LocalMutexNode
//! [`thread_local_node!`]: crate::thread_local_node
//! [`std::sync::Mutex`]: https://doc.rust-lang.org/std/sync/struct.Mutex.html
//! [`parking_lot::Mutex`]: https://docs.rs/parking_lot/latest/parking_lot/type.Mutex.html
//! [`RawMutex`]: https://docs.rs/lock_api/latest/lock_api/trait.RawMutex.html
@@ -206,7 +214,7 @@ pub mod lock_api;

#[cfg(feature = "thread_local")]
#[cfg_attr(docsrs, doc(cfg(feature = "thread_local")))]
pub mod thread_local;
mod thread_local;

pub(crate) mod cfg;

3 changes: 3 additions & 0 deletions src/raw/mod.rs
@@ -31,6 +31,9 @@
mod mutex;
pub use mutex::{Mutex, MutexGuard, MutexNode};

#[cfg(feature = "thread_local")]
pub use crate::thread_local::LocalMutexNode;

/// A `raw` MCS lock alias that signals the processor that it is running a
/// busy-wait spin-loop during lock contention.
pub mod spins {