Commit f32a2f5 ("Update README") by fernandoenzo, Mar 8, 2024 (parent 2bf2c3d). Showing 1 changed file, README.md, with 111 additions and 64 deletions.
[![GitHub last commit](https://img.shields.io/github/last-commit/fernandoenzo/parallel-utils)](https://github.com/fernandoenzo/parallel-utils)
[![Build Status](https://img.shields.io/travis/com/fernandoenzo/parallel-utils?label=tests)](https://travis-ci.com/fernandoenzo/parallel-utils)

This library implements a [**Monitor**](https://en.wikipedia.org/wiki/Monitor_(synchronization)) class, as defined by [**Per
Brinch Hansen**](https://en.wikipedia.org/wiki/Per_Brinch_Hansen) and [**C.A.R. Hoare**](https://en.wikipedia.org/wiki/Tony_Hoare),
for **synchronization and concurrent management of threads and processes in Python**. It also provides **additional
functions to facilitate the creation and collection of results for both threads and processes**.

## Table of contents

* [@synchronized](#synchronized)
* [Second example](#second-example)
* [@synchronized_priority](#synchronized_priority)
* [StaticMonitor](#staticmonitor)
* [Launching threads and processes](#launching-threads-and-processes)
* [Contributing](#contributing)
* [License](#license)
Use the package manager [**pip**](https://pip.pypa.io/en/stable/) to install **parallel-utils**.

```bash
pip install parallel-utils
```
## Usage

There are two implementations of the `Monitor` class: one is located in the `thread` module and the other in the
`process` module of `parallel-utils`.

Although it's safe to always use the `Monitor` class located in the `process` module, even if you're only working with
threads, you will achieve slightly better performance when using the one located in the `thread` module. Therefore, it is
recommended to use each one for its intended purpose.

For ease of reading, every time we mention a _thread_, we will also be including a _process_ unless stated otherwise.

Also, from now until the end of this section, when we mention a _function_, we will be referring not only to whole
functions but also to pieces of code contained within a function.

A monitor essentially does two things:
1. It controls the maximum number of threads that can simultaneously access a function.
2. It organizes a set of functions so that they follow a strict order in their execution, regardless of the thread from which they are called.
#### First example
> 1. It allows controlling the maximum number of threads that can simultaneously access a function.
To achieve this first goal, the `Monitor` class includes the following pair of functions:

```python
def lock_code(self, uid: str | int, max_threads: int = 1)

def unlock_code(self, uid: str | int)
```

The first one, `lock_code`, must be called at the beginning of the piece of code for which we want to control the maximum
number of threads that can access it simultaneously.

The `unlock_code` function marks the end of the scope opened by the `lock_code` function.

To do this, both functions must share the same unique identifier (`uid`), which can be either a string or an integer.
Let's see an example:

```python
import concurrent.futures
from time import sleep

from parallel_utils.thread import Monitor, create_thread

m = Monitor()

def print_and_sleep():
    print('Hello')
    m.lock_code(uid='example', max_threads=1)
    sleep(2)
    m.unlock_code('example')
    print('Goodbye')

th1 = create_thread(print_and_sleep)
th2 = create_thread(print_and_sleep)
th3 = create_thread(print_and_sleep)

concurrent.futures.wait([th1, th2, th3])
```

The example shown above takes 6 seconds to complete its execution, since we have a `lock_code` that only allows one thread
at a time to execute the `sleep` function, and we are launching three threads.

If we set the `lock_code` to allow up to three threads at the same time, then the code only needs 2 seconds to complete
its execution, since all three threads can make the `sleep` blocking call at the same time.

We'll see more about the `create_thread` and `create_process` functions later.

The last line, [`concurrent.futures.wait`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.wait),
is a blocking call that waits until all three threads finish running.
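Since `wait` is a standard `concurrent.futures` helper, its semantics are easy to try in isolation with a plain `ThreadPoolExecutor` (a stdlib sketch, independent of `parallel-utils`; the `work` function is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, wait
from time import sleep

def work(n):
    sleep(n / 10)
    return n

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(work, n) for n in (1, 2, 3)]
    done, not_done = wait(futures)  # blocks until every future has finished

print(sorted(f.result() for f in done))  # [1, 2, 3]
```

`wait` also accepts a `timeout` and a `return_when` policy (for example `FIRST_COMPLETED`) when you don't want to block until everything is done.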

For safety and convenience, a `synchronized` context manager has been implemented that eliminates the need to explicitly
call `lock_code` and `unlock_code`. It automatically handles the locking and unlocking of code sections, simplifying the
code and improving readability.

In that case, the function above could be rewritten like this:

```python
m = Monitor()

def print_and_sleep():
    print('Hello')
    with m.synchronized(uid='example', max_threads=1):
        sleep(2)
    print('Goodbye')
```
##
#### @synchronized

In the previous example, we were only protecting the piece of code wrapping the `sleep` call. But what if we want to wrap
the entire function?

Of course, we could use the context manager to wrap the whole function body, and that would work just fine. Like this:

```python
m = Monitor()

def print_and_sleep():
    with m.synchronized(uid='example', max_threads=1):
        print('Hello')
        sleep(2)
        print('Goodbye')
```

But to simplify life for the programmer and improve readability, there's some syntactic sugar we could use. And this is
where the `@synchronized` decorator comes in and turns the above code into this:

```python
from parallel_utils.thread import synchronized
@synchronized()
def print_and_sleep():
    print('Hello')
    sleep(2)
    print('Goodbye')
```

Let's see the decorator prototype:

```python
@synchronized(max_threads: int = 1)
```

As you can see, the `@synchronized` decorator doesn't need an identifier, and by default it only allows one thread to enter
the function at a time. However, we can override that default behavior with the optional `max_threads` argument.

If we want, for example, to allow up to two threads to enter the function at the same time, we only need to write:

```python
@synchronized(max_threads=2)
def print_and_sleep():
    print('Hello')
    sleep(2)
    print('Goodbye')
```

Note that **this decorator has its own namespace for uids**, which is completely independent of the namespace of any
`Monitor` you instantiate.

#### Second example

> 2. It organizes a set of functions so that they follow a strict order in their execution, regardless of the thread
> from which they are called.

To achieve the second goal, the `Monitor` class includes the following pair of functions:

```python
def lock_priority_code(uid: str | int, order: int = 1, total: int = 1)

def unlock_code(uid: str | int, order: int)
```

Yes, the `unlock_code` function is the same as before. And these two functions work quite similarly to the previous
example, wrapping the code snippet that we want to control and sharing the same `uid` between them.

The main difference is that, in this case, we have to specify the `order` in which the code snippet will run and the `total`
number of functions to sync with the supplied `uid`.
Let's see an example:

```python
from time import sleep

from parallel_utils.process import Monitor, create_process

m = Monitor()

def say_hello(name):
    print('Entering hello')
    m.lock_priority_code('id1', order=1, total=2)
    print(f'Hello {name}!')
    m.unlock_code('id1', order=1)

def say_goodbye(name):
    print('Entering goodbye')
    m.lock_priority_code('id1', order=2)
    print(f'Goodbye {name}!')
    m.unlock_code('id1', order=2)

create_process(say_goodbye, 'Peter')
sleep(3)
create_process(say_hello, 'Peter')
```

This example will always print:

```
Entering goodbye
Entering hello
Goodbye Peter!
```

even if you start the `say_goodbye` function long before the `say_hello` function. This is because the snippet in
`say_goodbye` does not have the first turn, but the second, so it makes a blocking call and waits until `say_hello`
calls `unlock_code`.

The `total` argument must be supplied **at least once** in any of the calls to `lock_priority_code`.

With these two functions, you can sort the execution of as many code snippets as you need.
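Conceptually, this turn-taking can be pictured as a shared turn counter guarded by a condition variable. The following stdlib-only sketch illustrates the idea (it is not the library's code; the names `run_in_order`, `turn`, and `events` are ours):

```python
import threading
from concurrent.futures import ThreadPoolExecutor, wait

turn = 1                      # whose turn it is, like `order` in lock_priority_code
cond = threading.Condition()
events = []

def run_in_order(order, label):
    global turn
    with cond:
        cond.wait_for(lambda: turn == order)  # block until it is our turn
        events.append(label)                  # the "protected snippet"
        turn += 1                             # pass the turn on, like unlock_code
        cond.notify_all()

with ThreadPoolExecutor(max_workers=3) as pool:
    # submit in a scrambled order on purpose; execution order is still 1, 2, 3
    wait([pool.submit(run_in_order, o, f'step{o}') for o in (3, 1, 2)])

print(events)  # ['step1', 'step2', 'step3']
```

However the threads are scheduled, each snippet waits for its predecessor to release the turn, which is the same guarantee `lock_priority_code` gives you.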

Note that `lock_code` and `lock_priority_code` share the same namespace for uids, as they are methods of the same
instantiated `Monitor`, `m`.

There is also a context manager in this case, similar to before, called `synchronized_priority`, with which the above
code can be rewritten like this:

```python
m = Monitor()

def say_hello(name):
    print('Entering hello')
    with m.synchronized_priority('id1', order=1, total=2):
        print(f'Hello {name}!')

def say_goodbye(name):
    print('Entering goodbye')
    with m.synchronized_priority('id1', order=2):
        print(f'Goodbye {name}!')
```

##
#### @synchronized_priority

Similar to before, there is a decorator that we can use to wrap an entire function and set its relative order of execution
with respect to others.

This is its prototype:

```python
@synchronized_priority(uid: str | int, order: int = 1, total: int = None)
```

With it, the previous example would look like this:

```python
from parallel_utils.process import create_process, synchronized_priority
from time import sleep

@synchronized_priority('id1', order=1)
def say_hello(name):
    print('Entering hello')
    print(f'Hello {name}!')

@synchronized_priority('id1', order=2, total=2)
def say_goodbye(name):
    print('Entering goodbye')
    print(f'Goodbye {name}!')

create_process(say_goodbye, 'Peter')
sleep(3)
create_process(say_hello, 'Peter')
```

This time, we've provided the `total` argument in the second call instead of the first one. You could even supply it in both.

The above example will always print:

```
Entering hello
Hello Peter!
Entering goodbye
Goodbye Peter!
```
Note that **this decorator has its own namespace for uids**, which is completely independent of the namespace of the
`lock_priority_code` and `unlock_code` functions.

### StaticMonitor

For the convenience of programmers, a `Monitor` has already been instantiated and named `StaticMonitor`. Actually, there
are two of them, as usual: one can be imported from `parallel_utils.process` and the other from `parallel_utils.thread`.

This object saves you the need to instantiate a `Monitor`, store it in a variable, and then use it. Instead, you can just
call its methods like this:

```python
from parallel_utils.process import StaticMonitor
def say_hello(name):
    print('Entering hello')
    StaticMonitor.lock_priority_code('id1', order=1, total=2)
    print(f'Hello {name}!')
    StaticMonitor.unlock_code('id1')
```

Or better:

```python
from parallel_utils.process import StaticMonitor

def say_hello(name):
    print('Entering hello')
    with StaticMonitor.synchronized_priority('id1', order=1, total=2):
        print(f'Hello {name}!')
```

Note that this object has a unique namespace for uids that is shared among all calls to its methods.
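That shared namespace is a consequence of `StaticMonitor` being a single module-level instance. The pattern itself can be sketched in a few stdlib lines (the class and method names below are illustrative stand-ins, not the library's source):

```python
import threading

class _SketchMonitor:
    """Stand-in showing why one shared instance means one shared uid namespace."""

    def __init__(self):
        self._semaphores = {}
        self._lock = threading.Lock()

    def semaphore_for(self, uid, max_threads=1):
        # every caller of this instance sees the same uid-to-primitive mapping
        with self._lock:
            return self._semaphores.setdefault(uid, threading.BoundedSemaphore(max_threads))

StaticSketch = _SketchMonitor()  # defined once at module level; all importers share it

# the same uid always resolves to the same underlying primitive
print(StaticSketch.semaphore_for('id1') is StaticSketch.semaphore_for('id1'))  # True
```

Because Python caches modules in `sys.modules`, every `import` of such a module hands back the same instance, which is exactly what makes the uid namespace global.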

### Launching threads and processes

This library includes two very useful functions to quickly start processes and threads, and retrieve their results, which
we have already seen in our examples:

```python
def create_thread(func: Callable, *args: Any, **kwargs: Any) -> Future

def create_process(func: Callable, *args: Any, **kwargs: Any) -> Future
```

Like the rest of the classes and objects in this library, they are located in `parallel_utils.thread` and
`parallel_utils.process` respectively.

Their first argument is a `Callable` that, in turn, is called with `*args` and `**kwargs`.

They both return a [`Future`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future)
object, which encapsulates the asynchronous execution of a callable.
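In practice, that means return values come back through `Future.result()`, which blocks until the callable finishes. Here is a stdlib sketch of the same pattern using `ThreadPoolExecutor.submit`, whose call shape these helpers mirror (`add` is an illustrative function of ours):

```python
from concurrent.futures import ThreadPoolExecutor

def add(a, b):
    return a + b

with ThreadPoolExecutor() as pool:
    future = pool.submit(add, 2, 3)  # comparable to create_thread(add, 2, 3)

print(future.result())  # blocks until done, then prints 5
```

If the callable raised instead of returning, `future.result()` would re-raise that exception in the caller, which is also how errors surface from a `Future`.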
