Merge pull request #63 from 0xPolygonMiden/next
v0.1.2 release
bobbinth authored Feb 17, 2023
2 parents 398af59 + 3c9a523 commit 822c52a
Showing 14 changed files with 925 additions and 552 deletions.
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -1,3 +1,9 @@
## 0.1.2 (2023-02-17)

- Fixed `Rpo256::hash` pad that was panicking on input (#44)
- Added `MerklePath` wrapper to encapsulate Merkle opening verification and root computation (#53)
- Added `NodeIndex` Merkle wrapper to encapsulate Merkle tree traversal and mappings (#54)

## 0.1.1 (2023-02-06)

- Introduced `merge_in_domain` for the RPO hash function, to allow using a specified domain value in the second capacity register when hashing two digests together.
@@ -8,6 +14,6 @@

- Initial release on crates.io containing the cryptographic primitives used in Miden VM and the Miden Rollup.
- Hash module with the BLAKE3 and Rescue Prime Optimized hash functions.
- BLAKE3 is implemented with 256-bit, 192-bit, or 160-bit output.
- RPO is implemented with 256-bit output.
- Merkle module, with a set of data structures related to Merkle trees, implemented using the RPO hash function.
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "miden-crypto"
version = "0.1.1"
version = "0.1.2"
description="Miden Cryptographic primitives"
authors = ["miden contributors"]
readme="README.md"
22 changes: 22 additions & 0 deletions README.md
@@ -13,8 +13,14 @@ For performance benchmarks of these hash functions and their comparison to other
[Merkle module](./src/merkle/) provides a set of data structures related to Merkle trees. All these data structures are implemented using the RPO hash function described above. The data structures are:

* `MerkleTree`: a regular fully-balanced binary Merkle tree. The depth of this tree can be at most 64.
* `SimpleSmt`: a Sparse Merkle Tree, mapping 63-bit keys to 4-element leaf values.
* `MerklePathSet`: a collection of Merkle authentication paths all resolving to the same root. The length of the paths can be at most 64.

The module also contains additional supporting components such as `NodeIndex`, `MerklePath`, and `MerkleError` to assist with tree indexation, opening proofs, and reporting inconsistent arguments/state.
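As a rough illustration of what such an opening verification involves, here is a toy sketch with a stand-in integer mixer in place of a real two-to-one hash (this is not this crate's RPO-based API; all names below are hypothetical):

```rust
// Toy sketch of Merkle opening verification: recompute the root from a
// leaf, its index, and the sibling path. Uses a stand-in u64 mixer in
// place of a real two-to-one hash such as RPO.

fn toy_merge(left: u64, right: u64) -> u64 {
    left.wrapping_mul(0x9E37_79B9_7F4A_7C15)
        ^ right.rotate_left(31).wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// Folds the sibling path into a candidate root; the index parity at
/// each level decides whether the current node is a left or right child.
fn compute_root(mut index: u64, leaf: u64, path: &[u64]) -> u64 {
    path.iter().fold(leaf, |node, &sibling| {
        let parent = if index & 1 == 1 {
            toy_merge(sibling, node) // odd index: node is the right child
        } else {
            toy_merge(node, sibling) // even index: node is the left child
        };
        index >>= 1; // move one level up toward the root
        parent
    })
}

fn main() {
    // depth-2 tree over four leaves
    let leaves = [10u64, 20, 30, 40];
    let n01 = toy_merge(leaves[0], leaves[1]);
    let n23 = toy_merge(leaves[2], leaves[3]);
    let root = toy_merge(n01, n23);

    // opening for leaf index 2: siblings are leaf 3, then node (0,1)
    assert_eq!(compute_root(2, leaves[2], &[leaves[3], n01]), root);
}
```

A `MerklePath`-style wrapper packages exactly this fold, so callers can check an opening against a known root or compute the root implied by a leaf.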

## Extra
[Root module](./src/lib.rs) provides a set of constants, types, aliases, and utils required to use the primitives of this library.

## Crate features
This crate can be compiled with the following features:

@@ -25,5 +31,21 @@ Both of these features imply the use of [alloc](https://doc.rust-lang.org/alloc/

To compile with `no_std`, disable default features via `--no-default-features` flag.

## Testing

You can run the tests with cargo's default settings:

```shell
cargo test
```

However, some of the functions are computationally heavy, so the tests might take a while to complete. To test in release mode, we have to replicate the conditions of development mode so that all debug assertions can still be verified.

We do that by enabling some special compilation [flags](https://doc.rust-lang.org/cargo/reference/profiles.html).

```shell
RUSTFLAGS="-C debug-assertions -C overflow-checks -C debuginfo=2" cargo test --release
```

## License
This project is [MIT licensed](./LICENSE).
83 changes: 43 additions & 40 deletions src/hash/rpo/mod.rs
@@ -94,61 +94,64 @@ impl Hasher for Rpo256 {
type Digest = RpoDigest;

fn hash(bytes: &[u8]) -> Self::Digest {
// compute the number of elements required to represent the string; we will be processing
// the string in BINARY_CHUNK_SIZE-byte chunks, thus the number of elements will be equal
// to the number of such chunks (including a potential partial chunk at the end).
let num_elements = if bytes.len() % BINARY_CHUNK_SIZE == 0 {
bytes.len() / BINARY_CHUNK_SIZE
} else {
bytes.len() / BINARY_CHUNK_SIZE + 1
};

// initialize state to all zeros, except for the first element of the capacity part, which
// is set to the number of elements to be hashed. this is done so that adding zero elements
// at the end of the list always results in a different hash.
// initialize the state with zeroes
let mut state = [ZERO; STATE_WIDTH];
state[CAPACITY_RANGE.start] = Felt::new(num_elements as u64);

// break the string into BINARY_CHUNK_SIZE-byte chunks, convert each chunk into a field
// element, and absorb the element into the rate portion of the state. we use
// BINARY_CHUNK_SIZE-byte chunks because every BINARY_CHUNK_SIZE-byte chunk is guaranteed
// to map to some field element.
let mut i = 0;
// set the capacity (first element) to a flag on whether or not the input length is evenly
// divisible by the rate. this will prevent collisions between padded and non-padded inputs,
// and will remove the need to perform an extra permutation in case of evenly divisible
// inputs.
let is_rate_multiple = bytes.len() % RATE_WIDTH == 0;
if !is_rate_multiple {
state[CAPACITY_RANGE.start] = ONE;
}

// initialize a buffer to receive the little-endian elements.
let mut buf = [0_u8; 8];
for chunk in bytes.chunks(BINARY_CHUNK_SIZE) {
if i < num_elements - 1 {

// iterate the chunks of bytes, creating a field element from each chunk and copying it
// into the state.
//
// every time the rate range is filled, a permutation is performed. if the final value of
// `i` is not zero, then the chunks count wasn't enough to fill the state range, and an
// additional permutation must be performed.
let i = bytes.chunks(BINARY_CHUNK_SIZE).fold(0, |i, chunk| {
// the last chunk of the iteration may or may not be a full chunk. if it's not, then we
// pad the remaining bytes of the chunk with a `1` followed by zeroes. this avoids
// collisions.
if chunk.len() == BINARY_CHUNK_SIZE {
buf[..BINARY_CHUNK_SIZE].copy_from_slice(chunk);
} else {
// if we are dealing with the last chunk, it may be smaller than BINARY_CHUNK_SIZE
// bytes long, so we need to handle it slightly differently. We also append a byte
// with value 1 to the end of the string; this pads the string in such a way that
// adding trailing zeros results in different hash
let chunk_len = chunk.len();
buf = [0_u8; 8];
buf[..chunk_len].copy_from_slice(chunk);
buf[chunk_len] = 1;
buf.fill(0);
buf[..chunk.len()].copy_from_slice(chunk);
buf[chunk.len()] = 1;
}

// convert the bytes into a field element and absorb it into the rate portion of the
// state; if the rate is filled up, apply the Rescue permutation and start absorbing
// again from zero index.
// set the current rate element to the input. since we take at most 7 bytes, we are
// guaranteed that the input data will fit into a single field element.
state[RATE_RANGE.start + i] = Felt::new(u64::from_le_bytes(buf));
i += 1;
if i % RATE_WIDTH == 0 {

// proceed filling the range. if it's full, then we apply a permutation and reset the
// counter to the beginning of the range.
if i == RATE_WIDTH - 1 {
Self::apply_permutation(&mut state);
i = 0;
0
} else {
i + 1
}
}
});

// if we absorbed some elements but didn't apply a permutation to them (would happen when
// the number of elements is not a multiple of RATE_WIDTH), apply the RPO permutation.
// we don't need to apply any extra padding because we injected total number of elements
// in the input list into the capacity portion of the state during initialization.
if i > 0 {
// the number of elements is not a multiple of RATE_WIDTH), apply the RPO permutation. we
// don't need to apply any extra padding because the first capacity element contains a
// flag indicating whether the input is evenly divisible by the rate.
if i != 0 {
state[RATE_RANGE.start + i..RATE_RANGE.end].fill(ZERO);
state[RATE_RANGE.start + i] = ONE;
Self::apply_permutation(&mut state);
}

// return the first 4 elements of the state as hash result
// return the first 4 elements of the rate as hash result.
RpoDigest::new(state[DIGEST_RANGE].try_into().unwrap())
}
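To make the byte-level padding concrete, here is a minimal standalone sketch of the chunk encoding (plain `u64`s rather than field elements; `encode_chunks` is an illustrative helper, not part of this crate):

```rust
// Standalone sketch of the 7-byte chunk encoding used by the sponge:
// each chunk maps to a u64 below 2^57 (hence always a valid field
// element), and a trailing partial chunk is terminated with a single
// 1-byte so inputs differing only in trailing zeros encode differently.

const BINARY_CHUNK_SIZE: usize = 7;

fn encode_chunks(bytes: &[u8]) -> Vec<u64> {
    let mut buf = [0u8; 8];
    bytes
        .chunks(BINARY_CHUNK_SIZE)
        .map(|chunk| {
            buf.fill(0);
            buf[..chunk.len()].copy_from_slice(chunk);
            if chunk.len() < BINARY_CHUNK_SIZE {
                buf[chunk.len()] = 1; // padding separator for the partial chunk
            }
            u64::from_le_bytes(buf)
        })
        .collect()
}

fn main() {
    // [0, 0, 0] and [0, 0, 0, 0] encode to distinct elements
    assert_ne!(encode_chunks(&[0; 3]), encode_chunks(&[0; 4]));
    // a full 7-byte chunk is taken as-is, with the high byte left zero
    assert_eq!(encode_chunks(&[0; 7]), vec![0]);
}
```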

39 changes: 39 additions & 0 deletions src/hash/rpo/tests.rs
@@ -2,7 +2,9 @@ use super::{
Felt, FieldElement, Hasher, Rpo256, RpoDigest, StarkField, ALPHA, INV_ALPHA, ONE, STATE_WIDTH,
ZERO,
};
use crate::utils::collections::{BTreeSet, Vec};
use core::convert::TryInto;
use proptest::prelude::*;
use rand_utils::rand_value;

#[test]
@@ -193,6 +195,43 @@ fn hash_test_vectors() {
}
}

#[test]
fn sponge_bytes_with_remainder_length_wont_panic() {
// this test asserts that hashing doesn't panic for the edge case of an input whose
// length is not divisible by the binary chunk size. 113 is a non-negligible input
// length that is prime, hence guaranteed not to be divisible by any chunk size greater
// than one.
//
// this is a preliminary check ahead of the proptest fuzz test.
Rpo256::hash(&vec![0; 113]);
}

#[test]
fn sponge_collision_for_wrapped_field_element() {
let a = Rpo256::hash(&[0; 8]);
let b = Rpo256::hash(&Felt::MODULUS.to_le_bytes());
assert_ne!(a, b);
}

#[test]
fn sponge_zeroes_collision() {
let mut zeroes = Vec::with_capacity(255);
let mut set = BTreeSet::new();
(0..255).for_each(|_| {
let hash = Rpo256::hash(&zeroes);
zeroes.push(0);
// panic if a collision was found
assert!(set.insert(hash));
});
}

proptest! {
#[test]
fn rpo256_wont_panic_with_arbitrary_input(ref vec in any::<Vec<u8>>()) {
Rpo256::hash(&vec);
}
}

const EXPECTED: [[Felt; 4]; 19] = [
[
Felt::new(1502364727743950833),
29 changes: 29 additions & 0 deletions src/lib.rs
@@ -38,3 +38,32 @@ pub const ZERO: Felt = Felt::ZERO;

/// Field element representing ONE in the Miden base field.
pub const ONE: Felt = Felt::ONE;

// TESTS
// ================================================================================================

#[test]
#[should_panic]
fn debug_assert_is_checked() {
// enforce that release-mode test runs always set `RUSTFLAGS="-C debug-assertions"`.
//
// some upstream tests are performed with `debug_assert`, and we want to assert its correctness
// downstream.
//
// for reference, check
// https://github.com/0xPolygonMiden/miden-vm/issues/433
debug_assert!(false);
}

#[test]
#[should_panic]
#[allow(arithmetic_overflow)]
fn overflow_panics_for_test() {
// overflows might be disabled if tests are performed in release mode. these are critical,
// mandatory checks as overflows might be attack vectors.
//
// to enable overflow checks in release mode, ensure `RUSTFLAGS="-C overflow-checks"`
let a = 1_u64;
let b = 64;
assert_ne!(a << b, 0);
}
126 changes: 126 additions & 0 deletions src/merkle/index.rs
@@ -0,0 +1,126 @@
use super::{Felt, MerkleError, RpoDigest, StarkField};

// NODE INDEX
// ================================================================================================

/// A Merkle tree address to an arbitrary node.
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq, PartialOrd, Ord, Hash)]
pub struct NodeIndex {
depth: u8,
value: u64,
}

impl NodeIndex {
// CONSTRUCTORS
// --------------------------------------------------------------------------------------------

/// Creates a new node index.
pub const fn new(depth: u8, value: u64) -> Self {
Self { depth, value }
}

/// Creates a node index from a pair of field elements representing the depth and value.
///
/// # Errors
///
/// Will error if the `u64` representation of the depth doesn't fit a `u8`.
pub fn from_elements(depth: &Felt, value: &Felt) -> Result<Self, MerkleError> {
let depth = depth.as_int();
let depth = u8::try_from(depth).map_err(|_| MerkleError::DepthTooBig(depth))?;
let value = value.as_int();
Ok(Self::new(depth, value))
}

/// Creates a new node index pointing to the root of the tree.
pub const fn root() -> Self {
Self { depth: 0, value: 0 }
}

/// Mutates the instance and returns it, replacing the depth.
pub const fn with_depth(mut self, depth: u8) -> Self {
self.depth = depth;
self
}

/// Computes the value of the sibling of the current node.
pub fn sibling(mut self) -> Self {
self.value ^= 1;
self
}

// PROVIDERS
// --------------------------------------------------------------------------------------------

/// Builds a node to be used as input of a hash function when computing a Merkle path.
///
/// Will evaluate the parity of the current instance to define the result.
pub const fn build_node(&self, slf: RpoDigest, sibling: RpoDigest) -> [RpoDigest; 2] {
if self.is_value_odd() {
[sibling, slf]
} else {
[slf, sibling]
}
}

/// Returns the scalar representation of the depth/value pair.
///
/// It is computed as `2^depth + value`.
pub const fn to_scalar_index(&self) -> u64 {
(1 << self.depth as u64) + self.value
}

/// Returns the depth of the current instance.
pub const fn depth(&self) -> u8 {
self.depth
}

/// Returns the value of the current depth.
pub const fn value(&self) -> u64 {
self.value
}

/// Returns true if the current value fits the current depth for a binary tree.
pub const fn is_valid(&self) -> bool {
self.value < (1 << self.depth as u64)
}

/// Returns true if the current instance points to a right sibling node.
pub const fn is_value_odd(&self) -> bool {
(self.value & 1) == 1
}

/// Returns `true` if the depth is `0`.
pub const fn is_root(&self) -> bool {
self.depth == 0
}

// STATE MUTATORS
// --------------------------------------------------------------------------------------------

/// Traverse one level towards the root, decrementing the depth by `1`.
pub fn move_up(&mut self) -> &mut Self {
self.depth = self.depth.saturating_sub(1);
self.value >>= 1;
self
}
}

#[cfg(test)]
mod tests {
use super::*;
use proptest::prelude::*;

proptest! {
#[test]
fn arbitrary_index_wont_panic_on_move_up(
depth in prop::num::u8::ANY,
value in prop::num::u64::ANY,
count in prop::num::u8::ANY,
) {
let mut index = NodeIndex::new(depth, value);
for _ in 0..count {
index.move_up();
}
}
}
}
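As a worked example of the index arithmetic documented above (plain integers, not the `NodeIndex` type itself):

```rust
// Worked example of NodeIndex-style arithmetic: the scalar index is
// 2^depth + value, the sibling flips the lowest bit, and moving up
// halves the value while decrementing the depth.

fn scalar_index(depth: u8, value: u64) -> u64 {
    (1u64 << depth) + value
}

fn main() {
    let (mut depth, mut value) = (3u8, 5u64); // depth 3, value 0b101
    assert_eq!(scalar_index(depth, value), 13); // 2^3 + 5
    assert_eq!(value ^ 1, 4); // sibling of an odd (right) node

    // move_up: shift the value right, decrement the depth
    value >>= 1;
    depth -= 1;
    assert_eq!((depth, value), (2, 2));
    assert_eq!(scalar_index(depth, value), 6); // 2^2 + 2
}
```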