merklize-sha

SYCL-accelerated binary merklization using SHA1, SHA2 & SHA3 (along with keccak256)

Motivation

After implementing BLAKE3 in SYCL, I decided to accelerate the 2-to-1 hash implementations of all variants of the SHA1, SHA2 & SHA3 families of cryptographic hash functions (along with keccak256). BLAKE3 lends itself well to parallelization thanks to its data-parallel-friendly algorithmic construction: each 1024-byte chunk can be compressed independently (i.e., in parallel), after which the problem reduces to binary merklization with the compressed chunks as leaf nodes of a binary merkle tree. None of SHA1, SHA2 or SHA3 (or keccak256) is data parallel in that sense; each message block (512-bit, 1024-bit, or padded up to the 1600-bit state in the case of the SHA3 family) must be processed sequentially. That is why I concentrated only on accelerating binary merklization itself, where a SHA1/SHA2/SHA3 family (2-to-1) cryptographic hash function computes all intermediate nodes of the tree when N leaf nodes are provided, with N = 2 ^ i | i = {1, 2, 3 ...}.

Each of those N leaf nodes is itself a hash digest. For example, when the SHA2-256 variant is used for computing all intermediate nodes of the binary merkle tree, each provided leaf node is 32 bytes wide, representing a SHA2-256 digest. The N leaf digests are merged into N/2 digests, the intermediate nodes living just above the leaves. In the next phase, those N/2 intermediates are used for computing the N/4 intermediates living just above them. This process continues until the root of the merkle tree is computed. Notice that within each level of the tree, every consecutive pair of digests can be hashed independently, and that is exactly the parallelism I'd like to exploit during binary merklization. In the following depiction, when N (= 4) nodes are provided as input, the two intermediates can be computed in parallel; once they're computed, the root of the tree is computed as a single task.

  ((a, b), (c, d))          < --- [Level 1] [Root]
     /       \
    /         \
 (a, b)      (c, d)         < --- [Level 2] [Intermediates]
  / \        /  \
 /   \      /    \
a     b     c     d         < --- [Level 3] [Leaves]

I'd also like you to note that the computation of the nodes at level i of the tree is data-dependent on the nodes at level (i + 1).

When N is a power of 2 and that many nodes are provided as input, (N - 1) intermediate nodes need to be computed. For that reason, the memory allocated for the output has the same size as the input. That also means the very first few bytes (= the digest size of the hash function in use) of the output allocation remain unused. To be more specific, if SHA2-224 is our choice of hash function, then the first 28 bytes of the output allocation will not be of interest, but the 28-byte chunk right after it should hold the root of the tree once the offloaded computation finishes its execution.

input   = [a, b, c, d]
output  = [0, ((a, b), (c, d)), (a, b), (c, d)]

In this repository I'm keeping binary merklization kernels implemented in SYCL, using the SHA1/SHA2/SHA3 variants (along with keccak256) as the 2-to-1 hash function; which one to use is a compile-time choice made with a pre-processor directive.

If you happen to be interested in binary merklization using the Rescue Prime hash or BLAKE3, consider the following links.

For the SHA1 and SHA2 implementations, I followed the Secure Hash Standard specification.

For the SHA3 implementations, I followed the SHA-3 Standard specification.

For the keccak256 implementation, I took some inspiration from here; note that keccak256 and sha3-256 are very similar, differing only in the input message padding rule; see the #10 PR description.
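That padding difference can be sketched as follows (the `RATE` constant and function name are illustrative, assuming the 136-byte rate used with 256-bit output; both variants follow the pad10*1 rule, and only the domain-separation byte differs):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr size_t RATE = 136; // rate in bytes for 256-bit output (illustrative)

// Pads a message shorter than one rate block into a single RATE-byte block.
// The only difference between keccak256 and sha3-256 is the byte written
// right after the message: 0x01 (keccak, bare pad10*1 rule) versus
// 0x06 (sha3, the 0b01 domain-separation suffix merged with the first pad bit).
std::array<uint8_t, RATE> pad_block(const uint8_t* msg, size_t len,
                                    uint8_t domain) {
  std::array<uint8_t, RATE> block{};
  std::memcpy(block.data(), msg, len);
  block[len] = domain;      // 0x01 for keccak256, 0x06 for sha3-256
  block[RATE - 1] |= 0x80;  // final 1 bit of the pad10*1 rule
  return block;
}
```

When the message fills all but the last byte of the block, the two pad bits share a single byte (0x81 for keccak256, 0x86 for sha3-256).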

I'm also keeping an alternative implementation of the keccak256 2-to-1 hash, in which each 64-bit lane of the keccak-p[1600, 24] state array is represented as two 32-bit unsigned integers (in bit-interleaved form), so that applying the rounds involves only 32-bit bitwise operations. This implementation takes its motivation from section 2.1 of this document. If interested, see the keccak256 2-to-1 hash implementation using a 32-bit word size, here. The same implementation guided me while writing the keccak256 2-to-1 hash function in Polygon Miden Assembly; see the PR.
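The core of the bit-interleaved representation can be sketched like this (helper names are illustrative, not the repository's code): a lane's even-indexed and odd-indexed bits are compacted into two 32-bit words, after which a 64-bit rotation becomes two 32-bit rotations (with a swap of the halves when the rotation amount is odd).

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

static uint32_t rotl32(uint32_t x, unsigned n) {
  n &= 31;
  return n == 0 ? x : (x << n) | (x >> (32 - n));
}

// Gathers the even-indexed bits of x into the low 32 bits
// (standard bit-compaction ladder).
static uint32_t gather_even(uint64_t x) {
  x &= 0x5555555555555555ULL;
  x = (x | (x >> 1))  & 0x3333333333333333ULL;
  x = (x | (x >> 2))  & 0x0F0F0F0F0F0F0F0FULL;
  x = (x | (x >> 4))  & 0x00FF00FF00FF00FFULL;
  x = (x | (x >> 8))  & 0x0000FFFF0000FFFFULL;
  x = (x | (x >> 16)) & 0x00000000FFFFFFFFULL;
  return static_cast<uint32_t>(x);
}

// Splits a 64-bit lane into (even bits, odd bits), each as a 32-bit word.
std::pair<uint32_t, uint32_t> interleave(uint64_t lane) {
  return {gather_even(lane), gather_even(lane >> 1)};
}

// rotl64(lane, n) in interleaved form: both halves rotate by n/2, and the
// halves swap when n is odd -- this is why the round function can run
// entirely on 32-bit bitwise operations.
std::pair<uint32_t, uint32_t> rotl_interleaved(uint32_t even, uint32_t odd,
                                               unsigned n) {
  if (n % 2 == 0) return {rotl32(even, n / 2), rotl32(odd, n / 2)};
  return {rotl32(odd, (n + 1) / 2), rotl32(even, n / 2)};
}
```

This matters on devices where 64-bit integer operations are emulated or slow, which is the motivation the referenced document gives for the technique.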

Using SHA1 for binary merklization may not be a good choice these days; see here. I'm still keeping the SHA1 implementation, but only as a reference.

Prerequisites

  • I'm developing on

$ lsb_release -d

Description:    Ubuntu 20.04.3 LTS
  • You should have Intel's DPCPP compiler, an open-source LLVM-based implementation of the SYCL specification; see here

$ dpcpp --version

Intel(R) oneAPI DPC++/C++ Compiler 2022.0.0 (2022.0.0.20211123)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/intel/oneapi/compiler/2022.0.2/linux/bin-llvm
  • If you're planning to target an Nvidia GPU, I suggest compiling the aforementioned toolchain from source; see here
$ clang++ --version

clang version 14.0.0 (https://github.com/intel/llvm c690ac8d771e8bb1a1be651872b782f4044d936c)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/ubuntu/sycl_workspace/llvm/build/bin
  • You will also need the make utility for easily running the compilation flow; clang-format can be helpful for source formatting.
  • Another useful tool to have is sycl-info, for quickly checking details of the available SYCL implementations; see here

Usage

If you happen to be interested in the 2-to-1 hash implementations of

where two digests of the respective hash function are input, in byte-concatenated form, to the hash( ... ) function, consider taking a look at the examples hyperlinked above.

Compile the above examples using dpcpp -fsycl example/<file>.cpp -I./include

You will probably want to see how the binary merklization kernels use these 2-to-1 hash functions; see here

Tests

I've accompanied each hash function implementation, along with the binary merklization built on it, with test cases, which can be executed as

bash run.sh

Benchmarks

For benchmarking binary merklization, I take N randomly generated leaf nodes as input, transfer them explicitly to the accelerator's memory, compute all (N - 1) intermediate nodes, and finally transfer those back to host memory. This flow is executed 8 times before averaging the kernel execution and host <-> device data transfer times for a given N.
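The averaging scheme can be sketched on the host side (the repository measures kernel and transfer times separately via SYCL event profiling; `run_merklization` below is a hypothetical stand-in for one full offload round, and wall-clock timing replaces event timestamps for this illustration):

```cpp
#include <cassert>
#include <chrono>

// Runs the given workload `rounds` times and returns the average elapsed
// wall-clock time in nanoseconds. The actual benchmarks average per-round
// kernel-execution and data-transfer times instead of one combined figure.
template <typename F>
double average_ns(F&& run_merklization, int rounds = 8) {
  using clock = std::chrono::steady_clock;
  double total = 0.0;
  for (int i = 0; i < rounds; ++i) {
    auto t0 = clock::now();
    run_merklization();
    auto t1 = clock::now();
    total += std::chrono::duration<double, std::nano>(t1 - t0).count();
  }
  return total / rounds;
}
```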

I'm keeping the binary merklization benchmark results of

obtained after executing them on multiple accelerators.