# Prepare final release 1 (#32)

Merged 6 commits on Mar 6, 2024
Changes from all commits

README.md: 56 changes (34 additions, 22 deletions)
@@ -4,7 +4,7 @@

[![CI](https://github.com/HerodotusDev/hdp/actions/workflows/ci.yml/badge.svg)](https://github.com/HerodotusDev/hdp/actions/workflows/ci.yml)

- HDP stands for Herodotus Data Processor, which able to process range of block data and retrieve valid value from proving ZK-STARK proof. CLI is mainly used for process human readable request to Cairo-Program acceptable format file. Additionally some useful features supported for develop.
HDP stands for Herodotus Data Processor, which is able to process a range of block data and retrieve values whose validity is backed by a ZK-STARK proof. The CLI is mainly used to process human-readable requests into a file format acceptable to the Cairo program. Additionally, some useful development features are supported.

## Supported Features

@@ -20,20 +20,11 @@
- [x] Compile datalake 1: Fetch relevant header data and proofs from Herodotus Indexer
- [x] Compile datalake 2: Fetch relevant account and storage data and proofs from RPC provider
- [x] Compute an aggregated function (e.g. `SUM`, `AVG`) over the compiled datalake result (see the sketch after this list)
- - [x] Return general ( human readable ) and cairo formatted ( all chunked with felt size ) file
-
- ## HDP Support
-
- Note : `SUM` and `AVG` expect to get number as input.
-
- | | SUM | AVG |
- | ---------------------------- | --- | --- |
- | account.nonce | ✅ | ✅ |
- | account.balance | ✅ | ✅ |
- | account.storage_root | - | - |
- | account.code_hash | - | - |
- | storage.key ( value is num ) | ✅ | ✅ |
- | storage.key (value is hash ) | - | - |
- [x] Return general (human-readable) and Cairo-formatted (all chunked with felt size) files
- [x] Support multi-task processing, with [Standard Merkle Tree](https://github.com/rkdud007/alloy-merkle-tree/blob/main/src/standard_binary_tree.rs) aggregation
- [ ] Support more datalake types: DynamicLayoutDatalake, TransactionsBySenderDatalake, etc.
- [ ] Multichain support
- [ ] Support more ZKVMs as backend options ([CAIRO](https://eprint.iacr.org/2021/1063), [RISC0](https://github.com/risc0/risc0), [SP1](https://github.com/succinctlabs/sp1), etc.)
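As referenced in the checklist above, here is a minimal sketch of the aggregation step. The function name and signature are illustrative rather than the crate's actual API; only the function ids (`sum`, `avg`, `min`, `max`) come from the CLI itself:

```rust
/// Illustrative reduction over the values sampled from a compiled datalake.
/// `fn_id` mirrors the CLI's aggregate function ids.
fn aggregate(fn_id: &str, values: &[u64]) -> Option<u64> {
    match fn_id {
        "sum" => Some(values.iter().sum()),
        "avg" if !values.is_empty() => Some(values.iter().sum::<u64>() / values.len() as u64),
        "min" => values.iter().min().copied(),
        "max" => values.iter().max().copied(),
        _ => None, // unknown function id, or "avg" over an empty sample
    }
}
```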

## Install HDP

@@ -54,7 +45,7 @@ Note : `SUM` and `AVG` expect to get number as input.
❯ git clone https://github.com/HerodotusDev/hdp.git

# install hdp
- ❯ cargo install --path cli
❯ cargo install --path cli -f

# Run the HDP
❯ hdp run --help
@@ -88,7 +79,7 @@ Support passing argument as env variable or as arguments.
hdp run

# run herodotus data processing
- hdp run 0x0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000018000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000060617667000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006073756d00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000606d696e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000606d6178000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000000 0x00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000001800000000000000000000000000000000000000000000000000000000000000280000000000000000000000000000000000000000000000000000000000000038000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009eb0f600000000000000000000000000000000000000000000000000000000009eb100000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002010f00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009eb0f600000000000000000000000000000000000000000000000000000000009eb100000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002010f00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009eb0f600000000000000000000000000000000000000000000000000000000009eb100000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002010f00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009eb0f600000000000000000000000000000000000000000000000000000000009eb100000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002010f000000000000000000000000000000000000000000000000000000000000 https://eth-goerli.g.alchemy.com/v2/wTjM2yJBF9bitPNwk5ZGvSkwIKWtuuqm
hdp run 0x000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000006073756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000000 0x00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004b902400000000000000000000000000000000000000000000000000000000004b9027000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000016027f2c6f930306d3aa736b3a6c6a98f512f74036d40000000000000000000000 ${Input your RPC Provider -- this example is Ethereum Sepolia}

```
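For orientation: the two long hex arguments are the batched tasks and the batched datalakes, each ABI-encoded as a Solidity `bytes[]` (the ASCII-encoded function ids are visible inside them, e.g. `73756d` is `sum`), and the final argument is the JSON-RPC provider URL.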

@@ -119,12 +110,33 @@ Options:
-V, --version Print version
```

- Generate encoded task and datalake for testing purpose. The format is same as what smart contract emits (consider as batched tasks and datalakes).
Generate encoded tasks and datalakes for testing purposes. The format is the same as what the smart contract emits (treated as batched tasks and datalakes).
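A rough sketch of that batched `bytes[]` format, using the same alloy primitives as the codec in `crates/common/src/codec.rs` (see the diff further down); the helper names here are illustrative, not the crate's API:

```rust
use alloy_dyn_abi::{DynSolType, DynSolValue};
use alloy_primitives::hex;
use anyhow::{bail, Result};

// Each task/datalake is serialized on its own, then the batch is
// ABI-encoded as a Solidity `bytes[]` and "0x"-prefixed.
fn encode_batch(items: Vec<Vec<u8>>) -> String {
    let values: Vec<DynSolValue> = items.into_iter().map(DynSolValue::Bytes).collect();
    format!("0x{}", hex::encode(DynSolValue::Array(values).abi_encode()))
}

// Inverse direction, as a `decode`-style subcommand would need it.
fn decode_batch(input: &str) -> Result<Vec<Vec<u8>>> {
    let raw = hex::decode(input.trim_start_matches("0x"))?;
    match "bytes[]".parse::<DynSolType>()?.abi_decode(&raw)? {
        DynSolValue::Array(items) => Ok(items
            .into_iter()
            .filter_map(|v| v.as_bytes().map(<[u8]>::to_vec))
            .collect()),
        _ => bail!("expected a bytes[] payload"),
    }
}
```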

### Encode

- ```bash
- # e.g. hdp encode "avg" -b 10399900 10400000 "header.base_fee_per_gas" 1
Some examples:

Header value with `AVG`:

```
hdp encode "avg" -b 4952100 4952110 "header.base_fee_per_gas" 1
```

Account value with `SUM`:

```
hdp encode "sum" -b 4952100 4952110 "account.0x7f2c6f930306d3aa736b3a6c6a98f512f74036d4.nonce" 2
```

Storage value with `AVG`:

```
hdp encode "avg" -b 5382810 5382820 "storage.0x75CeC1db9dCeb703200EAa6595f66885C962B920.0x0000000000000000000000000000000000000000000000000000000000000002" 1
```
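The positional arguments map onto a block-sampled datalake. Roughly, with field names inferred from `compile_block_sampled_datalake` in the diff at the bottom of this PR (so treat the struct as an approximation):

```rust
// Approximate shape of a block-sampled datalake request.
struct BlockSampledDatalake {
    block_range_start: u64,   // -b <start>, e.g. 4952100
    block_range_end: u64,     // -b <end>, e.g. 4952110
    sampled_property: String, // e.g. "header.base_fee_per_gas"
    increment: u64,           // trailing argument: sample every n-th block
}
```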

Check out the encode command for how to generate the encoded value of the targeted task and its corresponding datalake:

```console
❯ hdp help encode
Encode the task and data lake in batched format test purposes

@@ -144,7 +156,7 @@ Options:

### Decode

- ```bash
```console
❯ hdp help decode
Decode batch tasks and data lakes

@@ -166,7 +178,7 @@ Options:

### Decode non-batched format

- ```bash
```console
❯ hdp help decode-one
Decode one task and one data lake (not batched format)

cli/src/main.rs: 107 changes (73 additions, 34 deletions)
@@ -25,19 +25,32 @@ struct Cli {

#[derive(Debug, Subcommand)]
enum Commands {
- /// Encode the task and data lake in batched format test purposes
/// Encode the task and datalake in batched format test purposes
#[command(arg_required_else_help = true)]
Encode {
/// Decide whether to run the evaluator as a follow-up step (default: false)
#[arg(short, long, action = clap::ArgAction::SetTrue)]
allow_run: bool,

/// The aggregate function id e.g. "sum", "min", "avg"
aggregate_fn_id: String,
/// The aggregate function context. It depends on the aggregate function
aggregate_fn_ctx: Option<String>,
#[command(subcommand)]
command: DataLakeCommands,

/// The RPC URL to fetch the data
rpc_url: Option<String>,
/// Path to the file to save the output result
#[arg(short, long)]
output_file: Option<String>,
/// Path to the file to save the input.json in cairo format
#[arg(short, long)]
cairo_input: Option<String>,
},
- /// Decode batch tasks and data lakes
/// Decode batch tasks and datalakes
///
- /// Note: Batch tasks and data lakes should be encoded in bytes[] format
/// Note: Batch tasks and datalakes should be encoded in bytes[] format
#[command(arg_required_else_help = true)]
Decode {
/// Batched tasks bytes
@@ -46,7 +59,7 @@ enum Commands {
datalakes: String,
},

- /// Decode one task and one data lake (not batched format)
/// Decode one task and one datalake (not batched format)
#[command(arg_required_else_help = true)]
DecodeOne { task: String, datalake: String },
/// Run the evaluator
@@ -82,13 +95,55 @@ enum DataLakeCommands {
},
}

async fn handle_run(
tasks: Option<String>,
datalakes: Option<String>,
rpc_url: Option<String>,
output_file: Option<String>,
cairo_input: Option<String>,
) {
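// Any argument left as None is presumably resolved from env vars inside Config::init (main loads dotenv before dispatching).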
let start_run = std::time::Instant::now();
let config = Config::init(rpc_url, datalakes, tasks).await;
let abstract_fetcher = AbstractFetcher::new(config.rpc_url.clone());
let tasks = tasks_decoder(config.tasks.clone()).unwrap();
let datalakes = datalakes_decoder(config.datalakes.clone()).unwrap();

println!("tasks: \n{:?}\n", tasks);
println!("datalakes: \n{:?}\n", datalakes);

if tasks.len() != datalakes.len() {
panic!("Tasks and datalakes must have the same length");
}

let res = evaluator(
tasks,
Some(datalakes),
Arc::new(RwLock::new(abstract_fetcher)),
)
.await
.unwrap();

let duration_run = start_run.elapsed();
println!("Time elapsed in run evaluator is: {:?}", duration_run);

if let Some(output_file) = output_file {
res.save_to_file(&output_file, false).unwrap();
}
if let Some(cairo_input) = cairo_input {
res.save_to_file(&cairo_input, true).unwrap();
}
}
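This helper is called from both the `Run` arm and, when `--allow-run` is set, from the `Encode` arm below, so the two paths decode, evaluate, and save results identically.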

#[tokio::main]
async fn main() {
let start = std::time::Instant::now();
let cli = Cli::parse();
dotenv::dotenv().ok();
match cli.command {
Commands::Encode {
allow_run,
rpc_url,
output_file,
cairo_input,
aggregate_fn_id,
aggregate_fn_ctx,
command,
Expand Down Expand Up @@ -118,6 +173,18 @@ async fn main() {
println!("Original task: \n{:?}\n", tasks);
let encoded_task = tasks_encoder(vec![tasks]).unwrap();
println!("Encoded task: \n{}\n", encoded_task);

// if allow_run is true, then run the evaluator
if allow_run {
handle_run(
Some(encoded_task),
Some(encoded_datalake),
rpc_url,
output_file,
cairo_input,
)
.await;
}
}
Commands::Decode { tasks, datalakes } => {
let datalakes = datalakes_decoder(datalakes.clone()).unwrap();
@@ -144,35 +211,7 @@ async fn main() {
output_file,
cairo_input,
} => {
- let config = Config::init(rpc_url, datalakes, tasks).await;
- let abstract_fetcher = AbstractFetcher::new(config.rpc_url.clone());
- let tasks = tasks_decoder(config.tasks.clone()).unwrap();
- let datalakes = datalakes_decoder(config.datalakes.clone()).unwrap();
-
- println!("tasks: \n{:?}\n", tasks);
- println!("datalakes: \n{:?}\n", datalakes);
-
- if tasks.len() != datalakes.len() {
- panic!("Tasks and datalakes must have the same length");
- }
-
- let res = evaluator(
- tasks,
- Some(datalakes),
- Arc::new(RwLock::new(abstract_fetcher)),
- )
- .await
- .unwrap();
-
- let duration = start.elapsed();
- println!("Time elapsed in main() is: {:?}", duration);
-
- if let Some(output_file) = output_file {
- res.save_to_file(&output_file, false).unwrap();
- }
- if let Some(cairo_input) = cairo_input {
- res.save_to_file(&cairo_input, true).unwrap();
- }
handle_run(tasks, datalakes, rpc_url, output_file, cairo_input).await;
}
}
}
crates/common/src/codec.rs: 8 changes (3 additions, 5 deletions)
@@ -5,7 +5,7 @@ use crate::{
task::ComputationalTask,
};
use alloy_dyn_abi::{DynSolType, DynSolValue};
- use alloy_primitives::hex::{self, FromHex};
use alloy_primitives::hex::FromHex;
use anyhow::{bail, Ok, Result};

/// Decode a batch of tasks
@@ -100,8 +100,7 @@ pub fn datalakes_encoder(datalakes: Vec<Datalake>) -> Result<String> {

let array_encoded_datalakes = DynSolValue::Array(encoded_datalakes);
let encoded_datalakes = array_encoded_datalakes.abi_encode();
- let hex_string = hex::encode(encoded_datalakes);
- Ok(format!("0x{}", hex_string))
Ok(bytes_to_hex_string(&encoded_datalakes))
}

/// Encode batch of tasks
@@ -116,6 +115,5 @@ pub fn tasks_encoder(tasks: Vec<ComputationalTask>) -> Result<String> {

let array_encoded_tasks = DynSolValue::Array(encoded_tasks);
let encoded_tasks = array_encoded_tasks.abi_encode();
- let hex_string = hex::encode(encoded_tasks);
- Ok(format!("0x{}", hex_string))
Ok(bytes_to_hex_string(&encoded_tasks))
}
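`bytes_to_hex_string` is defined outside this hunk; judging from the call sites it replaces, it is presumably equivalent to:

```rust
use alloy_primitives::hex;

// Presumed helper: same behavior as the removed inline
// `format!("0x{}", hex::encode(..))` calls, centralized in one place.
pub fn bytes_to_hex_string(bytes: &[u8]) -> String {
    format!("0x{}", hex::encode(bytes))
}
```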
crates/common/src/compiler/block_sampled.rs: 2 changes (1 addition, 1 deletion)
@@ -19,7 +19,7 @@ pub async fn compile_block_sampled_datalake(
block_range_end: u64,
sampled_property: &str,
increment: u64,
- fetcher: Arc<RwLock<AbstractFetcher>>,
fetcher: &Arc<RwLock<AbstractFetcher>>,
) -> Result<DatalakeResult> {
let mut abstract_fetcher = fetcher.write().await;
let property_parts: Vec<&str> = sampled_property.split('.').collect();
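On the signature change above: borrowing the `Arc` lets a caller that compiles several datalakes reuse one handle without an `Arc::clone` per call. A minimal sketch of the pattern, with the fetcher type stubbed out:

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

struct AbstractFetcher; // stub, for illustration only

async fn compile_one(fetcher: &Arc<RwLock<AbstractFetcher>>) {
    // Same access pattern as the real function: take the write lock per call.
    let _fetcher = fetcher.write().await;
    // ... fetch headers / accounts / storage here ...
}

#[tokio::main]
async fn main() {
    let fetcher = Arc::new(RwLock::new(AbstractFetcher));
    compile_one(&fetcher).await; // caller keeps ownership;
    compile_one(&fetcher).await; // no per-call Arc::clone needed
}
```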