Warning: This codebase is experimental and not audited. Use at your own risk.
The HDP Solidity contracts interface with the Herodotus Data Processor (HDP) to authenticate and store processed results securely on-chain. These contracts facilitate complex data processing tasks and result validation using cryptographic proofs. For more, visit our documentation.
`HdpExecutionStore` is the main contract in this project. It manages the execution and verification of computational tasks over various data lakes, such as the block-sampled and transactions-in-block data lakes. The contract integrates multiple functionalities:
- Task Scheduling and Result Caching: Allows scheduling of tasks and caching of intermediate and final results.
- Merkle Proof Verification: Utilizes Merkle proofs to ensure the integrity of task results and batch inclusions.
- Integration with External Fact Registries and Aggregators: Verifies task computations against a set of pre-registered facts using the SHARP Facts Registry and coordinates with data aggregators.
Key functions:

- `requestExecutionOfTaskWithBlockSampledDatalake()`: Schedules a datalake task using the block-sampled data lake.
- `requestExecutionOfTaskWithTransactionsInBlockDatalake()`: Schedules a datalake task using the transactions-in-block data lake.
- `requestExecutionOfModuleTask()`: Schedules a module task.
- `authenticateTaskExecution()`: Verifies and finalizes the execution of computational tasks by validating Merkle proofs and registered facts.
- `getFinalizedTaskResult()`: Retrieves the results of finalized tasks.
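As a rough illustration of how a downstream contract might consume finalized results, here is a minimal sketch. The function name `getFinalizedTaskResult()` comes from the list above, but the parameter and return types (a `bytes32` task commitment in, a `bytes32` result out) are assumptions for illustration, not the contract's actual ABI.

```solidity
// SPDX-License-Identifier: GPL-3.0
pragma solidity ^0.8.4;

// Hypothetical minimal interface: the function name is taken from this README,
// but the parameter and return types are assumptions for illustration only.
interface IHdpExecutionStore {
    function getFinalizedTaskResult(bytes32 taskCommitment) external view returns (bytes32);
}

contract HdpResultConsumer {
    IHdpExecutionStore public immutable hdp;

    constructor(address hdpExecutionStore) {
        hdp = IHdpExecutionStore(hdpExecutionStore);
    }

    // Reads the result of a task that has already been authenticated on-chain.
    function readResult(bytes32 taskCommitment) external view returns (bytes32) {
        return hdp.getFinalizedTaskResult(taskCommitment);
    }
}
```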
The contract also relies on the following external components:

- `FactsRegistry`: Manages facts for task verification. More info
- `SharpFactsAggregator`: Aggregates jobs. More info
- `AggregatorsFactory`: Factory pattern to create data aggregators. More info
- ComputationalTask:
  - Defines tasks that perform aggregate functions on the data retrieved from data lakes.
  - Encoded and committed using `ComputationalTaskCodecs`, ensuring that tasks are securely and efficiently processed.
  - Supported functions include average, sum, min, max, count, and Merkle proof aggregation, with various operators for conditional processing.
- BlockSampledDatalake:
  - Structure used for defining data samples over a range of blocks.
  - Encoded through `BlockSampledDatalakeCodecs`, which manages the serialization and commitment of the data structures.
  - The `commit()` function creates a hash of the encoded datalake, used for verifying integrity and registering tasks.

Example:

```solidity
BlockSampledDatalake memory datalake = BlockSampledDatalake({
    chainId: 11155111,
    blockRangeStart: 5858987,
    blockRangeEnd: 5858997,
    increment: 2,
    sampledProperty: BlockSampledDatalakeCodecs.encodeSampledPropertyForHeaderProp(uint8(18))
});

ComputationalTask memory computationalTask = ComputationalTask({
    aggregateFnId: AggregateFn.COUNT,
    operatorId: Operator.GT,
    valueToCompare: uint256(10000000)
});
```
- TransactionsInBlockDatalake:
  - Structure used for defining transactions included in the target block.
  - Encoded through `TransactionsInBlockDatalakeCodecs`, which manages the serialization and commitment of the data structures.
  - The `commit()` function creates a hash of the encoded datalake, used for verifying integrity and registering tasks.

Example:

```solidity
TransactionsInBlockDatalake memory datalake = TransactionsInBlockDatalake({
    chainId: 11155111,
    targetBlock: uint256(5605816),
    startIndex: uint256(12),
    endIndex: uint256(53),
    increment: uint256(1),
    includedTypes: uint256(0x00000101),
    sampledProperty: TransactionsInBlockDatalakeCodecs.encodeSampledPropertyFortxReceipt(uint8(0))
});

ComputationalTask memory computationalTask =
    ComputationalTask({aggregateFnId: AggregateFn.COUNT, operatorId: Operator.GT, valueToCompare: uint256(50)});
```
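As with the block-sampled example, the request itself might be a single call (again, the parameter list is an assumption, not a confirmed signature):

```solidity
// Hypothetical call; the parameter list is an assumption for illustration.
hdp.requestExecutionOfTaskWithTransactionsInBlockDatalake(datalake, computationalTask);
```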
- ModuleTask:
  - Define the program hash of the target module and the corresponding inputs as an array.

Example:

```solidity
bytes32[] memory moduleInputs = new bytes32[](2);
moduleInputs[0] = bytes32(uint256(5382820));
moduleInputs[1] = bytes32(uint256(113007187165825507614120510246167695609561346261));

ModuleTask memory moduleTask = ModuleTask({
    programHash: bytes32(0x064041a339b1edd10de83cf031cfa938645450f971d2527c90d4c2ce68d7d412),
    inputs: moduleInputs
});
```
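Scheduling the module task presumably goes through `requestExecutionOfModuleTask()`; the snippet below assumes it takes the `ModuleTask` struct directly, which is not confirmed by this README.

```solidity
// Hypothetical call; the parameter is an assumption for illustration.
hdp.requestExecutionOfModuleTask(moduleTask);
```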
The codec libraries provide:

- `encode()`: Serializes data structures for transmission or storage.
- `commit()`: Generates cryptographic commitments of data, essential for task verification and integrity checks.
- `decode()`: Converts serialized data back into structured formats.
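For example, the block-sampled datalake constructed earlier could be serialized and committed roughly as follows. The exact call style (free library calls versus `using ... for` attachment) and the precise return types are assumptions for illustration.

```solidity
// Hypothetical usage of the codec functions listed above.
bytes memory encodedDatalake = BlockSampledDatalakeCodecs.encode(datalake);
bytes32 datalakeCommitment = BlockSampledDatalakeCodecs.commit(datalake);

// decode() is assumed to invert encode(), recovering the struct.
BlockSampledDatalake memory decoded = BlockSampledDatalakeCodecs.decode(encodedDatalake);
```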
Pre-requisites:
- Solidity (with solc >= 0.8.4)
- Foundry
- pnpm
Make sure to have a `.env` file configured with the variables defined in `.env.example`, then run:

```bash
source .env; forge script script/HdpExecutionStore.s.sol:HdpExecutionStoreDeployer --rpc-url $DEPLOY_RPC_URL --broadcast --verify -vvvv --via-ir
```
For a one-time `hdp` binary installation:

```bash
make hdp-install
```

For a one-time local Cairo environment setup:

```bash
make cairo-install && make cairo1-install
```

To get the Cairo PIE that is used in testing, run:

```bash
make cairo-run
```
Now you can run the tests from the setup above:

```bash
# Install submodules
forge install

# Build contracts
forge build

# Test
forge test
```
To test with a different version of the Cairo program, compile it and place it in `build/compiled_cairo/`. Make sure to regenerate the corresponding PIE from the modified Cairo program:

```bash
make cairo-run
```

Then run the test for the modified program:

```bash
# Test
forge test
```
Use the command:

```bash
make hdp-run
```

If you want to fetch different input, generate `input.json` and `output.json` using the hdp CLI, or get them from the hdp-test fixtures. Modify the input and output files located in `build/compiled_cairo/`. Also, in the test file, construct the corresponding datalake and task instance before initiating the request.

Then run the test for the modified request:

```bash
# Test
forge test
```
`hdp-solidity` is licensed under the GNU General Public License v3.0.
Herodotus Dev Ltd - 2024