Merge pull request #638 from SRI-International/master
Merge latest from master
rtvuser1 authored Oct 1, 2024
2 parents 00ba47f + 6ef6304 commit e1d84cd
Showing 71 changed files with 52,304 additions and 2,451 deletions.
11 changes: 9 additions & 2 deletions .github/workflows/on_push.yml
@@ -18,11 +18,18 @@ jobs:
       with:
         python-version: ${{ matrix.python-version }}
 
-      - name: Install dependencies
+      - name: Install hydrogen lattice dependencies
         run: pip install -r hydrogen-lattice/qiskit/requirements.txt
         working-directory: ./
 
-      - name: Run tests
+      - name: Install hamiltonian simulation dependencies
+        run: pip install -r hamiltonian-simulation/qiskit/requirements.txt
+        working-directory: ./
+
+      - name: Run hydrogen lattice tests
         run: pytest
         working-directory: ./hydrogen-lattice/qiskit
 
+      - name: Run hamiltonian simulation tests
+        run: pytest
+        working-directory: ./hamiltonian-simulation/qiskit
1 change: 1 addition & 0 deletions .gitignore
@@ -1,6 +1,7 @@
# Python
# Byte-compiled / optimized / DLL files
__pycache__/
+downloaded_hamlib_files/
*.py[cod]
*$py.class

19 changes: 12 additions & 7 deletions README.md
@@ -8,18 +8,22 @@ The repository is maintained by members of the Quantum Economic Development Consortium

A variety of "reference applications" are provided. At the current stage in the evolution of quantum computing hardware, some applications will perform better on one hardware target, while a completely different set may execute better on another. They are designed to give users a quantum "jump start", so to speak, eliminating the need to develop from scratch the uniform code patterns that facilitate quick development, deployment, and experimentation.

-The QED-C committee that developed these benchmarks released a paper (Oct 2021) describing the theory and methodology supporting this work at
+The QED-C committee released its first paper (Oct 2021) describing the theory and methodology supporting this work at

    [Application-Oriented Performance Benchmarks for Quantum Computing](https://arxiv.org/abs/2110.03137)

The QED-C committee released a second paper (Feb 2023) describing the addition of combinatorial optimization problems as advanced application-oriented benchmarks at

    [Optimization Applications as Quantum Performance Benchmarks](https://arxiv.org/abs/2302.02278)

-Recently, the group recently another paper (Feb 2024) with additional benchmark programs and improvements to the framework at
+The group added another paper (Feb 2024) with additional benchmark programs and improvements to the framework at

    [Quantum Algorithm Exploration using Application-Oriented Performance Benchmarks](https://arxiv.org/abs/2402.08985)

+Recently, the group released a fourth paper (Sep 2024) with a deep focus on measuring the performance of quantum Hamiltonian simulations at
+
+    [A Comprehensive Cross-Model Framework for Benchmarking the Performance of Quantum Hamiltonian Simulations](https://arxiv.org/abs/2409.06919)
+
See the [Implementation Status](#implementation-status) section below for the latest report on benchmarks implemented to date.

## Notes on Repository Organization
@@ -33,6 +33,7 @@ The directory names and the currently supported environments are:
qiskit -- IBM Qiskit
cirq -- Google Cirq
braket -- Amazon Braket
+cudaq -- NVIDIA CUDA-Q (WIP)
ocean -- D-Wave Ocean
```
The goal has been to make the implementation of each algorithm identical across the different target environments, with the processing and reporting of results as similar as possible. Each application directory includes a README file with information specific to that application or algorithm. Below we list the benchmarks we have implemented, with a suggested order of approach; the benchmarks in levels 1 and 2 are simpler and a good place to start for beginners, while levels 3 and 4 are more complicated and build on the intuition and reasoning developed in the earlier algorithms. Level 5 includes newly released benchmarks based on iterative execution within hybrid algorithms (a minimal invocation sketch follows the list below).
@@ -43,7 +48,7 @@ Complexity of Benchmark Algorithms (Increasing Difficulty)
1: Deutsch-Jozsa, Bernstein-Vazirani, Hidden Shift
2: Quantum Fourier Transform, Grover's Search
3: Phase Estimation, Amplitude Estimation, HHL Linear Solver
-4: Monte Carlo, Hamiltonian Simulation, Variational Quantum Eigensolver, Shor's Order Finding
+4: Monte Carlo, Hamiltonian (and HamLib) Simulation, Variational Quantum Eigensolver, Shor's Order Finding Algorithm
5: MaxCut, Hydrogen-Lattice
```
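
For a quick start, here is a minimal sketch of launching one of the level-1 benchmarks from Python. The module path and the `run()` parameters shown are illustrative assumptions; consult each benchmark's README for its actual interface:

```python
import sys

# make the Qiskit version of the Bernstein-Vazirani benchmark importable
# (path and module name are assumptions in this sketch)
sys.path.insert(1, "bernstein-vazirani/qiskit")
import bv_benchmark

# sweep circuit widths from 3 to 8 qubits at 1000 shots each
# (parameter names are assumptions in this sketch)
bv_benchmark.run(min_qubits=3, max_qubits=8, num_shots=1000)
```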

@@ -59,7 +64,7 @@ In addition to the application directories at the highest level, there are several

## Setup and Configuration

-The prototype benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.
+The benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.

In the [`Preparing to Run Benchmarks`](./_setup/) section you will find a subdirectory for each of the target environments that contains a README with everything you need to know to install and configure the specific environment in which you would like to run.

@@ -115,7 +120,7 @@ The second cell of the Jupyter Notebook contains commented code with references

Applications are often deployed into Container Management Frameworks such as Docker, Kubernetes, and the like.

-The Prototype Benchmarks repository includes support for the creation of a unique *'container image'* for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named [**`_containerbuildfiles`**](./_containerbuildfiles/).
+The Application-Oriented Benchmarks repository includes support for the creation of a unique *'container image'* for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named [**`_containerbuildfiles`**](./_containerbuildfiles/).
The benchmark program image can be deployed into a container management framework and executed as any other application in that framework.

Once built, deployed, and launched, the container process invokes a Jupyter Notebook from which you can run all the available benchmarks.
@@ -129,9 +134,9 @@ Once built, deployed, and launched, the container process invokes a Jupyter Notebook

## Implementation Status

-Below is a table showing the degree to which the benchmarks have been implemented in each of the target platforms (as of the last update to this branch):
+Below is a table showing the degree to which the benchmarks have been implemented in each of the target frameworks (as of the last update to this branch):

-![Prototype Benchmarks - Implementation Status](./_doc/images/proto_benchmarks_status.png)
+![Application-Oriented Benchmarks - Implementation Status](./_doc/images/proto_benchmarks_status.png)



101 changes: 101 additions & 0 deletions _common/braket/execute.py
@@ -54,6 +54,19 @@

verbose = False

# Print additional time metrics for each stage of execution
verbose_time = False

import logging
# logger for this module
logger = logging.getLogger(__name__)

# Option to compute normalized depth during execution (can disable to reduce overhead in large circuits)
use_normalized_depth = True

# Option to perform explicit transpile to collect depth metrics
do_transpile_metrics = True

# Special object class to hold job information; used as a dict key
class Job:
    pass
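
# Note: a bare class such as Job works as a dict key because Python objects
# hash by identity by default. A usage sketch (hypothetical values, mirroring
# the pattern in execute_circuit() below):
#
#   job = Job()
#   job.result = braket_execute(circuit, shots)
#   active_circuits[job] = {"group": "3", "circuit": "0"}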
@@ -65,6 +78,8 @@ def init_execution(handler):
    active_circuits.clear()
    result_handler = handler

    # On initialization, always set transpile-for-metrics and transpile-for-execute to True
    set_tranpilation_flags(do_transpile_metrics=True, do_transpile_for_execute=True)

# Set the backend for execution
def set_execution_target(backend_id='simulator'):
@@ -134,6 +149,37 @@ def execute_circuit(batched_circuit):

    # Initiate execution (currently, waits for completion)
    job = Job()
    circuit = batched_circuit["qc"]

    # obtain initial circuit metrics
    qc_depth, qc_size, qc_count_ops = get_circuit_metrics(circuit)

    # default the normalized transpiled metrics to the same, in case exec fails
    qc_tr_depth = qc_depth
    qc_tr_size = qc_size
    qc_tr_count_ops = qc_count_ops
    #print(f"... before tp: {qc_depth} {qc_size} {qc_count_ops}")

    try:
        # transpile the circuit to obtain size metrics using normalized basis
        if do_transpile_metrics and use_normalized_depth:
            qc_tr_depth, qc_tr_size, qc_tr_count_ops = transpile_for_metrics(circuit)

        # we want to ignore elapsed time contribution of transpile for metrics (normalized depth)
        active_circuit["launch_time"] = time.time()

    except Exception as e:
        print(f'ERROR: Failed to execute circuit {active_circuit["group"]} {active_circuit["circuit"]}')
        print(f"... exception = {e}")
        return

    # store circuit dimensional metrics
    metrics.store_metric(active_circuit["group"], active_circuit["circuit"], 'depth', qc_depth)
    metrics.store_metric(active_circuit["group"], active_circuit["circuit"], 'size', qc_size)

    metrics.store_metric(active_circuit["group"], active_circuit["circuit"], 'tr_depth', qc_tr_depth)
    metrics.store_metric(active_circuit["group"], active_circuit["circuit"], 'tr_size', qc_tr_size)

    job.result = braket_execute(batched_circuit["qc"], batched_circuit["shots"])

    # put job into the active circuits with circuit info
@@ -276,3 +322,58 @@ def braket_execute(qc, shots=100):
# Test circuit execution
def test_execution():
    pass

###############
# Get circuit metrics from the circuit passed in
def get_circuit_metrics(qc):

    logger.info('Entering get_circuit_metrics')
    # print(qc)

    # obtain initial circuit size metrics
    qc_depth = qc.depth
    qc_size = len(qc.instructions)    # total gate operations
    qc_count_ops = count_operations(qc)

    return qc_depth, qc_size, qc_count_ops


def count_operations(circuit):
    operation_counts = {}

    for instruction in circuit.instructions:
        operation = instruction.operator.name
        if operation in operation_counts:
            operation_counts[operation] += 1
        else:
            operation_counts[operation] = 1

    return operation_counts
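
# For reference, a minimal equivalent of count_operations() using the standard
# library (a sketch; this helper is not called anywhere in this module):

from collections import Counter

def count_operations_alt(circuit):
    # tally gate names exactly as count_operations() does above
    return dict(Counter(instruction.operator.name for instruction in circuit.instructions))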


######
# Set the state of the transpilation flags
def set_tranpilation_flags(do_transpile_metrics=True, do_transpile_for_execute=True):
    globals()['do_transpile_metrics'] = do_transpile_metrics
    globals()['do_transpile_for_execute'] = do_transpile_for_execute
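
# Usage sketch (illustrative, mirroring the call in init_execution() above):
# callers can disable the extra metrics transpile for very large circuits:
#
#   set_tranpilation_flags(do_transpile_metrics=False, do_transpile_for_execute=True)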

######
# Transpile the circuit to obtain normalized size metrics against a common basis gate set
def transpile_for_metrics(qc):

    logger.info('Entering transpile_for_metrics')
    #print("*** Before transpile ...")
    #print(qc)
    st = time.time()

    # Keep transpiled depth and algorithmic depth the same, to support the volumetric positioning plot
    qc_tr_depth = qc.depth
    qc_tr_size = len(qc.instructions)    # total gate operations
    qc_tr_count_ops = count_operations(qc)
    # print(f"*** after transpile: 'qc_tr_depth' {qc_tr_depth} 'qc_tr_size' {qc_tr_size} 'qc_tr_count_ops' {qc_tr_count_ops}\n")

    logger.info(f'transpile_for_metrics - {round(time.time() - st, 5)} (sec)')
    if verbose_time: print(f"  *** transpile_for_metrics() time = {round(time.time() - st, 5)}")

    return qc_tr_depth, qc_tr_size, qc_tr_count_ops
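
# Note: on this Braket target, transpile_for_metrics() reports the algorithmic
# depth as the normalized depth, since no transpilation is actually performed.
# For comparison, a minimal sketch of deriving a normalized depth against a
# fixed basis with Qiskit (assumes the circuit is available as a Qiskit
# QuantumCircuit; the basis gate set is an assumption in this sketch):

def normalized_depth_sketch(qiskit_circuit):
    from qiskit import transpile
    tr_qc = transpile(qiskit_circuit, basis_gates=['rx', 'ry', 'rz', 'cx'], seed_transpiler=0)
    return tr_qc.depth()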
