Merge latest master into upgrade-1.2 branch #632

Merged
merged 30 commits · Oct 1, 2024
Commits
84e816a
Add mergemetrics option
rtvuser1 Sep 27, 2024
56bad0c
Merge pull request #622 from SRI-International/compiler-opt-changes-TL
rtvuser1 Sep 27, 2024
26822c5
Modify name of qft circuit to qft_ for qiskit 1.2 issue 13174
rtvuser1 Sep 27, 2024
7fbe9c4
Merge pull request #625 from SRI-International/compiler-opt-changes-TL
rtvuser1 Sep 27, 2024
47e62ec
Convert spaces to tabs
rtvuser1 Sep 27, 2024
f27fe6e
Update README.md
rtvuser1 Sep 28, 2024
cc9cfb6
Update Implementation status
rtvuser1 Sep 28, 2024
5428f1c
Make Implementation status a little smaller (720 wide)
rtvuser1 Sep 28, 2024
d151802
Make Implementation status image smaller (640)
rtvuser1 Sep 28, 2024
0309ca1
Update README.md
rtvuser1 Sep 28, 2024
98614fc
Update README.md
rtvuser1 Sep 28, 2024
627c9ca
Merge pull request #628 from SRI-International/master
rtvuser1 Sep 28, 2024
5e4c5ad
Refine the top-level cuda-q benchmark notebook to contain 3 working b…
rtvuser1 Sep 28, 2024
20a899c
Show how to add a unique suffix to the backend_id
rtvuser1 Sep 28, 2024
268f719
Identify execution target when it is set at start of each benchmark
rtvuser1 Sep 28, 2024
ae09fd1
Fix bad import in _common.custom.custom_qiskit_noise_model
rtvuser1 Sep 28, 2024
0d32900
Fix mistake in check for valid backend_id
rtvuser1 Sep 28, 2024
46c0c97
For cudaq, set the default noise model to a depolarization of 0.04
rtvuser1 Sep 28, 2024
562966e
Add noise model examples to the cudaq benchmarks notebook
rtvuser1 Sep 28, 2024
87d7380
Reduce limit for circuit drawing to 6 qubits instead of 9
rtvuser1 Sep 28, 2024
8d5fd23
Cleanup import references to metrics and execute
rtvuser1 Sep 28, 2024
c7ed6e1
Fix error in setting of paths for metrics and execute
rtvuser1 Sep 29, 2024
b5a12ae
Merge pull request #629 from SRI-International/compiler-opt-changes-TL
rtvuser1 Sep 29, 2024
7322318
Modify how sys paths are configured for the new benchmarks to avoid du…
rtvuser1 Sep 29, 2024
7d74cfb
Go back to import metrics and execute approach
rtvuser1 Sep 29, 2024
16d8875
Merge pull request #630 from SRI-International/compiler-opt-changes-TL
rtvuser1 Sep 29, 2024
f99b40c
Remove dashes in the QFT benchmark name
rtvuser1 Sep 29, 2024
81ead69
Merge pull request #631 from SRI-International/compiler-opt-changes-TL
rtvuser1 Sep 29, 2024
39f431d
replace a tab with spaces to permit merging of another PR
rtvuser1 Oct 1, 2024
e9e3823
Merge pull request #633 from SRI-International/compiler-opt-changes-TL
rtvuser1 Oct 1, 2024
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
19 changes: 12 additions & 7 deletions README.md
@@ -8,18 +8,22 @@ The repository is maintained by members of the Quantum Economic Development Consortium

A variety of "reference applications" are provided. At the current stage in the evolution of quantum computing hardware, some applications will perform better on one hardware target, while a completely different set may execute better on another. They are designed to give users a quantum "jump start," eliminating the need to develop on their own the uniform code patterns that facilitate quick development, deployment, and experimentation.

- The QED-C committee that developed these benchmarks released a paper (Oct 2021) describing the theory and methodology supporting this work at
+ The QED-C committee released its first paper (Oct 2021) describing the theory and methodology supporting this work at

    [Application-Oriented Performance Benchmarks for Quantum Computing](https://arxiv.org/abs/2110.03137)

The QED-C committee released a second paper (Feb 2023) describing the addition of combinatorial optimization problems as advanced application-oriented benchmarks at

    [Optimization Applications as Quantum Performance Benchmarks](https://arxiv.org/abs/2302.02278)

- Recently, the group recently another paper (Feb 2024) with additional benchmark programs and improvements to the framework at
+ The group added another paper (Feb 2024) with additional benchmark programs and improvements to the framework at

    [Quantum Algorithm Exploration using Application-Oriented Performance Benchmarks](https://arxiv.org/abs/2402.08985)

+ Recently, the group released a fourth paper (Sep 2024) with a deep focus on measuring the performance of Quantum Hamiltonian Simulations at

+     [A Comprehensive Cross-Model Framework for Benchmarking the Performance of Quantum Hamiltonian Simulations](https://arxiv.org/abs/2409.06919)

See the [Implementation Status](#implementation-status) section below for the latest report on benchmarks implemented to date.

## Notes on Repository Organization
@@ -33,6 +37,7 @@ The directory names and the currently supported environments are:
qiskit -- IBM Qiskit
cirq -- Google Cirq
braket -- Amazon Braket
+ cudaq -- NVIDIA CUDA-Q (WIP)
ocean -- D-Wave Ocean
```
The goal has been to make the implementation of each algorithm identical across the different target environments, with the processing and reporting of results as similar as possible. Each application directory includes a README file with information specific to that application or algorithm. Below we list the benchmarks we have implemented with a suggested order of approach; the benchmarks in levels 1 and 2 are simpler and a good place to start for beginners, while levels 3 and 4 are more complicated and might build off of intuition and reasoning developed in earlier algorithms. Level 5 includes newly released benchmarks based on iterative execution done within hybrid algorithms.
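To make that uniformity concrete, here is a minimal sketch of the common invocation pattern, assuming each benchmark directory exposes a `run()` entry point with sweep parameters such as `min_qubits`, `max_qubits`, `num_shots`, and `backend_id`. The module path and parameter names are illustrative; check each benchmark's README for the exact signature.

```python
import sys

# Assumed layout: <repo>/bernstein_vazirani/qiskit/bv_benchmark.py
sys.path.insert(1, "bernstein_vazirani/qiskit")
import bv_benchmark  # illustrative module name

# Sweep circuit widths and collect the standard metrics at each width.
bv_benchmark.run(
    min_qubits=3,                 # smallest circuit width in the sweep
    max_qubits=8,                 # largest circuit width in the sweep
    num_shots=1000,               # measurement shots per circuit
    backend_id="qasm_simulator",  # simulator or hardware target name
)
```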
@@ -43,7 +48,7 @@ Complexity of Benchmark Algorithms (Increasing Difficulty)
1: Deutsch-Jozsa, Bernstein-Vazirani, Hidden Shift
2: Quantum Fourier Transform, Grover's Search
3: Phase Estimation, Amplitude Estimation, HHL Linear Solver
- 4: Monte Carlo, Hamiltonian Simulation, Variational Quantum Eigensolver, Shor's Order Finding
+ 4: Monte Carlo, Hamiltonian (and HamLib) Simulation, Variational Quantum Eigensolver, Shor's Order Finding Algorithm
5: MaxCut, Hydrogen-Lattice
```

@@ -59,7 +64,7 @@ In addition to the application directories at the highest level, there are several

## Setup and Configuration

- The prototype benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.
+ The benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.

In the [`Preparing to Run Benchmarks`](./_setup/) section you will find a subdirectory for each of the target environments that contains a README with everything you need to know to install and configure the specific environment in which you would like to run.
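As a rough illustration of that configuration step — and of the sys-path handling touched by commits 7322318 and 7d74cfb above — the following is a minimal sketch, assuming the shared `execute` and `metrics` modules live under a `_common` folder as the commit messages suggest. The folder layout and function names are assumptions, not a documented API.

```python
import os
import sys

repo_root = os.path.abspath(".")  # path to your clone of this repository

# Make the shared support modules importable before running any benchmark.
# Folder names are assumptions inferred from the commit messages above.
sys.path.insert(1, os.path.join(repo_root, "_common"))
sys.path.insert(1, os.path.join(repo_root, "_common", "qiskit"))

import execute  # shared execution pipeline (assumed module name)
import metrics  # shared metrics collection and plotting (assumed module name)

# Identify the execution target once, at the start of a benchmark session
# (see commit 268f719); the function name here is an assumption.
execute.set_execution_target("qasm_simulator")
```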

@@ -115,7 +120,7 @@ The second cell of the Jupyter Notebook contains commented code with references

Applications are often deployed into Container Management Frameworks such as Docker, Kubernetes, and the like.

- The Prototype Benchmarks repository includes support for the creation of a unique *'container image'* for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named [**`_containerbuildfiles`**](./_containerbuildfiles/).
+ The Application-Oriented Benchmarks repository includes support for the creation of a unique *'container image'* for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named [**`_containerbuildfiles`**](./_containerbuildfiles/).
The benchmark program image can be deployed into a container management framework and executed as any other application in that framework.

Once built, deployed, and launched, the container process invokes a Jupyter Notebook from which you can run all the available benchmarks.
@@ -129,9 +134,9 @@ Once built, deployed, and launched, the container process invokes a Jupyter Notebook

## Implementation Status

- Below is a table showing the degree to which the benchmarks have been implemented in each of the target platforms (as of the last update to this branch):
+ Below is a table showing the degree to which the benchmarks have been implemented in each of the target frameworks (as of the last update to this branch):

- ![Prototype Benchmarks - Implementation Status](./_doc/images/proto_benchmarks_status.png)
+ ![Application-Oriented Benchmarks - Implementation Status](./_doc/images/proto_benchmarks_status.png)


