Merge pull request #516 from toddytharmonkey/compiler-opt-changes-hamsim
Seperate exact calculation, fix precalculated data generation typo/bug
rtvuser1 authored Jun 26, 2024
2 parents b635497 + 659cf97 commit 4b22558
Showing 5 changed files with 10,789 additions and 10,937 deletions.
50 changes: 36 additions & 14 deletions hamiltonian-simulation/README.md
@@ -1,6 +1,12 @@
# Hamiltonian Simulation - Prototype Benchmark Program

Simulation of quantum systems is one of the most promising applications for quantum computers [[1]](#references). In the current version of this benchmark, we compare the quantum simulation against a classical simulation of the same circuit in order to report our fidelity. This works well for small circuit sizes, but is not scalable past a certain number of qubits. Additionally, we can currently only run specific circuits, rather than random ones, limiting the ability of this benchmark to completely characterize real use cases of the Hamiltonian simulation algorithm. We have some suggested modifications to this benchmark, which we outline, and explain why we chose not to use them at this time, in the section [Proposed Improvements](#Proposed-Improvements).
Simulation of quantum systems is one of the most promising applications for quantum computers [[1]](#references). In the current version of this benchmark, we have two main strategies for calculating fidelities.

In the first strategy, we compare the quantum simulation against a classical simulation in order to report our fidelity. This works well for small circuit sizes, but is not scalable past a certain number of qubits.

In the second strategy, we use the mirror circuits method developed by Sandia National Laboratories [[2]](#references). This approach scales to any number of qubits.

Currently, we can only run specific circuits, rather than random ones, which limits the ability of this benchmark to completely characterize real use cases of the Hamiltonian simulation algorithm. We have some suggested modifications to this benchmark, which we outline, and explain why we chose not to use them at this time, in the section [Proposed Improvements](#Proposed-Improvements).

## Problem outline

@@ -37,18 +43,40 @@
$$

Where <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}J"/> is the coupling strength between neighboring spins, and <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}h"/> is the strength of the transverse field.

In our benchmarks, currently both $J=1$ and $h=.2$.
In our benchmarks, we currently set $J=1$ and $h=1$.

## Benchmarking
The Hamiltonian Simulation algorithm is benchmarked by running **just a single circuit**. This circuit is executed with a number of shots denoted by `num_shots`. We then run the algorithm circuit for qubit counts between `min_qubits` and `max_qubits`, inclusive. The test returns the average circuit creation time, execution time, fidelity, and circuit depth, like all of the other algorithm benchmarks.

After running the Hamiltonian Simulation circuit, there are two options for how to produce the fidelity metric, both of which use a precalculated distribution to compare to the Hamiltonian Simulation results. Method = 1 uses a distribution from a noiseless simulation of the Trotterized quantum circuit. For this method, if the benchmark is also run on a noiseless simulator with a high number of shots, the fidelity should be high. Method = 2 uses a classical matrix technique to simulate the evolution of the Hamiltonian directly. Whereas method = 1 more directly tests the hardware, method = 2 also tests the accuracy of the Hamiltonian simulation algorithm itself.
There are currently three methods for producing the fidelity metric. All three methods evolve a state and compute a metric based on how faithfully the state evolved.

The first two methods evolve an initial state for a time $t$ and compare the final state against a precalculated distribution. Method = 1 creates the precalculated distribution from a noiseless simulation of the Hamiltonian Simulation quantum circuit. Method = 2 uses a classical matrix technique to simulate the evolution of the Hamiltonian directly. Whereas method = 1 only tests the performance of the hardware, method = 2 also tests the accuracy of the Hamiltonian simulation algorithm itself.
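The exact classical evolution of method = 2 can be sketched directly with NumPy for a small system. This is only an illustration of the idea (the repository's implementation is in `hamiltonian_simulation_exact.py`), assuming the TFIM form above with $J=h=1$ on a linear chain:

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim_hamiltonian(n, J=1.0, h=1.0):
    """Dense TFIM Hamiltonian on a linear chain: J*sum Z_i Z_{i+1} + h*sum X_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        ops = [I2] * n; ops[i] = X
        H += h * kron_chain(ops)
    for i in range(n - 1):
        ops = [I2] * n; ops[i] = Z; ops[i + 1] = Z
        H += J * kron_chain(ops)
    return H

def evolve_exact(H, psi0, t):
    """exp(-iHt)|psi0> via eigendecomposition (no Trotter error)."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

n, t = 3, 0.2
psi0 = np.zeros(2**n); psi0[0b101] = 1.0   # "checkerboard"-style basis state
probs = np.abs(evolve_exact(tfim_hamiltonian(n), psi0, t))**2
```

For short evolution times, the distribution stays concentrated near the initial basis state, which is the kind of target distribution the benchmark compares against.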

We calculate these precalculated distributions in the Jupyter notebook `precalculated_data.ipynb`, which stores the results for up to 20 qubits in the `precalculated_data.json` data file. The Python code then imports the distributions from the `json` file. This is a less-than-ideal fidelity calculation, as it does not scale to arbitrary numbers of qubits: it requires the classical simulation of matrix products, which needs resources exponential in the number of qubits.
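Assuming a schema in which the JSON maps each qubit count to a bitstring distribution (the actual layout of `precalculated_data.json` may differ), loading the precalculated data might look like:

```python
import json, os, tempfile

# Hypothetical file contents: one entry per qubit count, mapping
# bitstrings to probabilities (an assumed schema, for illustration only).
data = {"3": {"101": 0.88, "001": 0.06, "100": 0.06}}

path = os.path.join(tempfile.mkdtemp(), "precalculated_data.json")
with open(path, "w") as f:
    json.dump(data, f)

# The benchmark code would read the file back like this:
with open(path) as f:
    precalculated = json.load(f)

dist = precalculated["3"]   # target distribution for 3 qubits
```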

In `precalculated_data.ipynb`, we set the number of Trotterization steps ($k$) to 5 and the evolution time to $t=0.2$. For the Heisenberg Hamiltonian, $w$ is set to 1 and $J$ is hard-coded to 1. For TFIM, the Hamiltonian parameters are hard-coded to $J=1$ and $h=1$.
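Why a small number of Trotter steps suffices at $t=0.2$ can be seen numerically. A toy single-qubit sketch (not the benchmark's circuit, just the underlying estimate) compares first-order Trotterization $\left(e^{-iAt/k}e^{-iBt/k}\right)^k$ against the exact evolution $e^{-i(A+B)t}$:

```python
import numpy as np

def expm_h(H, t):
    """exp(-iHt) for a Hermitian matrix H via eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two non-commuting terms of a toy Hamiltonian H = A + B
A, B = X, Z
t, k = 0.2, 5

exact = expm_h(A + B, t)
step = expm_h(A, t / k) @ expm_h(B, t / k)   # one first-order Trotter step
trotter = np.linalg.matrix_power(step, k)

error = np.linalg.norm(trotter - exact, ord=2)   # spectral-norm error
```

The error scales roughly as $t^2\,\lVert[A,B]\rVert/(2k)$, so at $t=0.2$ and $k=5$ it is already below one percent for this toy case.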

Method = 3 uses a mirror circuit based on the Hamiltonian Simulation circuit, designed so that the target distribution is trivial. It evolves an initial state forward for a time $t$, then backward for the same time, so that the final state should be the (trivial) initial state. Because this does not use classical resources to generate a comparison distribution, it scales to any number of qubits.
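The mirror idea can be illustrated with plain linear algebra: applying any unitary followed by its inverse returns the initial state exactly, so on real hardware any deviation measures noise. A minimal sketch (this omits the random-Pauli layer and quasi-inverse used by the full protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim, rng):
    """Random unitary via QR decomposition (stand-in for the forward circuit)."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))   # fix column phases

n = 3
U = random_unitary(2**n, rng)            # forward evolution (any circuit)
psi0 = np.zeros(2**n); psi0[0] = 1.0     # trivial initial state |0...0>

psi_mirror = U.conj().T @ (U @ psi0)     # forward, then inverse ("mirror")
success_prob = np.abs(psi_mirror[0])**2  # ideally returns to |0...0>
```

In a noiseless setting `success_prob` is exactly 1 regardless of circuit width, which is why this method needs no classically computed target.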

In either case, we compare the resultant distribution using our [noise-normalized fidelity calculation](../_doc/POLARIZATION_FIDELITY.md).

We calculate these expected distributions in the Jupyter notebook `precalculated_data.ipynb`, which stores the results for up to 20 qubits in the `precalculated_data.json` data file. The Python code then imports the distributions from the `json` file. This is a less-than-ideal fidelity calculation, as it does not scale to arbitrary numbers of qubits: it requires the classical simulation of matrix products, which needs resources exponential in the number of qubits.
In all cases, we compare the resultant distribution using our [noise-normalized fidelity calculation](../_doc/POLARIZATION_FIDELITY.md).
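One plausible form of such a noise-normalized calculation rescales the classical (Hellinger) fidelity so that a uniformly random device scores about 0 and a perfect device scores 1; see the linked document for the exact definition used by the suite:

```python
import math

def hellinger_fidelity(p, q):
    """Classical Hellinger fidelity between two distributions over bitstrings."""
    keys = set(p) | set(q)
    return sum(math.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys) ** 2

def normalized_fidelity(counts, ideal, n_qubits):
    """Rescale so a uniformly random device scores ~0 and a perfect one scores 1.

    Assumes `ideal` is not itself the uniform distribution (else the
    denominator vanishes).
    """
    shots = sum(counts.values())
    measured = {k: v / shots for k, v in counts.items()}
    f = hellinger_fidelity(measured, ideal)
    # Fidelity that a uniform distribution over all 2^n bitstrings would achieve
    f_uniform = sum(math.sqrt(v * 2.0**-n_qubits) for v in ideal.values()) ** 2
    return max(0.0, (f - f_uniform) / (1.0 - f_uniform))
```

For example, counts that exactly match the ideal distribution score 1, while uniformly random counts score 0.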

In `precalculated_data.ipynb`, we set the number of Trotterization steps ($k$) to 5 and the time to $t=0.2$. For the Heisenberg Hamiltonian, $w$ is set to 1 and $J$ is hard-coded to 1. For TFIM, the Hamiltonian parameters are hard-coded to $J=1$ and $h=.2$.
In the `run()` method for the benchmark, a number of optional arguments can be specified. Some of the key arguments are as follows:

```
Parameters
----
min_qubits (int): Minimum number of qubits (smallest circuit is 2 qubits).
max_qubits (int): Maximum number of qubits.
max_circuits (int): Maximum number of circuits to execute per group.
num_shots (int): Number of shots for each circuit execution.
hamiltonian (str): Which Hamiltonian to run. "heisenberg" by default, but "TFIM" may also be chosen.
method (int): Method for fidelity checking (1 for noiseless Trotterized quantum simulation, 2 for exact classical simulation, 3 for mirror circuit).
use_XX_YY_ZZ_gates (bool): Flag to use unoptimized XX, YY, ZZ gates.
random_pauli_flag (bool): Flag to activate a more sophisticated mirror-circuit construction that inserts a layer of random Paulis between the forward and inverse circuits and uses a quasi-inverse rather than an exact inverse.
init_state (str): The desired initial state. Choices are "checkerboard" or "ghz".
```
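A hypothetical invocation sketch follows. The keyword names mirror the parameter list above, but the module name is an assumption, not the repository's actual API:

```python
# Hypothetical argument set for the benchmark's run() entry point.
# The keyword names mirror the documented parameter list above.
benchmark_args = dict(
    min_qubits=2,
    max_qubits=8,
    max_circuits=1,
    num_shots=1000,
    hamiltonian="TFIM",        # default is "heisenberg"
    method=3,                  # mirror-circuit fidelity (scalable)
    use_XX_YY_ZZ_gates=False,
    random_pauli_flag=True,
    init_state="checkerboard",
)

# import hamiltonian_simulation_benchmark as benchmark   # assumed module name
# benchmark.run(**benchmark_args)
```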

## Classical algorithm

@@ -125,19 +153,13 @@
There are two options of circuit creation for this simulation:
</p>

*Fig 6. Naive <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}e^{it(YY)}"/> gate.*

## Proposed Improvements

Our suggested method of modifying this benchmark involves a topic from condensed matter physics called *many-body localization* [[3]](#references) [[5]](#references). The essence of this phenomenon is that for certain closed quantum systems, some excited states fail to thermalize as expected.

The parameters that we use for the current Hamiltonian simulation benchmark already put us in the many-body localization regime. After evolving the system for a suitably long time, with <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}t\approx{1}">, we can then examine a parameter called *imbalance* to characterize this property. The imbalance compares the percentage of measured 1's on the even versus odd sites. We see that for our Heisenberg Hamiltonian, when averaging over many random choices for <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}h_{x,i}"> and <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}h_{z,i}">, the imbalance converges to a constant value with increasing circuit width. We could then set our fidelity calculation as the deviation from the expected imbalance value. This benchmark is outlined in the `mbl_benchmark.py` file in the Qiskit `WIP_benchmarks` folder.
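A sketch of how such an imbalance metric could be computed from measurement counts (the exact convention in `mbl_benchmark.py` may differ; the bit-ordering convention below is an assumption):

```python
def imbalance(counts):
    """Imbalance between '1' populations on even and odd sites.

    `counts` maps measured bitstrings to shot counts; the leftmost character
    is taken as site 0 (this ordering is an assumed convention).
    Returns +1 if all 1's sit on even sites, -1 if all sit on odd sites.
    """
    total = sum(counts.values())
    n_even = n_odd = 0.0
    for bits, c in counts.items():
        n_even += c * sum(int(b) for b in bits[0::2])   # 1's on even sites
        n_odd += c * sum(int(b) for b in bits[1::2])    # 1's on odd sites
    n_even /= total
    n_odd /= total
    return (n_even - n_odd) / (n_even + n_odd)
```

A checkerboard state that remains localized keeps the imbalance near its initial value, while a thermalized state drives it toward zero.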

While we have already coded this benchmark, we are not using it because some of the parameters required are not feasible on current computers. Because of our comparatively large value of <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}t">, we need a large number of Trotter steps <img align=center src="https://latex.codecogs.com/svg.latex?\pagecolor{white}k"> so that our simulation is close to the actual evolution of the Hamiltonian. This results in circuits that are thousands of gates long. Then, because the imbalance converging is a feature that occurs *on average*, we need to average over more than 100 different circuits for the fidelity calculation to capture just the hardware error, not the algorithmic error. All of these factors make the benchmark difficult to run on simulators, and exceedingly expensive and/or time-intensive to run on hardware. In the future, when hardware is more accessible, these downsides might not be as significant, and many-body localization could be utilized as an effective benchmark.

## References

[1] Feynman, RP. (1982) Simulating physics with computers. Int J Theor Phys 21:467–488.

[2] Proctor, T., Rudinger, K., Young, K. et al. Measuring the capabilities of quantum computers. Nat. Phys. 18, 75–79 (2022). https://doi.org/10.1038/s41567-021-01409-7

[3] Andrew M. Childs, Dmitri Maslov, Yunseong Nam, Neil J. Ross, Yuan Su. (2017).
Toward the first quantum simulation with quantum speedup.
[`arXiv:1711.10980`](https://arxiv.org/pdf/1711.10980.pdf)
146 changes: 146 additions & 0 deletions hamiltonian-simulation/_common/hamiltonian_simulation_exact.py
@@ -0,0 +1,146 @@
import sys

sys.path[1:1] = ["_common", "_common/qiskit"]
sys.path[1:1] = ["../../_common", "../../_common/qiskit"]
sys.path[1:1] = ["../qiskit"]
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_algorithms import TimeEvolutionProblem, SciPyRealEvolver
import numpy as np
np.random.seed(0)
import execute as ex
import metrics as metrics

from hamiltonian_simulation_kernel import initial_state

def construct_TFIM_hamiltonian(n_spins: int) -> SparsePauliOp:
    """
    Construct the Transverse Field Ising Model (TFIM) Hamiltonian.

    Args:
        n_spins (int): Number of spins (qubits).

    Returns:
        SparsePauliOp: The Hamiltonian represented as a sparse Pauli operator.
    """
    pauli_strings = []
    coefficients = []
    g = 1  # Strength of the transverse field

    # Single-qubit transverse-field (X) terms
    for i in range(n_spins):
        x_term = 'I' * i + 'X' + 'I' * (n_spins - i - 1)
        pauli_strings.append(x_term)
        coefficients.append(g)

    identity_string = ['I'] * n_spins

    # ZZ operation on each pair of qubits in a linear chain
    for j in range(2):
        for i in range(j % 2, n_spins - 1, 2):
            zz_term = identity_string.copy()
            zz_term[i] = 'Z'
            zz_term[(i + 1) % n_spins] = 'Z'
            zz_term = ''.join(zz_term)
            pauli_strings.append(zz_term)
            coefficients.append(1.0)

    return SparsePauliOp.from_list(zip(pauli_strings, coefficients))

def construct_heisenberg_hamiltonian(n_spins: int, w: float, hx: list[float], hz: list[float]) -> SparsePauliOp:
    """
    Construct the Heisenberg Hamiltonian with disorder.

    Args:
        n_spins (int): Number of spins (qubits).
        w (float): Strength of two-qubit interactions for the Heisenberg Hamiltonian.
        hx (list[float]): Strengths of the X-disorder terms for the Heisenberg Hamiltonian.
        hz (list[float]): Strengths of the Z-disorder terms for the Heisenberg Hamiltonian.

    Returns:
        SparsePauliOp: The Hamiltonian represented as a sparse Pauli operator.
    """
    pauli_strings = []
    coefficients = []

    # Disorder terms
    for i in range(n_spins):
        x_term = 'I' * i + 'X' + 'I' * (n_spins - i - 1)
        z_term = 'I' * i + 'Z' + 'I' * (n_spins - i - 1)
        pauli_strings.append(x_term)
        coefficients.append(w * hx[i])
        pauli_strings.append(z_term)
        coefficients.append(w * hz[i])

    identity_string = ['I'] * n_spins

    # Interaction terms (XX + YY + ZZ on neighboring pairs, even pairs then odd pairs)
    for j in range(2):
        for i in range(j % 2, n_spins - 1, 2):
            xx_term = identity_string.copy()
            yy_term = identity_string.copy()
            zz_term = identity_string.copy()

            xx_term[i] = 'X'
            xx_term[(i + 1) % n_spins] = 'X'

            yy_term[i] = 'Y'
            yy_term[(i + 1) % n_spins] = 'Y'

            zz_term[i] = 'Z'
            zz_term[(i + 1) % n_spins] = 'Z'

            pauli_strings.append(''.join(xx_term))
            coefficients.append(1.0)
            pauli_strings.append(''.join(yy_term))
            coefficients.append(1.0)
            pauli_strings.append(''.join(zz_term))
            coefficients.append(1.0)

    return SparsePauliOp.from_list(zip(pauli_strings, coefficients))

def construct_hamiltonian(n_spins: int, hamiltonian: str, w: float, hx: list[float], hz: list[float]) -> SparsePauliOp:
    """
    Construct the Hamiltonian based on the specified method.

    Args:
        n_spins (int): Number of spins (qubits).
        hamiltonian (str): Which Hamiltonian to use: "heisenberg" or "tfim" (case-insensitive).
        w (float): Strength of two-qubit interactions for the Heisenberg Hamiltonian.
        hx (list[float]): Strengths of the X-disorder terms for the Heisenberg Hamiltonian.
        hz (list[float]): Strengths of the Z-disorder terms for the Heisenberg Hamiltonian.

    Returns:
        SparsePauliOp: The constructed Hamiltonian.
    """
    hamiltonian = hamiltonian.strip().lower()

    if hamiltonian == "heisenberg":
        return construct_heisenberg_hamiltonian(n_spins, w, hx, hz)
    elif hamiltonian == "tfim":
        return construct_TFIM_hamiltonian(n_spins)
    else:
        raise ValueError("Invalid Hamiltonian specification.")

def HamiltonianSimulationExact(n_spins: int, t: float, init_state: str, hamiltonian: str, w: float, hx: list[float], hz: list[float]) -> dict:
    """
    Perform exact Hamiltonian simulation using classical matrix evolution.

    Args:
        n_spins (int): Number of spins (qubits).
        t (float): Duration of simulation.
        init_state (str): The chosen initial state. By default applies the checkerboard state, but can also be set to "ghz", the GHZ state.
        hamiltonian (str): Which Hamiltonian to use: "heisenberg" or "tfim".
        w (float): Strength of two-qubit interactions for the Heisenberg Hamiltonian.
        hx (list[float]): Strengths of the X-disorder terms for the Heisenberg Hamiltonian.
        hz (list[float]): Strengths of the Z-disorder terms for the Heisenberg Hamiltonian.

    Returns:
        dict: The probability distribution of the evolved state.
    """
    hamiltonian_sparse = construct_hamiltonian(n_spins, hamiltonian, w, hx, hz)
    time_problem = TimeEvolutionProblem(hamiltonian_sparse, t, initial_state=initial_state(n_spins, init_state))
    result = SciPyRealEvolver(num_timesteps=1).evolve(time_problem)
    return result.evolved_state.probabilities_dict()
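For small systems, the Pauli-string terms assembled above can be cross-checked against a dense matrix built with Kronecker products. A Qiskit-independent NumPy sketch (note the tensor-factor ordering here is a chosen convention for illustration; Qiskit's `SparsePauliOp`, for example, places the highest-index qubit leftmost):

```python
import numpy as np

PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string_to_matrix(label):
    """Dense matrix for a Pauli label, taking the leftmost character as the
    first tensor factor (an assumed convention; libraries differ)."""
    out = np.array([[1.0 + 0j]])
    for ch in label:
        out = np.kron(out, PAULI[ch])
    return out

def pauli_sum_to_matrix(terms):
    """Dense Hamiltonian from (label, coefficient) pairs."""
    dim = 2 ** len(terms[0][0])
    H = np.zeros((dim, dim), dtype=complex)
    for label, coeff in terms:
        H += coeff * pauli_string_to_matrix(label)
    return H

# Tiny TFIM on 2 spins, matching the term layout used above (g = J = 1)
H = pauli_sum_to_matrix([("XI", 1.0), ("IX", 1.0), ("ZZ", 1.0)])
```

Diagonalizing such a dense matrix gives the same exact evolution as the SciPy-based evolver, which makes it a convenient sanity check for small qubit counts.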
