Implement benchmarks for broadcasting QO types #411

Merged (15 commits) on Sep 18, 2024
64 changes: 64 additions & 0 deletions .github/workflows/benchmark-comment.yml
@@ -0,0 +1,64 @@
# To work around https://github.com/actions/first-interaction/issues/10 in a secure way,
# we take the following steps to generate and post a performance benchmark result as a PR comment:
# 1. the "performance tracking" workflow generates the benchmark results in an unprivileged environment, triggered on the `pull_request` event
# 2. this "performance tracking (comment)" workflow then posts the result as a PR comment in a privileged environment
# Note that this workflow can only be modified by changes checked in to the default branch
# and is therefore secure even though it is granted write permissions, etc.
# xref: https://securitylab.github.com/research/github-actions-preventing-pwn-requests/

name: Performance tracking (comment)

on:
  workflow_run:
    workflows:
      - performance tracking
    types:
      - completed

jobs:
  comment:
    runs-on: ubuntu-latest
    #runs-on: self-hosted
    if: >
      ${{ github.event.workflow_run.event == 'pull_request' &&
      github.event.workflow_run.conclusion == 'success' }}
    steps:
      - uses: actions/checkout@v4

      # restore records from the artifacts
      - uses: dawidd6/action-download-artifact@v6
        with:
          workflow: benchmark.yml
          name: performance-tracking
          workflow_conclusion: success
      - name: output benchmark result
        id: output-result-markdown
        run: |
          echo ::set-output name=body::$(cat ./benchmark-result.artifact)
      - name: output pull request number
        id: output-pull-request-number
        run: |
          echo ::set-output name=body::$(cat ./pull-request-number.artifact)
      # check if the previous comment exists
      - name: find comment
        uses: peter-evans/find-comment@v3
        id: fc
        with:
          issue-number: ${{ steps.output-pull-request-number.outputs.body }}
          comment-author: 'github-actions[bot]'
          body-includes: Benchmark Result

      # create/update comment
      - name: create comment
        if: ${{ steps.fc.outputs.comment-id == 0 }}
        uses: peter-evans/create-or-update-comment@v4
        with:
          issue-number: ${{ steps.output-pull-request-number.outputs.body }}
          body: ${{ steps.output-result-markdown.outputs.body }}
      - name: update comment
        if: ${{ steps.fc.outputs.comment-id != 0 }}
        uses: peter-evans/create-or-update-comment@v4
        with:
          comment-id: ${{ steps.fc.outputs.comment-id }}
          body: ${{ steps.output-result-markdown.outputs.body }}
57 changes: 57 additions & 0 deletions .github/workflows/benchmark.yml
@@ -0,0 +1,57 @@
name: Performance tracking
on:
  pull_request:

env:
  PYTHON: ~

jobs:
  performance-tracking:
    runs-on: ubuntu-latest
    #runs-on: self-hosted
    steps:
      # setup
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@latest
        with:
          version: '1.10'
      - uses: julia-actions/julia-buildpkg@latest
      - name: install dependencies
        run: julia -e 'using Pkg; pkg"add PkgBenchmark BenchmarkCI@0.1"'

      # run the benchmark suite
      - name: run benchmarks
        run: |
          julia -e '
            using BenchmarkCI
            BenchmarkCI.judge()
            BenchmarkCI.displayjudgement()
          '
      # generate and record the benchmark result as markdown
      - name: generate benchmark result
        run: |
          body=$(julia -e '
            using BenchmarkCI
            let
                judgement = BenchmarkCI._loadjudge(BenchmarkCI.DEFAULT_WORKSPACE)
                title = "Benchmark Result"
                ciresult = BenchmarkCI.CIResult(; judgement, title)
                BenchmarkCI.printcommentmd(stdout::IO, ciresult)
            end
          ')
          body="${body//'%'/'%25'}"
          body="${body//$'\n'/'%0A'}"
          body="${body//$'\r'/'%0D'}"
          echo $body > ./benchmark-result.artifact
      # record the pull request number
      - name: record pull request number
        run: echo ${{ github.event.pull_request.number }} > ./pull-request-number.artifact

      # save as artifacts (performance tracking (comment) workflow will use it)
      - uses: actions/upload-artifact@v4
        with:
          name: performance-tracking
          path: ./*.artifact
3 changes: 2 additions & 1 deletion Project.toml
@@ -6,6 +6,7 @@ version = "1.1.1"
Arpack = "7d9fca2a-8960-54d3-9f78-7d1dccf2cb97"
DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
DiffEqCallbacks = "459566f4-90b8-5000-8ac3-15dfb0a30def"
DiffEqNoiseProcess = "77a26b50-5914-5dd7-bc55-306e6241c503"
Collaborator:

this requires a compat bound (I am surprised Aqua did not catch this... oh, we do not have Aqua in QuantumOptics.jl...)

Contributor Author:

😅 I'll submit an issue...
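For context, a minimal sketch (not part of this PR, and assuming Aqua.jl were added as a test dependency) of the kind of check being referred to; Aqua.test_deps_compat fails when a [deps] entry such as DiffEqNoiseProcess has no [compat] bound:

    # Hedged sketch: a test that would flag dependencies without [compat] bounds.
    using Aqua
    using QuantumOptics
    using Test

    @testset "Aqua: compat bounds on all dependencies" begin
        Aqua.test_deps_compat(QuantumOptics)
    end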

FFTW = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
IterativeSolvers = "42fd0dbc-a981-5370-80f2-aaf504508153"
@@ -34,7 +35,7 @@ OrdinaryDiffEq = "5, 6"
QuantumOpticsBase = "0.3, 0.4, 0.5"
RecursiveArrayTools = "2, 3"
Reexport = "0.2, 1.0"
StochasticDiffEq = "6"
StochasticDiffEq = "6, 6.68.0"
Collaborator:
this should probably just be a single entry, "6.68", if you want to refuse 6.67
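For illustration, a hedged sketch of applying the suggested single-entry bound from the Julia REPL (Pkg.compat is available on Julia 1.8 and newer; the exact bound is up to the authors):

    using Pkg
    Pkg.activate(".")                       # the QuantumOptics.jl project
    Pkg.compat("StochasticDiffEq", "6.68")  # writes StochasticDiffEq = "6.68" to [compat]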

WignerSymbols = "1, 2"
julia = "1.10"

7 changes: 7 additions & 0 deletions benchmark/Project.toml
@@ -0,0 +1,7 @@
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
PkgBenchmark = "32113eaa-f34f-5b0d-bd6c-c81e245fc73d"
QuantumOptics = "6e0679c1-51ea-5a7c-ac74-d61b76210b0c"
StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
113 changes: 113 additions & 0 deletions benchmark/benchmarks.jl
@@ -0,0 +1,113 @@
using BenchmarkTools
using QuantumOptics
using OrdinaryDiffEq
using StochasticDiffEq
using LinearAlgebra
using PkgBenchmark


const SUITE = BenchmarkGroup()

prob_list = ("schroedinger", "master", "stochastic_schroedinger", "stochastic_master")
for prob in prob_list
    SUITE[prob] = BenchmarkGroup([prob])
    for type in ("pure", "custom")
Collaborator:
May we use "QO types" and "Base array types" instead of pure and custom? These confused me for a second

        SUITE[prob][type] = BenchmarkGroup()
    end
end

function bench_schroedinger(dim; pure=true)
    b = SpinBasis(dim)
    t₀, t₁ = (0.0, pi)
    H = sigmax(b)
    psi0 = spindown(b)
    if pure
        obj = psi0.data
        Hobj = H.data
    else
        obj = psi0
        Hobj = H
    end
    schroed!(dpsi, psi, p, t) = timeevolution.dschroedinger!(dpsi, Hobj, psi)
    prob = ODEProblem(schroed!, obj, (t₀, t₁))
end
function bench_master(dim; pure=true)
    b = SpinBasis(dim)
    t₀, t₁ = (0.0, pi)
    H = sigmax(b)
    psi0 = spindown(b)
    J = sigmam(b)
    rho0 = dm(psi0)
    rates = [0.3]
    if pure
        obj = rho0.data
        Jobj, Jdag = (J.data, dagger(J).data)
        Hobj = H.data
    else
        obj = rho0
        Jobj, Jdag = (J, dagger(J))
        Hobj = H
    end
    master!(drho, rho, p, t) = timeevolution.dmaster_h!(drho, Hobj, [Jobj], [Jdag], rates, rho, copy(obj))
    prob = ODEProblem(master!, obj, (t₀, t₁))
end
function bench_stochastic_schroedinger(dim; pure=true)
    b = SpinBasis(dim)
    t₀, t₁ = (0.0, pi)
    H = sigmax(b)
    Hs = sigmay(b)
    psi0 = spindown(b)
    if pure
        obj = psi0.data
        Hobj = H.data
        Hsobj = Hs.data
    else
        obj = psi0
        Hobj = H
        Hsobj = Hs
    end
    schroed!(dpsi, psi, p, t) = timeevolution.dschroedinger!(dpsi, Hobj, psi)
    stoch_schroed!(dpsi, psi, p, t) = timeevolution.dschroedinger!(dpsi, Hsobj, psi)
    prob = SDEProblem(schroed!, stoch_schroed!, obj, (t₀, t₁))
end
function bench_stochastic_master(dim; pure=true)
    b = SpinBasis(dim)
    t₀, t₁ = (0.0, pi)
    H = sigmax(b)
    Hs = sigmay(b)
    psi0 = spindown(b)
    J = sigmam(b)
    rho0 = dm(psi0)
    rates = [0.3]
    if pure
        obj = rho0.data
        Jobj, Jdag = (J.data, dagger(J).data)
        Hobj = H.data
        Hsobj = Hs.data
    else
        obj = rho0
        Jobj, Jdag = (J, dagger(J))
        Hobj = H
        Hsobj = Hs
    end
    master!(drho, rho, p, t) = timeevolution.dmaster_h!(drho, Hobj, [Jobj], [Jdag], rates, rho, copy(obj))
    stoch_master!(drho, rho, p, t) = timeevolution.dmaster_h!(drho, Hsobj, [Jobj], [Jdag], rates, rho, copy(obj))
    prob = SDEProblem(master!, stoch_master!, obj, (t₀, t₁))
end

for dim in (1//2, 20//1, 50//1, 100//1)
    for prob in zip(("schroedinger", "master"), (:(bench_schroedinger), :(bench_master)))
        name, bench = (prob[1], prob[2])
        # benchmark solving ODE problems on data of QO types
        SUITE[name]["pure"][string(dim)] = @benchmarkable solve(eval($bench)($dim; pure=true), DP5(); save_everystep=false)
Collaborator:
I am a bit worried that eval is part of the benchmark. Is there any change if instead you do solve(prob) setup=(prob=eval(...))?
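To make the suggestion concrete, a hedged sketch (not the code in this PR) of moving problem construction into BenchmarkTools' setup phase, so that eval and problem setup are excluded from the timed region:

    # Hedged sketch: only `solve` is timed; the problem is built in `setup`,
    # which BenchmarkTools runs outside the measured region for each sample.
    SUITE[name]["pure"][string(dim)] =
        @benchmarkable solve(prob, DP5(); save_everystep=false) setup=(prob = eval($bench)($dim; pure=true))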

        # benchmark solving ODE problems on custom QO types
        SUITE[name]["custom"][string(dim)] = @benchmarkable solve(eval($bench)($dim; pure=false), DP5(); save_everystep=false)
    end
    for prob in zip(("stochastic_schroedinger", "stochastic_master"), (:(bench_stochastic_schroedinger), :(bench_stochastic_master)))
        name, bench = (prob[1], prob[2])
        # benchmark solving SDE problems on data of QO types
        SUITE[name]["pure"][string(dim)] = @benchmarkable solve(eval($bench)($dim; pure=true), EM(), dt=1/100; save_everystep=false)
        # benchmark solving SDE problems on custom QO types
        SUITE[name]["custom"][string(dim)] = @benchmarkable solve(eval($bench)($dim; pure=false), EM(), dt=1/100; save_everystep=false)
    end
end
5 changes: 4 additions & 1 deletion src/stochastic_base.jl
@@ -1,8 +1,9 @@
using QuantumOpticsBase
using QuantumOpticsBase: check_samebases, check_multiplicable
using Random: AbstractRNG, randn!
import ..timeevolution: recast!, QO_CHECKS, pure_inference, as_vector

import DiffEqCallbacks, StochasticDiffEq, OrdinaryDiffEq
import DiffEqCallbacks, StochasticDiffEq, OrdinaryDiffEq, DiffEqNoiseProcess

"""
integrate_stoch(tspan, df::Function, dg{Function}, x0{ComplexF64},
@@ -104,3 +105,5 @@
end
nothing
end

DiffEqNoiseProcess.wiener_randn!(rng::AbstractRNG,rand_vec::T) where {T<:Union{Bra,Ket,Operator}} = randn!(rng,rand_vec.data)

Check warning (Codecov / codecov/patch) on line 109 in src/stochastic_base.jl: Added line #L109 was not covered by tests
Collaborator:
Is this a bugfix for something?

Contributor Author:
Yes, I had to define this method to enable SDEs to be solved on QO types. I didn't see an easy PR fix to submit to SciML, so I defined it here. If this seems too sketchy to you, let me know and I can do some more digging to work around this problem.
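For illustration, a hedged sketch (not part of the diff) of what the overload forwards to: DiffEqNoiseProcess fills its noise container in place via wiener_randn!, and for a Ket/Bra/Operator state the new method routes that call to randn! on the wrapped data array:

    # Hedged sketch of the effect of the new method for a Ket state.
    using QuantumOptics
    using Random

    b   = SpinBasis(1//2)
    psi = spindown(b)                        # a Ket; randn!(rng, psi) itself has no method
    randn!(Random.default_rng(), psi.data)   # the overload forwards to the complex data vector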
