
feat: The rigetti.qvm and rigetti.qpu devices now support parallel processing. #148

Draft · MarquessV wants to merge 5 commits into master

Conversation

@MarquessV (Contributor) commented on Nov 18, 2023

This adds support for processing batches in parallel, which can greatly improve performance.

[sc-54134]
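For context, a minimal usage sketch based on the `parallel` and `max_threads` options described in the changelog entry reviewed below (the option names come from this PR and may change before release; the circuit itself is purely illustrative):

```python
import pennylane as qml

# Sketch only: initialize the QVM device with the parallel-execution
# options proposed in this PR.
dev = qml.device(
    "rigetti.qvm",
    device="2q-qvm",
    shots=1000,
    parallel=True,   # execute batched jobs in parallel via a ThreadPool
    max_threads=4,   # cap the number of worker threads
)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))
```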


codecov bot commented Dec 5, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

❗ No coverage uploaded for pull request base (master@ad74b16).

Additional details and impacted files
@@            Coverage Diff            @@
##             master     #148   +/-   ##
=========================================
  Coverage          ?   91.70%           
=========================================
  Files             ?       10           
  Lines             ?      663           
  Branches          ?        0           
=========================================
  Hits              ?      608           
  Misses            ?       55           
  Partials          ?        0           


Comment on lines +9 to +13
* The `rigetti.qvm` and `rigetti.qpu` device can now be initialized
with a `parallel` and `max_threads` parameter. When `parallel` is
set to True, jobs will be executed in parallel using a `ThreadPool`.
This can be used in conjunction with `max_threads` to set the
maximum number of worker threads to use.
Contributor


Suggested change
* The `rigetti.qvm` and `rigetti.qpu` device can now be initialized
with a `parallel` and `max_threads` parameter. When `parallel` is
set to True, jobs will be executed in parallel using a `ThreadPool`.
This can be used in conjunction with `max_threads` to set the
maximum number of worker threads to use.
* The `rigetti.qvm` and `rigetti.qpu` device can now be initialized
with a `parallel` and `max_threads` parameter. When `parallel` is
set to True, jobs will be executed in parallel using a `ThreadPool`.
This can be used in conjunction with `max_threads` to set the
maximum number of worker threads to use.
[(#148)](https://github.com/PennyLaneAI/pennylane-rigetti/pull/148)
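For readers skimming the changelog, a rough sketch of what ThreadPool-based batch execution can look like (illustrative only; the attribute and method names below are assumptions, not the plugin's actual implementation):

```python
from multiprocessing.pool import ThreadPool

def batch_execute(self, circuits):
    # Serial fallback when parallel execution is disabled.
    if not self.parallel:
        return [self.execute(circuit) for circuit in circuits]

    # ThreadPool(processes=None) picks a default based on the CPU count,
    # so leaving max_threads unset behaves like an "auto" setting.
    with ThreadPool(processes=self.max_threads) as pool:
        return pool.map(self.execute, circuits)
```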

self._parameter_reference_map[parameter_string] = current_ref

# Store the values bound to the symbolic parameter
self._batched_parameter_map[parameter_string] = operation.data[0]
Contributor


What if it's a Rot gate? Those have more than one parameter. Maybe only allow operations with a single parameter here?
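For example, one possible guard (illustrative only, not part of this diff) would reject multi-parameter operations before storing them in the batched-parameter map:

```python
# Hypothetical guard: a single symbolic parameter can only stand in for a
# single-parameter gate (RX, RY, RZ, ...); a gate like Rot carries three
# parameters and would need one reference per parameter.
if len(operation.data) != 1:
    raise ValueError(
        f"Parallel batched execution currently supports single-parameter "
        f"gates only; {operation.name} has {len(operation.data)} parameters."
    )
self._batched_parameter_map[parameter_string] = operation.data[0]
```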

    par.append(self._parameter_reference_map[parameter_string])
else:
    for param in operation.data:
        if getattr(param, "requires_grad", False) and operation.name != "BasisState":
Contributor


Unless this is a "backprop" device, all parameters sent to the device will be plain NumPy arrays, not autograd tensors. So I'm trying to figure out whether this block would ever evaluate to True in a full PennyLane workflow, or only when someone uses the device in isolation.
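For reference, the distinction being questioned can be checked directly (standard PennyLane/NumPy behaviour, independent of this PR): plain NumPy arrays have no `requires_grad` attribute, so the `getattr(..., False)` check is False for them, while `pennylane.numpy` tensors carry the flag.

```python
import numpy as np
from pennylane import numpy as pnp

plain = np.array(0.5)                            # what a non-backprop device typically receives
trainable = pnp.array(0.5, requires_grad=True)   # what a user might pass when calling the device directly

print(getattr(plain, "requires_grad", False))      # False
print(getattr(trainable, "requires_grad", False))  # True
```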

@trbromley (Contributor)

Hi @MarquessV! Let us know if there is anything we can help with on this.
