
added check for conv implementation #1155

Open · wants to merge 10 commits into main
Conversation

jicampos
Contributor

@jicampos jicampos commented Dec 17, 2024

Description

This change adds a check in the Vivado backend to skip instruction calculations for CONV1D and CONV2D when using linebuffer implementations. Since linebuffers do not require calculating instructions, min_height, or min_width, performing these calculations is redundant.
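A hedged sketch of the guard this PR describes (class and attribute names below are simplified stand-ins for hls4ml's real optimizer API, not the actual source):

```python
# Simplified stand-ins for hls4ml's node/config objects (hypothetical names).
class FakeConfig:
    def __init__(self, iotype):
        self._iotype = iotype

    def get_config_value(self, key):
        return self._iotype if key == 'IOType' else None


class FakeNode:
    def __init__(self, iotype, conv_implementation):
        self.model = type('M', (), {'config': FakeConfig(iotype)})()
        self._attrs = {'conv_implementation': conv_implementation}

    def get_attr(self, key):
        return self._attrs.get(key)


def needs_instructions(node):
    """Only the 'Encoded' io_stream implementation needs generated
    instructions; LineBuffer skips the instruction tables entirely."""
    return (node.model.config.get_config_value('IOType') == 'io_stream'
            and node.get_attr('conv_implementation') == 'Encoded')


print(needs_instructions(FakeNode('io_stream', 'Encoded')))     # True
print(needs_instructions(FakeNode('io_stream', 'LineBuffer')))  # False
```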


  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change which adds functionality)

Tests

Should not break current tests

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@jmitrevs
Contributor

I will just add that the reason for this change was because in some cases the instructions became huge and made compilation more difficult, even though they were then unused.

@jmitrevs jmitrevs added the please test Trigger testing by creating local PR branch label Dec 17, 2024
@jmitrevs
Contributor

It does seem to cause some test failures that should be investigated.

@@ -18,7 +18,7 @@ def transform(self, model, node):
raise Exception(f'Cannot generate instructions for node {node.name} ({node_class})')

def _generate_1d_instructions(self, node):
-        if node.model.config.get_config_value('IOType') == 'io_stream':
+        if node.model.config.get_config_value('IOType') == 'io_stream' and node.get_attr('conv_implementation') == 'Encoded':
Contributor


One should maybe make this check case-insensitive.
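The case-insensitive form suggested here could look like the following sketch (a generic helper, not the actual hls4ml code):

```python
def is_encoded(conv_implementation):
    # Case-insensitive comparison so 'Encoded', 'encoded', and 'ENCODED'
    # are all treated the same; the `or ''` guards a missing attribute.
    return (conv_implementation or '').lower() == 'encoded'


print(is_encoded('Encoded'))     # True
print(is_encoded('ENCODED'))     # True
print(is_encoded('LineBuffer'))  # False
```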

@bo3z
Contributor

bo3z commented Dec 17, 2024 via email

@jmitrevs
Contributor

jmitrevs commented Dec 17, 2024

I think "Instructions" implies encoded to me, but maybe not to everyone. But I do like the idea of moving this to the match. There may need to be some changes in the parameter creation (in convolution_templates.py for Vivado and Catapult) to make sure unused variables aren't implicitly required, but that should be fairly easy to fix. Only Vivado and Catapult implement encoded, I believe. Vitis, Quartus, and oneAPI do not, if I remember correctly.

@jicampos
Contributor Author

Would it make sense to add this check in the match() method rather than the transform() method? That way the optimizer will never even be executed. Also, since this only applies to encoded implementation, should we rename this optimizer to reflect that?

We would still need to run the optimizer to add dummy weights in the case of LineBuffer implementations, but we can skip generating instructions with the added condition in the if statement.

@jmitrevs
Contributor

I think the suggestion from @bo3z is the better approach. Can we change it so that the check is on the match? There may be some downstream minor fixes needed with that approach, though, mainly allowing for undefined instructions and min widths/heights.

@jicampos
Contributor Author

> I think the suggestion from @bo3z is the better approach. Can we change it so that the check is on the match? There may be some downstream minor fixes needed with that approach, though, mainly allowing for undefined instructions and min widths/heights.

I've updated the optimizer match() to check the IOType and implementation. The LineBuffer default values are now added to Conv2DConfigTemplate and Conv1DConfigTemplate.

@jicampos jicampos requested a review from jmitrevs January 15, 2025 05:41
@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Jan 17, 2025
@jmitrevs
Contributor

We should wait to make sure all the pytests pass first before merging. (Maybe got ahead of myself approving.)

@jicampos
Contributor Author

> We should wait to make sure all the pytests pass first before merging. (Maybe got ahead of myself approving.)

It seems the tests are failing because of a connection issue in the qkeras tests, which is unrelated to this PR.

Before rerunning any tests, should we update the catapult backend with the same changes?

@jmitrevs
Contributor

I agree that the test failure is unrelated. If you can add the same to Catapult, that would be great. (No other backend has encoded.)

@jicampos
Contributor Author

> I agree that the test failure is unrelated. If you can add the same to Catapult, that would be great. (No other backend has encoded.)

That sounds good; the Catapult backend has been updated.

@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Jan 17, 2025
@@ -94,6 +94,9 @@ def format(self, node):
         else:
             params['fill_fn'] = 'FillConv1DBuffer'

+        params.setdefault('min_width', node.get_attr('in_width'))
Contributor


I think these can be replaced with just params['min_width'] = node.get_attr('in_width') and similar. Note that params is created on line 82, so params['min_width'] is not yet set. The setdefault calls therefore have the same effect as params['min_width'] = node.get_attr('in_width'), so it's better to use the more straightforward form.
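Since params is freshly built at that point, the two forms behave identically; a quick generic illustration (plain Python, not the hls4ml source):

```python
# setdefault only matters when the key might already exist; on a fresh
# dict it is equivalent to a direct assignment.
params_a = {'fill_fn': 'FillConv1DBuffer'}   # no 'min_width' key yet
params_a.setdefault('min_width', 32)          # sets it, since the key is absent

params_b = {'fill_fn': 'FillConv1DBuffer'}
params_b['min_width'] = 32                    # direct assignment: same result, clearer intent

print(params_a == params_b)  # True
```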
