
fix: transform function to support proper batch inference #125

Open · wants to merge 1 commit into base: master

Conversation

@taepd commented Jun 4, 2023

Issue #, if available:
This PR is related to #108 and #123.

Description of changes:
As mentioned in #123, the batch inference provided by TorchServe delivers requests to the handler as an actual batch. However, the batch inference implementation in #108 simply runs single-request inference in a loop, which is not a correct implementation of batch inference. TorchServe's documentation on batch inference shows an example where the developer handles this logic and feeds the entire input batch to the model.
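
To illustrate the distinction, here is a toy sketch (not code from this toolkit or from #108; `model` is a stand-in for a real model):

```python
import numpy as np

# Toy stand-in for a model: one call = one forward pass.
def model(batch: np.ndarray) -> np.ndarray:
    return batch * 2

requests = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]

# Looping single inference (what #108 effectively does): N forward passes.
loop_results = [model(r) for r in requests]

# True batch inference: one forward pass over the stacked batch.
batch_results = model(np.stack(requests))  # shape (3, 2) in, (3, 2) out
```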

If I understand correctly, keeping the batch inference implementation in its current state would be misleading to users.

To make batch inference work correctly, this PR modifies the transform function so that the full list of requests is passed to _transform_fn() as a list.
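
A simplified sketch of the intended shape (names mirror the toolkit, but this is an illustration, not the PR's actual code; `model.predict` is a hypothetical batched model call):

```python
from sagemaker_inference import decoder, encoder

# Illustration only: _transform_fn now receives the whole batch
# (a list of request payloads) instead of a single request.
def _transform_fn(model, input_data, content_type, accept):
    batch = [decoder.decode(item, content_type) for item in input_data]
    predictions = model.predict(batch)  # one call over the entire batch
    return [encoder.encode(p, accept) for p in predictions]
```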

However, this change also requires modifications to related functions such as default_input_fn(), as well as to the associated documentation, examples, etc. As far as I know there is no better alternative, so it would be good to review and discuss this PR before proceeding with modifications to the other functions.

Testing done:
yes

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

Tests

  • I have added tests that prove my fix is effective or that my feature works (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@chen3933

@nskool Is this code change going to impact customers who use the transformer and implemented their own input_fn, predict_fn, or output_fn that cannot handle len(data) > 1?

@sagemaker-bot (Collaborator)

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-inference-toolkit-pr
  • Commit ID: 4cf728c
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@nikhil-sk (Contributor)

@chen3933 It does not seem common to implement a predict_fn, input_fn, or output_fn that handles only len(data) == 1, but if a customer has implemented their handler to process only one request, e.g., with an assert check on the length of the input, then they will have to change that logic.
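
For example, a hypothetical customer handler like this would break under the change:

```python
# Hypothetical customer predict_fn that would break, because it
# assumes exactly one request per invocation.
def predict_fn(input_data, model):
    assert len(input_data) == 1, "expected a single request"
    return model(input_data[0])
```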

We can go ahead and document this behavior change if we merge this PR in.

While the predict_fn is mandatory for the customer to provide (https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py#L71), the input_fn and output_fn are not, so it may be easy for the customer to change predict_fn. However, it should be tested further whether the default input_fn/output_fn functions can process batch input, specifically the encode/decode methods here: https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py#L71 and https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/encoder.py#L93.
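
A quick probe of that question might look like this (a sketch, assuming JSON payloads; not part of the PR):

```python
import json
from sagemaker_inference import content_types, decoder, encoder

# Two requests encoded as one JSON array.
batch_payload = json.dumps([[1.0, 2.0], [3.0, 4.0]])

# default_input_fn delegates to decoder.decode(...).
array = decoder.decode(batch_payload, content_types.JSON)
print(array.shape)  # expect (2, 2) if batch decoding works

# default_output_fn delegates to encoder.encode(...).
print(encoder.encode(array, content_types.JSON))
```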

If the default_input_fn and default_output_fn (https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py) cannot handle batch_size > 1, then this will break a lot of scenarios.

@taepd Thank you for creating this PR. Can you confirm whether you tested the default input_fn/output_fn with batch_size > 1?
