
Update token_handler.py #774

Closed
wants to merge 1 commit into from

Conversation

@DedyKredo DedyKredo commented Mar 11, 2024

Type

bug_fix, enhancement


Description

  • Introduced error handling in _get_system_user_tokens to return -1 when an exception occurs, enhancing the robustness of token calculations.
  • Minor formatting fix in count_tokens method without altering its functionality.

Changes walkthrough

Relevant files
Enhancement
token_handler.py
Enhance Token Calculation Robustness and Error Handling   

pr_agent/algo/token_handler.py

  • Wrapped token calculation in _get_system_user_tokens with a try-except
    block.
  • Returns -1 if an exception occurs during token calculation.
  • No change in the count_tokens method functionality, just formatting
    adjustment.
  • +10/-7   
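
    Based on the walkthrough above, the change presumably follows the pattern below. This is a minimal sketch, not the PR's actual code: the real method uses Jinja2 rendering and a tiktoken encoder, which are replaced here by `str.format` and a toy whitespace tokenizer.

    ```python
    # Hedged sketch of the described change: token counting wrapped in
    # try/except, returning -1 on any failure. str.format and the toy
    # encoder stand in for the real Jinja2 rendering and tiktoken encoder.

    def get_system_user_tokens(encoder, variables, system, user):
        """Return the combined token count of both rendered prompts, or -1 on error."""
        try:
            system_prompt = system.format(**variables)
            user_prompt = user.format(**variables)
            return len(encoder(system_prompt)) + len(encoder(user_prompt))
        except Exception:
            return -1

    # Toy whitespace "tokenizer" standing in for tiktoken's encode():
    toy_encoder = lambda text: text.split()

    ok = get_system_user_tokens(toy_encoder, {"name": "PR"}, "Hello {name}", "Bye {name}")  # 4
    bad = get_system_user_tokens(toy_encoder, {}, "Hello {missing}", "Bye")                 # -1
    ```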

    PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    @codiumai-pr-agent-pro codiumai-pr-agent-pro bot added enhancement New feature or request bug_fix labels Mar 11, 2024
Contributor

    PR Description updated to latest commit (a461c2a)

Contributor

    PR Review

    ⏱️ Estimated effort to review [1-5]

    2, because the changes are straightforward and localized to a single file with clear intentions. The modifications include adding a try-except block and a minor formatting adjustment, which are easy to understand and review.

    🏅 Score

    85

    🧪 Relevant tests

    No

    🎫 Relevant ticket

    No

    🔍 Possible issues

    Error Handling: Returning -1 in case of an exception might not be the best approach for all use cases. It could be beneficial to either log the specific error or throw a custom exception that calling functions can handle more gracefully.

    Exception Type: The catch-all except: statement might suppress unexpected errors, making debugging more difficult. It's generally a good practice to catch specific exceptions.
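
    To make the second point concrete, here is a hedged sketch of what a narrower handler could look like; the function name and logging setup are illustrative, not taken from the PR.

    ```python
    import logging

    logger = logging.getLogger(__name__)

    def count_tokens_safe(encoder, text):
        # `except Exception` (rather than a bare `except:`) lets SystemExit and
        # KeyboardInterrupt propagate, and logging the traceback preserves the
        # root cause instead of silently swallowing it.
        try:
            return len(encoder(text))
        except Exception:
            logger.exception("token counting failed for %d chars", len(text))
            return -1

    def broken_encoder(text):
        # Simulates an encoder failure for demonstration purposes.
        raise RuntimeError("encoder unavailable")
    ```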

    🔒 Security concerns

    No


    ✨ Review tool usage guide:

    Overview:
    The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
    When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:

    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    

    With a configuration file, use the following template:

    [pr_reviewer]
    some_config1=...
    some_config2=...
    
    Utilizing extra instructions

The review tool can be configured with extra instructions, which can be used to guide the model toward feedback tailored to the needs of your project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

    Examples for extra instructions:

    [pr_reviewer] # /review #
    extra_instructions="""
    In the 'possible issues' section, emphasize the following:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

How to enable/disable automation
• When you first install the PR-Agent app, the default mode for the review tool is:
pr_commands = ["/review", ...]


meaning the review tool will run automatically on every PR, with the default configuration.
Edit this field to enable or disable the tool, or to change the configurations it uses.

    Auto-labels

    The review tool can auto-generate two specific types of labels for a PR:

    • a possible security issue label, that detects possible security issues (enable_review_labels_security flag)
    • a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
    Extra sub-tools

The review tool provides a collection of possible feedback options for a PR.
It is recommended to review the available options and choose those relevant to your use case.
Some features that are disabled by default are quite useful and worth enabling, for example:
require_score_review, require_soc2_ticket, and more.

    Auto-approve PRs

    By invoking:

    /review auto_approve
    

    The tool will automatically approve the PR, and add a comment with the approval.

To ensure safety, the auto-approval feature is disabled by default. To enable auto-approval, you need to explicitly set the following in a pre-defined configuration file:

    [pr_reviewer]
    enable_auto_approval = true
    

    (this specific flag cannot be set with a command line argument, only in the configuration file, committed to the repository)

You can also enable auto-approval only if the PR meets certain requirements, such as the estimated_review_effort being at or below a certain threshold, by adjusting the flag:

    [pr_reviewer]
    maximal_review_effort = 5
    
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the review usage page for a comprehensive guide on using this tool.

    Contributor

    codiumai-pr-agent-pro bot commented Mar 11, 2024

    PR Code Suggestions

Category | Suggestions
    Best practice
    Replace the bare except: with except Exception: to avoid catching system-exiting exceptions.

    The bare except: clause will catch all exceptions, including system-exiting exceptions and
    those that you might not want to catch. This can make debugging harder and might hide
    underlying bugs. It's better to catch specific exceptions. If you're unsure which specific
    exceptions might be raised by the code in the try block, you can start by catching
    Exception, which will not catch system-exiting exceptions.

    pr_agent/algo/token_handler.py [52-60]

     try:
         environment = Environment(undefined=StrictUndefined)
         system_prompt = environment.from_string(system).render(vars)
         user_prompt = environment.from_string(user).render(vars)
         system_prompt_tokens = len(encoder.encode(system_prompt))
         user_prompt_tokens = len(encoder.encode(user_prompt))
         return system_prompt_tokens + user_prompt_tokens
    -except:
    +except Exception as e:
    +    # Consider logging the exception here
         return -1
     
    Replace returning a magic number with raising a custom exception for error handling.

    Returning a magic number (-1) to indicate an error is not a best practice in Python, as it
    can be confusing and might lead to bugs if not properly handled by the calling code. It's
    better to raise a custom exception that clearly indicates what went wrong. This approach
    makes the code more readable and easier to debug.

    pr_agent/algo/token_handler.py [59-60]

    -except:
    -    return -1
    +except Exception as e:
    +    raise ValueError("Failed to calculate token counts due to an error: " + str(e))
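
    A minimal sketch of the custom-exception approach this suggestion describes. The exception name `TokenCalculationError` is hypothetical, not part of the PR or the suggestion's diff.

    ```python
    class TokenCalculationError(Exception):
        """Raised when prompt token counting fails (hypothetical name)."""

    def get_tokens_or_raise(encoder, text):
        try:
            return len(encoder(text))
        except Exception as e:
            # Chain the original exception so the root cause stays visible.
            raise TokenCalculationError(f"failed to count tokens: {e}") from e
    ```

    Callers can then distinguish a genuine failure from a legitimate count, instead of testing for the -1 sentinel.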
     
    Performance
    Store the results of encoder.encode calls in variables to improve performance.

    The encoder.encode method is called twice with the same parameters (system_prompt and
    user_prompt). To improve performance, especially if encoding is resource-intensive,
    consider storing the results of these calls in variables and reusing them.

    pr_agent/algo/token_handler.py [56-57]

    -system_prompt_tokens = len(encoder.encode(system_prompt))
    -user_prompt_tokens = len(encoder.encode(user_prompt))
    +system_encoded = encoder.encode(system_prompt)
    +user_encoded = encoder.encode(user_prompt)
    +system_prompt_tokens = len(system_encoded)
    +user_prompt_tokens = len(user_encoded)
     
    Enhancement
    Omit redundant arguments for cleaner code.

    The disallowed_special=() argument in the encoder.encode method call is redundant if
    you're not specifying any disallowed special tokens. If this is the default behavior, you
    can omit this argument for cleaner code.

    pr_agent/algo/token_handler.py [72]

    -return len(self.encoder.encode(patch, disallowed_special=()))
    +return len(self.encoder.encode(patch))
     

    ✨ Improve tool usage guide:

    Overview:
    The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
    When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:

    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    

    With a configuration file, use the following template:

    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    
Enabling/disabling automation

    When you first install the app, the default mode for the improve tool is:

    pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]
    

    meaning the improve tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.

    Utilizing extra instructions

Extra instructions are very important for the improve tool, since they enable you to guide the model toward suggestions that are more relevant to the specific needs of the project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

    Examples for extra instructions:

    [pr_code_suggestions] # /improve #
    extra_instructions="""
    Emphasize the following aspects:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    A note on code suggestions quality
• While the current AI for code is getting better and better (GPT-4), it's not flawless. Not all the suggestions will be perfect, and a user should not accept all of them automatically.
• Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas, and thoughts for the user, who can then apply their judgment, experience, and understanding of the code base.
• It is recommended to use the 'extra_instructions' field to guide the model toward suggestions that are more relevant to the specific needs of the project, or to use the custom suggestions 💎 tool.
• With large PRs, the best quality will be obtained by using 'improve --extended' mode.
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the improve usage page for a more comprehensive guide on using this tool.

    @DedyKredo
    Author

    /help

    Contributor

    codiumai-pr-agent-pro bot commented Mar 11, 2024

    PR Agent Walkthrough

    🤖 Welcome to the PR Agent, an AI-powered tool for automated pull request analysis, feedback, suggestions and more.

    Here is a list of tools you can use to interact with the PR Agent:

Tool | Description | Invoke Interactively 💎

    DESCRIBE

    Generates PR description - title, type, summary, code walkthrough and labels
    • Run

    REVIEW

    Adjustable feedback about the PR, possible issues, security concerns, review effort and more
    • Run

    IMPROVE

    Code suggestions for improving the PR.
    • Run

    ANALYZE 💎

Identifies code components that changed in the PR, and enables you to interactively generate tests, docs, and code suggestions for each component.
    • Run

    UPDATE CHANGELOG

    Automatically updates the changelog.
    • Run

    ADD DOCUMENTATION 💎

    Generates documentation to methods/functions/classes that changed in the PR.
    • Run

    ASK

Answers free-text questions about the PR.

    [*]

    GENERATE CUSTOM LABELS

    Generates custom labels for the PR, based on specific guidelines defined by the user

    [*]

    TEST 💎

    Generates unit tests for a specific component, based on the PR code change.

    [*]

    CI FEEDBACK 💎

    Generates feedback and analysis for a failed CI job.

    [*]

    CUSTOM SUGGESTIONS 💎

    Generates custom suggestions for improving the PR code, based on specific guidelines defined by the user.

    [*]

    SIMILAR ISSUE

    Automatically retrieves and presents similar issues.

    [*]

(1) Note that each tool can be triggered automatically when a new PR is opened, or called manually by commenting on a PR.

    (2) Tools marked with [*] require additional parameters to be passed. For example, to invoke the /ask tool, you need to comment on a PR: /ask "<question content>". See the relevant documentation for each tool for more details.

    Contributor

    codiumai-pr-agent-pro bot commented Mar 11, 2024

    PR Documentation

    Here is a list of the files that were modified in the PR, with docstring for each altered code component:

    token_handler.py                                                                                                                               

      _get_system_user_tokens (method) [+9/-6]                                                                                                       
      Component signature:
      def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):

      Docstring:

      """
      Calculates the number of tokens in the system and user strings.
      
      Args:
      - pr: The pull request object.
      - encoder: An object of the encoding_for_model class from the tiktoken module.
      - vars: A dictionary of variables.
      - system: The system string.
      - user: The user string.
      
      Returns:
      The sum of the number of tokens in the system and user strings.
      """

    Contributor

    codiumai-pr-agent-pro bot commented Mar 11, 2024

    PR Analysis

    • This screen contains a list of code components that were changed in this PR.
    • You can initiate specific actions for each component, by checking the relevant boxes.
    • After you check a box, the action will be performed automatically by PR-Agent.
    • Results will appear as a comment on the PR, typically after 30-60 seconds.
File | Changed components
    token_handler.py
    • Test
    • Docs
    • Improve
    • Similar
     
    _get_system_user_tokens
    (method of TokenHandler)
     
    +9/-6
     

    ✨ Usage guide:

Using static code analysis capabilities, the analyze tool scans the PR code changes and finds the code components (methods, functions, classes) that changed in the PR.
    The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR:

    /analyze
    

Languages currently supported: Python, Java, C++, JavaScript, TypeScript.
    See more information about the tool in the docs.

Contributor

    Changelog updates:


    2024-03-11

    Enhanced

    • Introduced error handling in _get_system_user_tokens to return -1 when an exception occurs, enhancing the robustness of token calculations.
    • Minor formatting fix in count_tokens method without altering its functionality.

To commit the new content to the CHANGELOG.md file, please type:
    '/update_changelog --pr_update_changelog.push_changelog_changes=true'

    Contributor

    codiumai-pr-agent-pro bot commented Mar 11, 2024

    Generated tests for '_get_system_user_tokens'

      _get_system_user_tokens (method) [+9/-6]

      Component signature:

      def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):


      Tests for code changes in _get_system_user_tokens method:

      [happy path]
      _get_system_user_tokens should correctly calculate the total number of tokens for valid system and user strings

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_happy_path():
          # Given a TokenHandler with a mock encoder and valid system and user strings
          encoder = MockEncoder({"system": 5, "user": 3})
          token_handler = TokenHandler()
          vars = {"name": "PR Tester"}
          system = "Hello, {{ name }}!"
          user = "Welcome, {{ name }}!"
      
          # When _get_system_user_tokens is called
          total_tokens = token_handler._get_system_user_tokens(None, encoder, vars, system, user)
      
          # Then the total number of tokens should be correctly calculated
          assert total_tokens == 8
      [happy path]
      _get_system_user_tokens should handle empty strings for system and user without errors

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_with_empty_strings():
          # Given a TokenHandler with a mock encoder and empty strings for system and user
          encoder = MockEncoder({"": 0})
          token_handler = TokenHandler()
          vars = {}
      
          # When _get_system_user_tokens is called with empty strings
          total_tokens = token_handler._get_system_user_tokens(None, encoder, vars, "", "")
      
          # Then the total number of tokens should be 0
          assert total_tokens == 0
      [edge case]
      _get_system_user_tokens should return -1 when an exception occurs during token calculation

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_with_exception():
          # Given a TokenHandler with a mock encoder that raises an exception and invalid system and user strings
          encoder = MockEncoder({}, should_raise=True)
          token_handler = TokenHandler()
          vars = {"name": "PR Tester"}
          system = "Hello, {{ non_existent_var }}!"
          user = "Welcome, {{ another_non_existent_var }}!"
      
          # When _get_system_user_tokens is called
          total_tokens = token_handler._get_system_user_tokens(None, encoder, vars, system, user)
      
          # Then the method should return -1
          assert total_tokens == -1
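
    Note that tiktoken ships no `tiktoken.mocks` module, so the generated tests above will not import as written. A minimal stand-in `MockEncoder` matching the constructor shape those tests use might look like the sketch below; it reproduces the interface (a text-to-count mapping plus a `should_raise` flag), not the exact counts the tests expect.

    ```python
    class MockEncoder:
        """Hypothetical test double for tiktoken's encoder interface."""

        def __init__(self, token_counts, should_raise=False):
            self.token_counts = token_counts  # mapping: exact text -> token count
            self.should_raise = should_raise

        def encode(self, text, disallowed_special=()):
            if self.should_raise:
                raise ValueError("mock encoding failure")
            # Return a dummy token list of the configured length,
            # falling back to a whitespace count for unmapped text.
            return [0] * self.token_counts.get(text, len(text.split()))
    ```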

      ✨ Test tool usage guide:

The test tool generates tests for a selected component, based on the PR code changes.
      It can be invoked manually by commenting on any PR:

      /test component_name
      

      where 'component_name' is the name of a specific component in the PR. To get a list of the components that changed in the PR, use the analyze tool.
Languages currently supported: Python, Java, C++, JavaScript, TypeScript.

      Configuration options:

      • num_tests: number of tests to generate. Default is 3.
      • testing_framework: the testing framework to use. If not set, for Python it will use pytest, for Java it will use JUnit, for C++ it will use Catch2, and for JavaScript and TypeScript it will use jest.
      • avoid_mocks: if set to true, the tool will try to avoid using mocks in the generated tests. Note that even if this option is set to true, the tool might still use mocks if it cannot generate a test without them. Default is true.
      • extra_instructions: Optional extra instructions to the tool. For example: "use the following mock injection scheme: ...".
      • file: in case there are several components with the same name, you can specify the relevant file.
      • class_name: in case there are several components with the same name in the same file, you can specify the relevant class name.

      See more information about the test tool in the docs.

    @DedyKredo DedyKredo closed this Mar 11, 2024
    @mrT23 mrT23 deleted the DedyKredo-patch-1 branch May 18, 2024 09:28