
Update token_handler.py #784

Closed
wants to merge 1 commit into from

Conversation

DedyKredo

@DedyKredo DedyKredo commented Mar 14, 2024

Type

enhancement, bug_fix


Description

  • Introduced error handling in _get_system_user_tokens to return -1 when an exception occurs, enhancing the robustness of token calculation.
  • Minor formatting fix in count_tokens method to maintain code consistency.

Changes walkthrough

Relevant files
Enhancement
token_handler.py
Enhance Token Calculation Robustness and Minor Formatting Fix

pr_agent/algo/token_handler.py

  • Wrapped token calculation logic in _get_system_user_tokens with a
    try-except block.
  • Returns -1 if an exception occurs during token calculation.
  • No change in the logic of count_tokens method, just formatting
    adjustment.
  • +10/-7   
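Based on the walkthrough above, the modified method presumably resembles the sketch below. This is an illustrative reconstruction, not the actual pr-agent source: the method signature comes from the PR analysis later in this thread, but the body and the FakeEncoder helper are assumptions.

```python
class TokenHandler:
    """Minimal sketch of the change described above: the token calculation
    is wrapped in a try/except that returns -1 on any failure."""

    def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):
        try:
            # Simplified: the real implementation renders Jinja2 templates
            # and uses a tiktoken encoder before counting.
            system_tokens = len(encoder.encode(system))
            user_tokens = len(encoder.encode(user))
            return system_tokens + user_tokens
        except Exception:
            # Per the PR description, -1 signals that counting failed.
            return -1


class FakeEncoder:
    """Hypothetical stand-in encoder: one token per whitespace-separated word."""

    def encode(self, text):
        return text.split()


handler = TokenHandler()
print(handler._get_system_user_tokens(None, FakeEncoder(), {}, "a b", "c"))  # 3
print(handler._get_system_user_tokens(None, None, {}, "a", "b"))  # -1 (encoder is None)
```

Note that the except clause here mirrors the PR's behavior of swallowing all exceptions, which is exactly what the review below flags as a possible issue.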

    PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    @codiumai-pr-agent-pro codiumai-pr-agent-pro bot added the enhancement and bug_fix labels Mar 14, 2024

    PR Description updated to latest commit (a461c2a)


    PR Review

    ⏱️ Estimated effort to review [1-5]

    2, because the changes are straightforward and localized to a single file with clear objectives: enhancing error handling and minor formatting adjustments.

    🏅 Score

    85

    🧪 Relevant tests

    No

    🔍 Possible issues

    Error Handling: The use of a bare except clause in the try-except block could catch exceptions that are not related to the token calculation, potentially masking other issues. It's generally recommended to catch specific exceptions.

    Magic Number: Returning -1 to indicate an error is not self-explanatory. It might be better to raise a custom exception that clearly indicates the nature of the error.
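    As an alternative to the -1 sentinel flagged above, a custom exception makes the failure mode explicit. A minimal sketch: TokenCountError and the free function here are hypothetical names for illustration, not part of pr-agent.

```python
class TokenCountError(RuntimeError):
    """Hypothetical error type: raised when token counting fails."""


def get_system_user_tokens(encoder, system, user):
    try:
        return len(encoder.encode(system)) + len(encoder.encode(user))
    except Exception as exc:
        # Instead of returning a magic -1, raise a descriptive exception;
        # callers can catch TokenCountError and decide how to degrade.
        raise TokenCountError(f"token calculation failed: {exc}") from exc
```

    With this design a caller can no longer mistake a failure for a real count, and the original exception is preserved on the `__cause__` chain for debugging.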

    🔒 Security concerns

    No


    ✨ Review tool usage guide:

    Overview:
    The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
    When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:

    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    

    With a configuration file, use the following template:

    [pr_reviewer]
    some_config1=...
    some_config2=...
    
    Utilizing extra instructions

    The review tool can be configured with extra instructions, which can be used to guide the model to feedback tailored to the needs of your project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

    Examples for extra instructions:

    [pr_reviewer] # /review #
    extra_instructions="""
    In the 'possible issues' section, emphasize the following:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    How to enable/disable automation
    • When you first install the PR-Agent app, the default mode for the review tool is:
    pr_commands = ["/review", ...]
    

    meaning the review tool will run automatically on every PR, with the default configuration.
    Edit this field to enable/disable the tool, or to change the configurations used.

    Auto-labels

    The review tool can auto-generate two specific types of labels for a PR:

    • a possible security issue label, that detects possible security issues (enable_review_labels_security flag)
    • a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
    Extra sub-tools

    The review tool provides a collection of possible feedback about a PR.
    It is recommended to review the possible options and choose the ones relevant to your use case.
    Some of the features that are disabled by default are quite useful and worth enabling. For example:
    require_score_review, require_soc2_ticket, and more.

    Auto-approve PRs

    By invoking:

    /review auto_approve
    

    The tool will automatically approve the PR, and add a comment with the approval.

    To ensure safety, the auto-approval feature is disabled by default. To enable auto-approval, you need to explicitly set the following in a pre-defined configuration file:

    [pr_reviewer]
    enable_auto_approval = true
    

    (this specific flag cannot be set with a command line argument, only in the configuration file, committed to the repository)

    You can also enable auto-approval only if the PR meets certain requirements, such as the estimated_review_effort being equal to or below a certain threshold, by adjusting the flag:

    [pr_reviewer]
    maximal_review_effort = 5
    
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the review usage page for a comprehensive guide on using this tool.


    codiumai-pr-agent-pro bot commented Mar 14, 2024

    PR Code Suggestions

    Best practice
    Catch specific exceptions instead of using a bare except clause.

    It's recommended to catch specific exceptions instead of using a bare except: clause. This
    will help in identifying and handling potential errors more accurately. For instance, if
    Environment or encoder.encode methods are expected to raise specific exceptions, catch
    those explicitly.

    pr_agent/algo/token_handler.py [52-60]

     try:
         ...
    -except:
    +except SpecificException:
         return -1
     
    Document the reason behind not disallowing any special tokens in encoder.encode.

    The use of disallowed_special=() in encoder.encode method call seems to imply that no
    special tokens are disallowed. If this is intentional, consider documenting the reason
    behind this choice for future reference and clarity.

    pr_agent/algo/token_handler.py [72]

    +# No special tokens are disallowed in this context due to [reason].
     return len(self.encoder.encode(patch, disallowed_special=()))
     
    Enhancement
    Add logging within the except block for error reporting.

    Consider adding logging or error reporting within the except block to provide insights
    into what error occurred. This can be crucial for debugging and understanding why the
    method returned -1.

    pr_agent/algo/token_handler.py [52-60]

     try:
         ...
    -except:
    +except Exception as e:
    +    logger.error(f"Failed to get system user tokens: {e}")
         return -1
     
    Add error handling to count_tokens method for consistency and robustness.

    The method count_tokens currently does not handle any exceptions. Consider adding error
    handling similar to _get_system_user_tokens to make the behavior consistent across methods
    and to handle potential encoding issues gracefully.

    pr_agent/algo/token_handler.py [72]

    -return len(self.encoder.encode(patch, disallowed_special=()))
    +try:
    +    return len(self.encoder.encode(patch, disallowed_special=()))
    +except SpecificException:
    +    logger.error("Failed to encode patch.")
    +    return -1
     
    Maintainability
    Extract token calculation logic into a separate method.

    For better maintainability and readability, consider extracting the token calculation
    logic into a separate method. This will make _get_system_user_tokens cleaner and adhere to
    the single responsibility principle.

    pr_agent/algo/token_handler.py [56-58]

    -system_prompt_tokens = len(encoder.encode(system_prompt))
    -user_prompt_tokens = len(encoder.encode(user_prompt))
    -return system_prompt_tokens + user_prompt_tokens
    +return self._calculate_tokens(system_prompt) + self._calculate_tokens(user_prompt)
     
    +def _calculate_tokens(self, prompt):
    +    return len(encoder.encode(prompt))
    +

    ✨ Improve tool usage guide:

    Overview:
    The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
    When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:

    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    

    With a configuration file, use the following template:

    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    
    Enabling/disabling automation

    When you first install the app, the default mode for the improve tool is:

    pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]
    

    meaning the improve tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.

    Utilizing extra instructions

    Extra instructions are very important for the improve tool, since they enable you to guide the model toward suggestions that are more relevant to the specific needs of the project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

    Examples for extra instructions:

    [pr_code_suggestions] # /improve #
    extra_instructions="""
    Emphasize the following aspects:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    A note on code suggestions quality
    • While the current AI for code is getting better and better (GPT-4), it's not flawless. Not all suggestions will be perfect, and users should not accept all of them automatically.
    • Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas, and thoughts for the user, who can then apply their judgment, experience, and understanding of the code base.
    • It is recommended to use the 'extra_instructions' field to guide the model toward suggestions that are more relevant to the specific needs of the project, or to use the custom suggestions 💎 tool.
    • With large PRs, the best quality will be obtained by using '/improve --extended' mode.
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides a higher quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the improve usage page for a more comprehensive guide on using this tool.

    @DedyKredo
    Author

    /help


    codiumai-pr-agent-pro bot commented Mar 14, 2024

    PR Agent Walkthrough

    🤖 Welcome to the PR Agent, an AI-powered tool for automated pull request analysis, feedback, suggestions and more.

    Here is a list of tools you can use to interact with the PR Agent:

    Tool | Description | Invoke Interactively 💎

    DESCRIBE

    Generates PR description - title, type, summary, code walkthrough and labels
    • Run

    REVIEW

    Adjustable feedback about the PR, possible issues, security concerns, review effort and more
    • Run

    IMPROVE

    Code suggestions for improving the PR.
    • Run

    ANALYZE 💎

    Identifies code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component.
    • Run

    UPDATE CHANGELOG

    Automatically updates the changelog.
    • Run

    ADD DOCUMENTATION 💎

    Generates documentation to methods/functions/classes that changed in the PR.
    • Run

    ASK

    Answering free-text questions about the PR.

    [*]

    GENERATE CUSTOM LABELS

    Generates custom labels for the PR, based on specific guidelines defined by the user

    [*]

    TEST 💎

    Generates unit tests for a specific component, based on the PR code change.

    [*]

    CI FEEDBACK 💎

    Generates feedback and analysis for a failed CI job.

    [*]

    CUSTOM SUGGESTIONS 💎

    Generates custom suggestions for improving the PR code, based on specific guidelines defined by the user.

    [*]

    SIMILAR ISSUE

    Automatically retrieves and presents similar issues.

    [*]

    (1) Note that each tool can be triggered automatically when a new PR is opened, or called manually by commenting on a PR.

    (2) Tools marked with [*] require additional parameters to be passed. For example, to invoke the /ask tool, you need to comment on a PR: /ask "<question content>". See the relevant documentation for each tool for more details.


    Changelog updates:

    2024-03-14

    Enhanced

    • Introduced error handling in _get_system_user_tokens to return -1 when an exception occurs, enhancing the robustness of token calculation.
    • Minor formatting fix in count_tokens method to maintain code consistency.

    To commit the new content to the CHANGELOG.md file, please type:
    '/update_changelog --pr_update_changelog.push_changelog_changes=true'


    codiumai-pr-agent-pro bot commented Mar 14, 2024

    PR Analysis

    • This screen contains a list of code components that were changed in this PR.
    • You can initiate specific actions for each component, by checking the relevant boxes.
    • After you check a box, the action will be performed automatically by PR-Agent.
    • Results will appear as a comment on the PR, typically after 30-60 seconds.
    File | Changed components
    token_handler.py
    • Test
    • Docs
    • Improve
    • Similar
     
    _get_system_user_tokens
    (method of TokenHandler)
     
    +9/-6
     

    ✨ Usage guide:

    Using static code analysis capabilities, the analyze tool scans the PR code changes and finds the code components (methods, functions, classes) that changed in the PR.
    The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR:

    /analyze
    

    Languages currently supported: Python, Java, C++, JavaScript, TypeScript.
    See more information about the tool in the docs.


    codiumai-pr-agent-pro bot commented Mar 14, 2024

    Generated tests for '_get_system_user_tokens'

      _get_system_user_tokens (method) [+9/-6]

      Component signature:

      def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):


      Tests for code changes in _get_system_user_tokens method:

      [happy path]
      _get_system_user_tokens should correctly calculate the total number of tokens for valid system and user strings

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_happy_path():
          # Given a TokenHandler instance with a mock encoder and valid system and user strings
          token_handler = TokenHandler()
          mock_encoder = MockEncoder({"system": 5, "user": 3})
          vars = {"name": "PR Tester"}
          system = "Hello, {{ name }}!"
          user = "Welcome, {{ name }}!"
      
          # When _get_system_user_tokens is called with these parameters
          result = token_handler._get_system_user_tokens(None, mock_encoder, vars, system, user)
      
          # Then the result should be the sum of tokens for system and user strings
          assert result == 8
      [happy path]
      _get_system_user_tokens should handle empty strings for system and user without errors

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_with_empty_strings():
          # Given a TokenHandler instance with a mock encoder and empty strings for system and user
          token_handler = TokenHandler()
          mock_encoder = MockEncoder({"": 0})
          vars = {}
      
          # When _get_system_user_tokens is called with these parameters
          result = token_handler._get_system_user_tokens(None, mock_encoder, vars, "", "")
      
          # Then the result should be 0 indicating no tokens were found
          assert result == 0
      [edge case]
      _get_system_user_tokens should return -1 when an exception occurs during token calculation

      test_code:

      import pytest
      from pr_agent.algo.token_handler import TokenHandler
      from tiktoken.mocks import MockEncoder
      
      def test_get_system_user_tokens_with_exception():
          # Given a TokenHandler instance with a mock encoder that raises an exception and invalid system and user strings
          token_handler = TokenHandler()
          mock_encoder = MockEncoder(exception=RuntimeError("Encoding failed"))
          vars = {"name": "PR Tester"}
          system = "Hello, {{ non_existent_var }}!"
          user = "Welcome, {{ another_non_existent_var }}!"
      
          # When _get_system_user_tokens is called with these parameters
          result = token_handler._get_system_user_tokens(None, mock_encoder, vars, system, user)
      
          # Then the result should be -1 indicating an error occurred
          assert result == -1

      ✨ Test tool usage guide:

      The test tool generates tests for a selected component, based on the PR code changes.
      It can be invoked manually by commenting on any PR:

      /test component_name
      

      where 'component_name' is the name of a specific component in the PR. To get a list of the components that changed in the PR, use the analyze tool.
      Languages currently supported: Python, Java, C++, JavaScript, TypeScript.

      Configuration options:

      • num_tests: number of tests to generate. Default is 3.
      • testing_framework: the testing framework to use. If not set, for Python it will use pytest, for Java it will use JUnit, for C++ it will use Catch2, and for JavaScript and TypeScript it will use jest.
      • avoid_mocks: if set to true, the tool will try to avoid using mocks in the generated tests. Note that even if this option is set to true, the tool might still use mocks if it cannot generate a test without them. Default is true.
      • extra_instructions: Optional extra instructions to the tool. For example: "use the following mock injection scheme: ...".
      • file: in case there are several components with the same name, you can specify the relevant file.
      • class_name: in case there are several components with the same name in the same file, you can specify the relevant class name.
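
      Put together in a configuration file, the options above might look like the following sketch. The [pr_tests] section name is an assumption inferred from the naming convention of the other sections in this thread (pr_reviewer, pr_code_suggestions); the option names themselves come from the list above.

```toml
# Hypothetical config sketch; section name [pr_tests] is an assumption.
[pr_tests]
num_tests = 2                  # generate two tests instead of the default three
testing_framework = "pytest"   # explicit, though pytest is the Python default
avoid_mocks = true             # prefer mock-free tests where possible
```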

      See more information about the test tool in the docs.

    @DedyKredo DedyKredo closed this Mar 14, 2024
    @mrT23 mrT23 deleted the DedyKredo-patch-1 branch May 18, 2024 09:28