From f5c600a9f86412393f8ad37f1ad38786938f59e4 Mon Sep 17 00:00:00 2001
From: Slowly-Grokking <61430731+Slowly-Grokking@users.noreply.github.com>
Date: Sat, 15 Apr 2023 13:59:42 -0500
Subject: [PATCH 01/92] relocate data_ingestion.py so it works without code
 changes; update README

---
 README.md                                      | 8 ++++----
 autogpt/data_ingestion.py => data_ingestion.py | 0
 2 files changed, 4 insertions(+), 4 deletions(-)
 rename autogpt/data_ingestion.py => data_ingestion.py (100%)

diff --git a/README.md b/README.md
index cf370f13a928..194040a321d7 100644
--- a/README.md
+++ b/README.md
@@ -335,7 +335,7 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that
 ## 🧠 Memory pre-seeding

 ```bash
-# python scripts/data_ingestion.py -h
+# python data_ingestion.py -h
 usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]

 Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.

 options:
   -h, --help show this help message and exit
   --file FILE The file to ingest.
   --dir DIR The directory containing the files to ingest.
   --init Init the memory and wipe its content (default: False)
   --overlap OVERLAP The overlap size between chunks when ingesting files (default: 200)
   --max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000

-# python scripts/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
+# python data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
 ```

-This script located at `scripts/data_ingestion.py`, allows you to ingest files into memory and pre-seed it before running Auto-GPT.
+This script, located at `data_ingestion.py`, allows you to ingest files into memory and pre-seed it before running Auto-GPT.

 Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.

 To pre-seed the memory, the content of each document is split into chunks of a specified maximum length with a specified overlap between chunks, and then each chunk is added to the memory backend set in the .env file. When the AI is prompted to recall information, it can then access those pre-seeded memories to generate more informed and accurate responses.

 This technique is particularly useful when working with large amounts of data or when there is specific information that the AI needs to be able to access quickly.
 By pre-seeding the memory, the AI can retrieve and use this information more efficiently, saving time and API calls and improving the accuracy of its responses.

 You could for example download the documentation of an API, a GitHub repository, etc. and ingest it into memory before running Auto-GPT.

 Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.

-In the example above, the script initializes the memory, ingests all files within the `/seed_data` directory into memory with an overlap between chunks of 200 and a maximum length of each chunk of 4000.
+In the example above, the script initializes the memory, ingests all files within the `seed_data` directory into memory with an overlap between chunks of 200 and a maximum length of each chunk of 4000.
 Note that you can also use the `--file` argument to ingest a single file into memory and that the script will only ingest files within the `/auto_gpt_workspace` directory.

 You can adjust the `max_length` and overlap parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory:

diff --git a/autogpt/data_ingestion.py b/data_ingestion.py
similarity index 100%
rename from autogpt/data_ingestion.py
rename to data_ingestion.py

From 92c0106e8167b4fe12fc7a0fa2ac911fedefde88 Mon Sep 17 00:00:00 2001
From: Slowly-Grokking <61430731+Slowly-Grokking@users.noreply.github.com>
Date: Sat, 15 Apr 2023 15:33:47 -0500
Subject: [PATCH 02/92] Update README.md

---
 README.md | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 194040a321d7..ee399db90f0a 100644
--- a/README.md
+++ b/README.md
@@ -333,6 +333,7 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that
 ## 🧠 Memory pre-seeding

+Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
 ```bash
 # python data_ingestion.py -h
 usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]

 Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.

 options:
   -h, --help show this help message and exit
   --file FILE The file to ingest.
   --dir DIR The directory containing the files to ingest.
   --init Init the memory and wipe its content (default: False)
   --overlap OVERLAP The overlap size between chunks when ingesting files (default: 200)
   --max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000

-# python data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
+# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
 ```
+In the example above, the script initializes the memory, ingests all files within the `Auto-Gpt/autogpt/auto_gpt_workspace/DataFolder` directory into memory with an overlap between chunks of 100 and a maximum length of each chunk of 2000.

+Note that you can also use the `--file` argument to ingest a single file into memory and that data_ingestion.py will only ingest files within the `/auto_gpt_workspace` directory.

+The DIR path is relative to the auto_gpt_workspace directory, so `python data_ingestion.py --dir . --init` will ingest everything in the `auto_gpt_workspace` directory.

-This script, located at `data_ingestion.py`, allows you to ingest files into memory and pre-seed it before running Auto-GPT.

-Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.

-To pre-seed the memory, the content of each document is split into chunks of a specified maximum length with a specified overlap between chunks, and then each chunk is added to the memory backend set in the .env file. When the AI is prompted to recall information, it can then access those pre-seeded memories to generate more informed and accurate responses.

-This technique is particularly useful when working with large amounts of data or when there is specific information that the AI needs to be able to access quickly.
-By pre-seeding the memory, the AI can retrieve and use this information more efficiently, saving time and API calls and improving the accuracy of its responses.

-You could for example download the documentation of an API, a GitHub repository, etc. and ingest it into memory before running Auto-GPT.

+Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data into its memory. Chunks of data are split and added to memory, allowing the AI to access them quickly and generate more accurate responses. It's useful for large datasets or when specific information needs to be accessed quickly. Examples include ingesting API or GitHub documentation before running Auto-GPT.

 ⚠️ If you use Redis as your memory, make sure to run Auto-GPT with the `WIPE_REDIS_ON_START` set to `False` in your `.env` file.

 Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.

-In the example above, the script initializes the memory, ingests all files within the `seed_data` directory into memory with an overlap between chunks of 200 and a maximum length of each chunk of 4000.
-Note that you can also use the `--file` argument to ingest a single file into memory and that the script will only ingest files within the `/auto_gpt_workspace` directory.
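For readers who want to see what the ingestion described above boils down to, here is a minimal sketch of the read–split–store loop such a script performs. It is a hedged simplification: the memory object, its `add()` method, and the chunk labeling format are assumptions, not code taken from the actual `data_ingestion.py`.

```python
import os


def split_text(content: str, max_length: int = 4000, overlap: int = 200):
    """Yield chunks of at most max_length characters; consecutive chunks share `overlap` characters."""
    step = max_length - overlap
    for start in range(0, len(content), step):
        yield content[start:start + max_length]


def ingest_directory(memory, directory: str, max_length: int = 4000, overlap: int = 200) -> None:
    """Walk a directory and add every file's chunks to the memory backend."""
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                content = f.read()
            for i, chunk in enumerate(split_text(content, max_length, overlap)):
                memory.add(f"Filename: {path}\nContent part {i}: {chunk}")
```

Any object exposing an `add(text)` method (local cache, Redis, Pinecone) can be passed as `memory`, which is what lets one script pre-seed whichever backend `.env` selects.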
- You can adjust the `max_length` and overlap parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory:
- Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.

From 8cbe438ad5c24e4c8fde34fc4c7d4f52abf0f5ab Mon Sep 17 00:00:00 2001
From: "roby.parapat"
Date: Sun, 16 Apr 2023 06:33:43 +0700
Subject: [PATCH 03/92] move comment to correct position

---
 autogpt/commands/execute_code.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/autogpt/commands/execute_code.py b/autogpt/commands/execute_code.py
index 86d6c177b796..f174b7bca159 100644
--- a/autogpt/commands/execute_code.py
+++ b/autogpt/commands/execute_code.py
@@ -41,6 +41,9 @@ def execute_python_file(file: str):

     try:
         client = docker.from_env()

+        # You can replace 'python:3.8' with the desired Python image/version
+        # You can find available Python images on Docker Hub:
+        # https://hub.docker.com/_/python
         image_name = "python:3.10"
         try:
             client.images.get(image_name)
@@ -58,9 +61,6 @@ def execute_python_file(file: str):
             elif status:
                 print(status)

-        # You can replace 'python:3.8' with the desired Python image/version
-        # You can find available Python images on Docker Hub:
-        # https://hub.docker.com/_/python
         container = client.containers.run(
             image_name,
             f"python {file}",

From 08eb2566e41a9b1619b98b517c2dfb217e1f75d1 Mon Sep 17 00:00:00 2001
From: lonrun
Date: Sun, 16 Apr 2023 07:37:50 +0800
Subject: [PATCH 04/92] Add run scripts for shell

---
 run.sh            | 9 +++++++++
 run_continuous.sh | 3 +++
 2 files changed, 12 insertions(+)
 create mode 100755 run.sh
 create mode 100755 run_continuous.sh

diff --git a/run.sh b/run.sh
new file mode 100755
index 000000000000..edcbc44155b9
--- /dev/null
+++ b/run.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+python scripts/check_requirements.py requirements.txt
+if [ $? -eq 1 ]
+then
+    echo Installing missing packages...
+    pip install -r requirements.txt
+fi
+python -m autogpt $@
+read -p "Press any key to continue..."

diff --git a/run_continuous.sh b/run_continuous.sh
new file mode 100755
index 000000000000..14c9cfd2ab4a
--- /dev/null
+++ b/run_continuous.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+argument="--continuous"
+./run.sh "$argument"

From 66ee7e1a81ec9c2b35808bcd5b3898ff20dcc290 Mon Sep 17 00:00:00 2001
From: Slowly-Grokking <61430731+Slowly-Grokking@users.noreply.github.com>
Date: Sat, 15 Apr 2023 21:33:26 -0500
Subject: [PATCH 05/92] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 37b81b854cb4..047606258a6a 100644
--- a/README.md
+++ b/README.md
@@ -360,7 +360,7 @@ options:
   --dir DIR The directory containing the files to ingest.
   --init Init the memory and wipe its content (default: False)
   --overlap OVERLAP The overlap size between chunks when ingesting files (default: 200)
-  --max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000
+  --max_length MAX_LENGTH The max_length of each chunk when ingesting files (default: 4000)

 # python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
 ```

From 93895090172e05b72e17b2ce02fbb9b57f99b6c3 Mon Sep 17 00:00:00 2001
From: Slowly-Grokking <61430731+Slowly-Grokking@users.noreply.github.com>
Date: Sun, 16 Apr 2023 02:01:42 -0500
Subject: [PATCH 06/92] Update README.md

---
 README.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 99a455bc2fd3..0476b2330068 100644
--- a/README.md
+++ b/README.md
@@ -378,6 +378,11 @@ Note that you can also use the `--file` argument to ingest a single file into me

 The DIR path is relative to the auto_gpt_workspace directory, so `python data_ingestion.py --dir . --init` will ingest everything in the `auto_gpt_workspace` directory.

+You can adjust the `max_length` and overlap parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory (a quick numeric illustration follows below):
+- Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.
+- Reducing the `max_length` value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
+- Increasing the `max_length` value will provide the AI with more contextual information from each chunk, reducing the number of chunks created and saving on OpenAI API requests. However, this may also use more prompt tokens and decrease the overall context available to the AI.
+
 Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data into its memory. Chunks of data are split and added to memory, allowing the AI to access them quickly and generate more accurate responses. It's useful for large datasets or when specific information needs to be accessed quickly. Examples include ingesting API or GitHub documentation before running Auto-GPT.

 ⚠️ If you use Redis as your memory, make sure to run Auto-GPT with the `WIPE_REDIS_ON_START` set to `False` in your `.env` file.

 Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.

-You can adjust the `max_length` and overlap parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory:
-
-- Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.
-- Reducing the `max_length` value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
-- Increasing the `max_length` value will provide the AI with more contextual information from each chunk, reducing the number of chunks created and saving on OpenAI API requests. However, this may also use more prompt tokens and decrease the overall context available to the AI.
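To put numbers on the overlap/`max_length` trade-offs listed above: a splitter that advances by `max_length - overlap` characters per chunk yields a chunk count given by a ceiling division. A small sketch (the stepping rule is an assumption consistent with the ingestion sketch shown earlier, not code quoted from the repository):

```python
def chunk_count(doc_length: int, max_length: int, overlap: int) -> int:
    """How many chunks a document yields when the splitter advances by (max_length - overlap)."""
    step = max_length - overlap
    return -(-doc_length // step)  # ceiling division


print(chunk_count(10_000, 4000, 200))   # 3 chunks
print(chunk_count(10_000, 4000, 1000))  # 4 chunks: more overlap -> more chunks and API requests
print(chunk_count(10_000, 2000, 200))   # 6 chunks: smaller max_length -> more, smaller chunks
```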
- ## πŸ’€ Continuous Mode ⚠️ Run the AI **without** user authorization, 100% automated. From ad7cefa10c0647feee85114d58559fcf83ba6743 Mon Sep 17 00:00:00 2001 From: 0xArty Date: Sun, 16 Apr 2023 10:30:59 +0100 Subject: [PATCH 07/92] updated contributing docs --- CODE_OF_CONDUCT.md | 40 +++++++++++++++ CONTRIBUTING.md | 125 +++++++++++++++++++++++++++++---------------- 2 files changed, 120 insertions(+), 45 deletions(-) create mode 100644 CODE_OF_CONDUCT.md diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 000000000000..d2331b4c60b9 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,40 @@ +# Code of Conduct for auto-gpt + +## 1. Purpose + +The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct. + +## 2. Scope + +This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project. + +## 3. Our Standards + +We encourage the following behavior: + +* Being respectful and considerate to others +* Actively seeking diverse perspectives +* Providing constructive feedback and assistance +* Demonstrating empathy and understanding + +We discourage the following behavior: + +* Harassment or discrimination of any kind +* Disrespectful, offensive, or inappropriate language or content +* Personal attacks or insults +* Unwarranted criticism or negativity + +## 4. Reporting and Enforcement + +If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary. + +Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations. + +## 5. Acknowledgements + +This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html). + +## 6. Contact + +If you have any questions or concerns, please contact the project maintainers. + diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 49c95991a2c3..b2a2490c566e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,64 +1,99 @@ +# Contributing to ProjectName -To contribute to this GitHub project, you can follow these steps: +First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. -1. Fork the repository you want to contribute to by clicking the "Fork" button on the project page. +This document provides guidelines and best practices to help you contribute effectively. -2. Clone the repository to your local machine using the following command: +## Table of Contents -``` -git clone https://github.com//Auto-GPT -``` -3. Install the project requirements -``` -pip install -r requirements.txt -``` -4. Install pre-commit hooks -``` -pre-commit install -``` -5. 
Create a new branch for your changes using the following command: +- [Code of Conduct](#code-of-conduct) +- [Getting Started](#getting-started) +- [How to Contribute](#how-to-contribute) + - [Reporting Bugs](#reporting-bugs) + - [Suggesting Enhancements](#suggesting-enhancements) + - [Submitting Pull Requests](#submitting-pull-requests) +- [Style Guidelines](#style-guidelines) + - [Code Formatting](#code-formatting) + - [Pre-Commit Hooks](#pre-commit-hooks) -``` -git checkout -b "branch-name" -``` -6. Make your changes to the code or documentation. -- Example: Improve User Interface or Add Documentation. +## Code of Conduct +By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. -7. Add the changes to the staging area using the following command: -``` -git add . -``` +## Getting Started -8. Commit the changes with a meaningful commit message using the following command: -``` -git commit -m "your commit message" -``` -9. Push the changes to your forked repository using the following command: -``` -git push origin branch-name -``` -10. Go to the GitHub website and navigate to your forked repository. +To start contributing, follow these steps: + +1. Fork the repository and clone your fork. +2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). +3. Make your changes in the new branch. +4. Test your changes thoroughly. +5. Commit and push your changes to your fork. +6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. + +## How to Contribute -11. Click the "New pull request" button. +### Reporting Bugs -12. Select the branch you just pushed to and the branch you want to merge into on the original repository. +If you find a bug in the project, please create an issue on GitHub with the following information: -13. Add a description of your changes and click the "Create pull request" button. +- A clear, descriptive title for the issue. +- A description of the problem, including steps to reproduce the issue. +- Any relevant logs, screenshots, or other supporting information. -14. Wait for the project maintainer to review your changes and provide feedback. +### Suggesting Enhancements -15. Make any necessary changes based on feedback and repeat steps 5-12 until your changes are accepted and merged into the main project. +If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: -16. Once your changes are merged, you can update your forked repository and local copy of the repository with the following commands: +- A clear, descriptive title for the issue. +- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. +- Any relevant examples, mockups, or supporting information. +### Submitting Pull Requests + +When submitting a pull request, please ensure that your changes meet the following criteria: + +- Your pull request should be atomic and focus on a single change. +- Your pull request should include tests for your change. +- You should have thoroughly tested your changes with multiple different prompts. +- You should have considered potential risks and mitigations for your changes. +- You should have documented your changes clearly and comprehensively. +- You should not include any unrelated or "extra" small tweaks or changes. 
+ +## Style Guidelines + +### Code Formatting + +We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. You can install `black` using `pip`: + +```bash +pip install black ``` -git fetch upstream -git checkout master -git merge upstream/master + +To format your code, run the following command in the project's root directory: + +```bash +black . ``` -Finally, delete the branch you created with the following command: +### Pre-Commit Hooks +We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: + +Install the pre-commit package using pip: +```bash +pip install pre-commit ``` -git branch -d branch-name + +Run the following command in the project's root directory to install the pre-commit hooks: +```bash +pre-commit install ``` -That's it you made it 🐣⭐⭐ + +Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. + +If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. + +Happy coding, and once again, thank you for your contributions! + +Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: + +https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file From 9c8d95d4db16992a504c8dc17be44fc1db0bd672 Mon Sep 17 00:00:00 2001 From: Gabe <66077254+MrBrain295@users.noreply.github.com> Date: Sun, 16 Apr 2023 11:05:00 -0500 Subject: [PATCH 08/92] Fix README.md New owner. --- README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index fcff589fc9d6..f9112a67f489 100644 --- a/README.md +++ b/README.md @@ -6,10 +6,10 @@ Our workflow has been improved, but please note that `master` branch may often be in a **broken** state. Please download the latest `stable` release from here: https://github.com/Torantulino/Auto-GPT/releases/latest. -![GitHub Repo stars](https://img.shields.io/github/stars/Torantulino/auto-gpt?style=social) +![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social) [![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas) [![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) -[![Unit Tests](https://github.com/Torantulino/Auto-GPT/actions/workflows/ci.yml/badge.svg)](https://github.com/Torantulino/Auto-GPT/actions/workflows/ci.yml) +[![Unit Tests](https://github.com/Significant-Gravitaso/Auto-GPT/actions/workflows/ci.yml/badge.svg)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml) Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI. @@ -21,7 +21,7 @@ https://user-images.githubusercontent.com/22963551/228855501-2f5777cf-755b-4407-

If you can spare a coffee, you can help to cover the costs of developing Auto-GPT and help push the boundaries of fully autonomous AI! Your support is greatly appreciated -Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below click here. +Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below click here.

@@ -106,7 +106,7 @@ _To execute the following commands, open a CMD, Bash, or Powershell window by na 2. Clone the repository: For this step, you need Git installed. Alternatively, you can download the zip file by clicking the button at the top of this page ☝️ ```bash -git clone https://github.com/Torantulino/Auto-GPT.git +git clone https://github.com/Significant-Gravitas/Auto-GPT.git ``` 3. Navigate to the directory where the repository was downloaded From fb9430da0abd73916c5ca31196fc0aea15664ac4 Mon Sep 17 00:00:00 2001 From: Sabin Mendiguren Date: Sun, 16 Apr 2023 09:12:50 -0700 Subject: [PATCH 09/92] Update .env.template Small fix for the TEMPERATURE to show the real default value --- .env.template | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.env.template b/.env.template index 685ed19f4f47..eeff2907cb24 100644 --- a/.env.template +++ b/.env.template @@ -21,7 +21,7 @@ AI_SETTINGS_FILE=ai_settings.yaml ### OPENAI # OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key) -# TEMPERATURE - Sets temperature in OpenAI (Default: 1) +# TEMPERATURE - Sets temperature in OpenAI (Default: 0) # USE_AZURE - Use Azure OpenAI or not (Default: False) OPENAI_API_KEY=your-openai-api-key TEMPERATURE=0 From 4a67c687c3778d33f2860e8ec489ac86b4f99066 Mon Sep 17 00:00:00 2001 From: Bently Date: Sun, 16 Apr 2023 17:20:30 +0100 Subject: [PATCH 10/92] simply removing a duplicate "Milvus Setup" in the README.md --- README.md | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/README.md b/README.md index fcff589fc9d6..d9a5463d6a96 100644 --- a/README.md +++ b/README.md @@ -375,18 +375,6 @@ WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate" # this is optional and i USE_WEAVIATE_EMBEDDED=False # set to True to run Embedded Weaviate MEMORY_INDEX="Autogpt" # name of the index to create for the application ``` - -### Milvus Setup - -[Milvus](https://milvus.io/) is a open-source, high scalable vector database to storage huge amount of vector-based memory and provide fast relevant search. - -- setup milvus database, keep your pymilvus version and milvus version same to avoid compatible issues. - - setup by open source [Install Milvus](https://milvus.io/docs/install_standalone-operator.md) - - or setup by [Zilliz Cloud](https://zilliz.com/cloud) -- set `MILVUS_ADDR` in `.env` to your milvus address `host:ip`. -- set `MEMORY_BACKEND` in `.env` to `milvus` to enable milvus as backend. -- optional - - set `MILVUS_COLLECTION` in `.env` to change milvus collection name as you want, `autogpt` is the default name. 
## View Memory Usage

From 5b428f509bb69b76c426aab3c19ba4c52fdb16ed Mon Sep 17 00:00:00 2001
From: Steve
Date: Sun, 16 Apr 2023 14:33:20 +0000
Subject: [PATCH 11/92] fix file logging issue

---
 autogpt/commands/file_operations.py | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index d02b125a79ed..5350d2f47f36 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -45,7 +45,7 @@ def log_operation(operation: str, filename: str) -> None:
         with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
             f.write("File Operation Logger ")

-    append_to_file(LOG_FILE, log_entry)
+    append_to_file(LOG_FILE, log_entry, shouldLog = False)

 def safe_join(base: str, *paths) -> str:
@@ -171,7 +171,7 @@ def write_to_file(filename: str, text: str) -> str:
         return f"Error: {str(e)}"

-def append_to_file(filename: str, text: str) -> str:
+def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
     """Append text to a file

     Args:
@@ -185,7 +185,10 @@ def append_to_file(filename: str, text: str) -> str:
         filepath = safe_join(WORKING_DIRECTORY, filename)
         with open(filepath, "a") as f:
             f.write(text)
-        log_operation("append", filename)
+
+        if shouldLog:
+            log_operation("append", filename)
+
         return "Text appended successfully."
     except Exception as e:
         return f"Error: {str(e)}"

From ccf3c7b89e38965d3e55b310e3e3606722c60b54 Mon Sep 17 00:00:00 2001
From: Pi
Date: Sun, 16 Apr 2023 17:24:18 +0100
Subject: [PATCH 12/92] Update file_operations.py

---
 autogpt/commands/file_operations.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index 5350d2f47f36..31500e8e6977 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -185,7 +185,7 @@ def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
         filepath = safe_join(WORKING_DIRECTORY, filename)
         with open(filepath, "a") as f:
             f.write(text)
-
+
         if shouldLog:
             log_operation("append", filename)

From 83930335f0ab984d79e69debce9fdd5a3d55c7d2 Mon Sep 17 00:00:00 2001
From: liuyachen <364579759@qq.com>
Date: Sun, 16 Apr 2023 19:47:29 +0800
Subject: [PATCH 13/92] Fix README

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index fcff589fc9d6..7e01295906a6 100644
--- a/README.md
+++ b/README.md
@@ -189,18 +189,18 @@ Here are some common arguments you can use when running Auto-GPT:
 > Replace anything in angled brackets (<>) with a value you want to specify
 * View all available command line arguments
 ```bash
-python scripts/main.py --help
+python -m autogpt --help
 ```
 * Run Auto-GPT with a different AI Settings file
 ```bash
-python scripts/main.py --ai-settings <filename>
+python -m autogpt --ai-settings <filename>
 ```
 * Specify one of 4 memory backends: `local`, `redis`, `pinecone` or `no_memory`
 ```bash
-python scripts/main.py --use-memory <memory-backend>
+python -m autogpt --use-memory <memory-backend>
 ```
 > **NOTE**: There are shorthands for some of these flags, for example `-m` for `--use-memory`.
Use `python -m autogpt --help` for more information ## πŸ—£οΈ Speech Mode From c3f01d9b2fe36cfdf70c81db69673a8c5da47c76 Mon Sep 17 00:00:00 2001 From: GyDi Date: Sun, 16 Apr 2023 18:25:28 +0800 Subject: [PATCH 14/92] fix: config save and load path inconsistent --- autogpt/prompt.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/prompt.py b/autogpt/prompt.py index 5cc98cd36db5..18a5736c19e8 100644 --- a/autogpt/prompt.py +++ b/autogpt/prompt.py @@ -177,7 +177,7 @@ def construct_prompt() -> str: if not config.ai_name: config = prompt_user() - config.save() + config.save(CFG.ai_settings_file) # Get rid of this global: global ai_name From 11620cc57188b57b413e38f4c6873c3304bd4906 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Sun, 16 Apr 2023 18:52:22 +0200 Subject: [PATCH 15/92] Fix and consolidate command workspace resolution --- autogpt/commands/audio_text.py | 6 ++-- autogpt/commands/execute_code.py | 14 ++++----- autogpt/commands/file_operations.py | 45 ++++++----------------------- autogpt/commands/image_gen.py | 8 ++--- autogpt/workspace.py | 39 +++++++++++++++++++++++++ 5 files changed, 59 insertions(+), 53 deletions(-) create mode 100644 autogpt/workspace.py diff --git a/autogpt/commands/audio_text.py b/autogpt/commands/audio_text.py index b9ca988c64d5..84819d5ed75a 100644 --- a/autogpt/commands/audio_text.py +++ b/autogpt/commands/audio_text.py @@ -2,15 +2,13 @@ import json from autogpt.config import Config -from autogpt.commands.file_operations import safe_join +from autogpt.workspace import path_in_workspace cfg = Config() -working_directory = "auto_gpt_workspace" - def read_audio_from_file(audio_path): - audio_path = safe_join(working_directory, audio_path) + audio_path = path_in_workspace(audio_path) with open(audio_path, "rb") as audio_file: audio = audio_file.read() return read_audio(audio) diff --git a/autogpt/commands/execute_code.py b/autogpt/commands/execute_code.py index 86d6c177b796..eaafa00a828a 100644 --- a/autogpt/commands/execute_code.py +++ b/autogpt/commands/execute_code.py @@ -1,12 +1,11 @@ """Execute code in a Docker container""" import os -from pathlib import Path import subprocess import docker from docker.errors import ImageNotFound -WORKING_DIRECTORY = Path(__file__).parent.parent / "auto_gpt_workspace" +from autogpt.workspace import path_in_workspace, WORKSPACE_PATH def execute_python_file(file: str): @@ -19,12 +18,12 @@ def execute_python_file(file: str): str: The output of the file """ - print(f"Executing file '{file}' in workspace '{WORKING_DIRECTORY}'") + print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'") if not file.endswith(".py"): return "Error: Invalid file type. Only .py files are allowed." - file_path = os.path.join(WORKING_DIRECTORY, file) + file_path = path_in_workspace(file) if not os.path.isfile(file_path): return f"Error: File '{file}' does not exist." 
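As an aside on the Docker path this hunk touches: stripped of status handling, the execution pattern reduces to the following docker-py call. This is a hedged sketch; the image tag, script name, and host path are placeholders, and the real function also streams pull progress and validates the file first.

```python
import docker


def run_python_in_container(host_workspace: str, script: str) -> str:
    """Run a Python file inside a container with the workspace mounted read-only."""
    client = docker.from_env()
    output = client.containers.run(
        "python:3.10",                      # any Python image tag works here
        f"python {script}",
        volumes={host_workspace: {"bind": "/workspace", "mode": "ro"}},
        working_dir="/workspace",
        remove=True,                        # clean the container up afterwards
        stderr=True,
    )
    return output.decode("utf-8")
```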
@@ -65,7 +64,7 @@ def execute_python_file(file: str): image_name, f"python {file}", volumes={ - os.path.abspath(WORKING_DIRECTORY): { + os.path.abspath(WORKSPACE_PATH): { "bind": "/workspace", "mode": "ro", } @@ -100,9 +99,8 @@ def execute_shell(command_line: str) -> str: """ current_dir = os.getcwd() # Change dir into workspace if necessary - if str(WORKING_DIRECTORY) not in current_dir: - work_dir = os.path.join(os.getcwd(), WORKING_DIRECTORY) - os.chdir(work_dir) + if str(WORKSPACE_PATH) not in current_dir: + os.chdir(WORKSPACE_PATH) print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'") diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py index 31500e8e6977..7ce90a381589 100644 --- a/autogpt/commands/file_operations.py +++ b/autogpt/commands/file_operations.py @@ -1,19 +1,11 @@ """File operations for AutoGPT""" import os import os.path -from pathlib import Path +from autogpt.workspace import path_in_workspace, WORKSPACE_PATH from typing import Generator, List -# Set a dedicated folder for file I/O -WORKING_DIRECTORY = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKING_DIRECTORY): - os.makedirs(WORKING_DIRECTORY) - LOG_FILE = "file_logger.txt" -LOG_FILE_PATH = WORKING_DIRECTORY / LOG_FILE -WORKING_DIRECTORY = str(WORKING_DIRECTORY) +LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE def check_duplicate_operation(operation: str, filename: str) -> bool: @@ -48,25 +40,6 @@ def log_operation(operation: str, filename: str) -> None: append_to_file(LOG_FILE, log_entry, shouldLog = False) -def safe_join(base: str, *paths) -> str: - """Join one or more path components intelligently. - - Args: - base (str): The base path - *paths (str): The paths to join to the base path - - Returns: - str: The joined path - """ - new_path = os.path.join(base, *paths) - norm_new_path = os.path.normpath(new_path) - - if os.path.commonprefix([base, norm_new_path]) != base: - raise ValueError("Attempted to access outside of working directory.") - - return norm_new_path - - def split_file( content: str, max_length: int = 4000, overlap: int = 0 ) -> Generator[str, None, None]: @@ -104,7 +77,7 @@ def read_file(filename: str) -> str: str: The contents of the file """ try: - filepath = safe_join(WORKING_DIRECTORY, filename) + filepath = path_in_workspace(filename) with open(filepath, "r", encoding="utf-8") as f: content = f.read() return content @@ -159,7 +132,7 @@ def write_to_file(filename: str, text: str) -> str: if check_duplicate_operation("write", filename): return "Error: File has already been updated." try: - filepath = safe_join(WORKING_DIRECTORY, filename) + filepath = path_in_workspace(filename) directory = os.path.dirname(filepath) if not os.path.exists(directory): os.makedirs(directory) @@ -182,7 +155,7 @@ def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str: str: A message indicating success or failure """ try: - filepath = safe_join(WORKING_DIRECTORY, filename) + filepath = path_in_workspace(filename) with open(filepath, "a") as f: f.write(text) @@ -206,7 +179,7 @@ def delete_file(filename: str) -> str: if check_duplicate_operation("delete", filename): return "Error: File has already been deleted." try: - filepath = safe_join(WORKING_DIRECTORY, filename) + filepath = path_in_workspace(filename) os.remove(filepath) log_operation("delete", filename) return "File deleted successfully." 
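The sandboxing rule these file operations now share is implemented by `safe_path_join` in the new `autogpt/workspace.py`, shown at the end of this patch. A quick demonstration of the intended behavior (requires Python 3.9+ for `Path.is_relative_to`; the paths are illustrative):

```python
from pathlib import Path

base = Path("/app/auto_gpt_workspace")


def check(relative: str) -> None:
    """Resolve a path against the workspace and report whether it escapes it."""
    joined = base.joinpath(relative).resolve()
    if joined.is_relative_to(base):
        print(f"allowed: {joined}")
    else:
        print(f"blocked: {joined} is outside {base}")


check("notes/todo.txt")  # allowed: stays inside the workspace
check("../.env")         # blocked: resolving '..' escapes the workspace
```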
@@ -226,15 +199,15 @@ def search_files(directory: str) -> List[str]: found_files = [] if directory in {"", "/"}: - search_directory = WORKING_DIRECTORY + search_directory = WORKSPACE_PATH else: - search_directory = safe_join(WORKING_DIRECTORY, directory) + search_directory = path_in_workspace(directory) for root, _, files in os.walk(search_directory): for file in files: if file.startswith("."): continue - relative_path = os.path.relpath(os.path.join(root, file), WORKING_DIRECTORY) + relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH) found_files.append(relative_path) return found_files diff --git a/autogpt/commands/image_gen.py b/autogpt/commands/image_gen.py index 39e08845a609..6243616ea8ca 100644 --- a/autogpt/commands/image_gen.py +++ b/autogpt/commands/image_gen.py @@ -7,13 +7,11 @@ import openai import requests from PIL import Image -from pathlib import Path from autogpt.config import Config +from autogpt.workspace import path_in_workspace CFG = Config() -WORKING_DIRECTORY = Path(__file__).parent.parent / "auto_gpt_workspace" - def generate_image(prompt: str) -> str: """Generate an image from a prompt. @@ -65,7 +63,7 @@ def generate_image_with_hf(prompt: str, filename: str) -> str: image = Image.open(io.BytesIO(response.content)) print(f"Image Generated for prompt:{prompt}") - image.save(os.path.join(WORKING_DIRECTORY, filename)) + image.save(path_in_workspace(filename)) return f"Saved to disk:{filename}" @@ -93,7 +91,7 @@ def generate_image_with_dalle(prompt: str, filename: str) -> str: image_data = b64decode(response["data"][0]["b64_json"]) - with open(f"{WORKING_DIRECTORY}/{filename}", mode="wb") as png: + with open(path_in_workspace(filename), mode="wb") as png: png.write(image_data) return f"Saved to disk:{filename}" diff --git a/autogpt/workspace.py b/autogpt/workspace.py new file mode 100644 index 000000000000..7913491906e8 --- /dev/null +++ b/autogpt/workspace.py @@ -0,0 +1,39 @@ +import os +from pathlib import Path + +# Set a dedicated folder for file I/O +WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" + +# Create the directory if it doesn't exist +if not os.path.exists(WORKSPACE_PATH): + os.makedirs(WORKSPACE_PATH) + + +def path_in_workspace(relative_path: str | Path) -> Path: + """Get full path for item in workspace + + Parameters: + relative_path (str | Path): Path to translate into the workspace + + Returns: + Path: Absolute path for the given path in the workspace + """ + return safe_path_join(WORKSPACE_PATH, relative_path) + + +def safe_path_join(base: Path, *paths: str | Path) -> Path: + """Join one or more path components, asserting the resulting path is within the workspace. 
+ + Args: + base (Path): The base path + *paths (str): The paths to join to the base path + + Returns: + Path: The joined path + """ + joined_path = base.joinpath(*paths).resolve() + + if not joined_path.is_relative_to(base): + raise ValueError(f"Attempted to access path '{joined_path}' outside of working directory '{base}'.") + + return joined_path From 5698689361449854a865e8e1b28340f19e2a386e Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Sun, 16 Apr 2023 16:38:03 +0200 Subject: [PATCH 16/92] Update bug report template Add GPT-3 checkbox & emphasize to search for existing issues first --- .github/ISSUE_TEMPLATE/1.bug.yml | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/1.bug.yml b/.github/ISSUE_TEMPLATE/1.bug.yml index e2404c763d26..6e6d00ae21b3 100644 --- a/.github/ISSUE_TEMPLATE/1.bug.yml +++ b/.github/ISSUE_TEMPLATE/1.bug.yml @@ -2,6 +2,15 @@ name: Bug report πŸ› description: Create a bug report for Auto-GPT. labels: ['status: needs triage'] body: + - type: checkboxes + attributes: + label: ⚠️ Search for existing issues first ⚠️ + description: > + Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) + to see if an issue already exists for the same problem. + options: + - label: I have searched the existing issues, and there is no existing issue for my problem + required: true - type: markdown attributes: value: | @@ -19,13 +28,14 @@ body: - Provide commit-hash (`git rev-parse HEAD` gets it) - If it's a pip/packages issue, provide pip version, python version - If it's a crash, provide traceback. - - type: checkboxes attributes: - label: Duplicates - description: Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) to see if an issue already exists for the same problem. 
+ label: GPT-3 or GPT-4 + description: > + If you are using Auto-GPT with `--gpt3only`, your problems may be caused by + the limitations of GPT-3.5 options: - - label: I have searched the existing issues + - label: I am using Auto-GPT with GPT-3 (GPT-3.5) required: true - type: textarea attributes: From 41a0a687827ad16d51af79497a9d2b903af774d8 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Sun, 16 Apr 2023 16:44:08 +0200 Subject: [PATCH 17/92] fix(issue template): GPT-3 checkbox not required --- .github/ISSUE_TEMPLATE/1.bug.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/ISSUE_TEMPLATE/1.bug.yml b/.github/ISSUE_TEMPLATE/1.bug.yml index 6e6d00ae21b3..7f1d27718eb6 100644 --- a/.github/ISSUE_TEMPLATE/1.bug.yml +++ b/.github/ISSUE_TEMPLATE/1.bug.yml @@ -36,7 +36,6 @@ body: the limitations of GPT-3.5 options: - label: I am using Auto-GPT with GPT-3 (GPT-3.5) - required: true - type: textarea attributes: label: Steps to reproduce πŸ•Ή From 92ab3e0e8b2e430c2a82c765220a35098aa75157 Mon Sep 17 00:00:00 2001 From: Peter Svensson Date: Sun, 16 Apr 2023 13:25:04 +0200 Subject: [PATCH 18/92] fixes #1821 by installing required drivers and adding options to chromedriver --- Dockerfile | 3 ++- autogpt/commands/web_selenium.py | 4 ++++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/Dockerfile b/Dockerfile index 309b857c1c09..82672c94ba09 100644 --- a/Dockerfile +++ b/Dockerfile @@ -4,6 +4,7 @@ FROM python:3.11-slim # Install git RUN apt-get -y update RUN apt-get -y install git +RUN apt-get install -y libglib2.0 libnss3 libgconf-2-4 libfontconfig1 chromium-driver # Set environment variables ENV PIP_NO_CACHE_DIR=yes \ @@ -24,4 +25,4 @@ RUN pip install --no-cache-dir --user -r requirements-docker.txt COPY --chown=appuser:appuser autogpt/ ./autogpt # Set the entrypoint -ENTRYPOINT ["python", "-m", "autogpt"] +ENTRYPOINT ["python", "-m", "autogpt", "--debug"] diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py index 359803eed3f7..3dbd1b98bcbb 100644 --- a/autogpt/commands/web_selenium.py +++ b/autogpt/commands/web_selenium.py @@ -74,6 +74,10 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]: # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari driver = webdriver.Safari(options=options) else: + options.add_argument('--no-sandbox') + options.add_argument('--window-size=1420,1080') + options.add_argument('--headless') + options.add_argument('--disable-gpu') driver = webdriver.Chrome( executable_path=ChromeDriverManager().install(), options=options ) From e6d2de78932c156809550898b51d919aa12748c0 Mon Sep 17 00:00:00 2001 From: Peter Svensson Date: Sun, 16 Apr 2023 13:34:48 +0200 Subject: [PATCH 19/92] removed debug flag --- Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Dockerfile b/Dockerfile index 82672c94ba09..8c3c0d8cfd72 100644 --- a/Dockerfile +++ b/Dockerfile @@ -25,4 +25,4 @@ RUN pip install --no-cache-dir --user -r requirements-docker.txt COPY --chown=appuser:appuser autogpt/ ./autogpt # Set the entrypoint -ENTRYPOINT ["python", "-m", "autogpt", "--debug"] +ENTRYPOINT ["python", "-m", "autogpt"] From cd78f21b51ce3ddd786338650a099ae4ea5100f2 Mon Sep 17 00:00:00 2001 From: bvoo <60059541+bvoo@users.noreply.github.com> Date: Sun, 16 Apr 2023 05:27:14 -0700 Subject: [PATCH 20/92] cleanup --- Dockerfile | 3 +-- autogpt/commands/web_selenium.py | 10 ++++++---- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/Dockerfile b/Dockerfile index 
8c3c0d8cfd72..9886d74266f2 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -3,8 +3,7 @@ FROM python:3.11-slim

 # Install git
 RUN apt-get -y update
-RUN apt-get -y install git
-RUN apt-get install -y libglib2.0 libnss3 libgconf-2-4 libfontconfig1 chromium-driver
+RUN apt-get -y install git chromium-driver

 # Set environment variables
 ENV PIP_NO_CACHE_DIR=yes \
diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py
index 3dbd1b98bcbb..021d149c9cd0 100644
--- a/autogpt/commands/web_selenium.py
+++ b/autogpt/commands/web_selenium.py
@@ -64,6 +64,12 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]:
     options.add_argument(
         "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
     )
+    options.add_argument(
+        '--no-sandbox'
+    )
+    options.add_argument(
+        '--headless'
+    )

     if CFG.selenium_web_browser == "firefox":
         driver = webdriver.Firefox(
             executable_path=GeckoDriverManager().install(), options=options
@@ -74,10 +80,6 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]:
         # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
         driver = webdriver.Safari(options=options)
     else:
-        options.add_argument('--no-sandbox')
-        options.add_argument('--window-size=1420,1080')
-        options.add_argument('--headless')
-        options.add_argument('--disable-gpu')
         driver = webdriver.Chrome(
             executable_path=ChromeDriverManager().install(), options=options
         )

From 4fa97e92189b85e5ee15ec80bdcbedb35d07e0be Mon Sep 17 00:00:00 2001
From: Peter Svensson
Date: Sun, 16 Apr 2023 17:59:47 +0200
Subject: [PATCH 21/92] removed options so that @pi can merge this and another
 commit easily

---
 autogpt/commands/web_selenium.py | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py
index 021d149c9cd0..9b64ac2ea007 100644
--- a/autogpt/commands/web_selenium.py
+++ b/autogpt/commands/web_selenium.py
@@ -64,13 +64,7 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]:
     options.add_argument(
         "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
     )
-    options.add_argument(
-        '--no-sandbox'
-    )
-    options.add_argument(
-        '--headless'
-    )
-
+
     if CFG.selenium_web_browser == "firefox":
         driver = webdriver.Firefox(
             executable_path=GeckoDriverManager().install(), options=options

From 5634eee2cfb1fc8b139d79a6995134c5d9d6fe95 Mon Sep 17 00:00:00 2001
From: Peter Svensson
Date: Sun, 16 Apr 2023 19:33:27 +0200
Subject: [PATCH 22/92] removed erroneous whitespace to appease lint

---
 autogpt/commands/web_selenium.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py
index 9b64ac2ea007..359803eed3f7 100644
--- a/autogpt/commands/web_selenium.py
+++ b/autogpt/commands/web_selenium.py
@@ -64,7 +64,7 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]:
     options.add_argument(
         "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
     )
-
+
     if CFG.selenium_web_browser == "firefox":
         driver = webdriver.Firefox(
             executable_path=GeckoDriverManager().install(), options=options

From f02b6832e234f7288c57665ba8e5f958b51783af Mon Sep 17 00:00:00 2001
From: SBNovaScript
Date: Sat, 15 Apr 2023 16:03:22 -0400
Subject: [PATCH 23/92] Fix google result encoding.
--- autogpt/app.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/autogpt/app.py b/autogpt/app.py index eb7cdbea40b6..11ab97244ea9 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -128,8 +128,8 @@ def execute_command(command_name: str, arguments): return google_result else: google_result = google_search(arguments["input"]) - safe_message = google_result.encode("utf-8", "ignore") - return str(safe_message) + safe_message = [google_result_single.encode('utf-8', 'ignore') for google_result_single in google_result] + return str(safe_message) elif command_name == "memory_add": return memory.add(arguments["string"]) elif command_name == "start_agent": From 13602b4a63b1b4632ae58dfa4e83217e90cb21ce Mon Sep 17 00:00:00 2001 From: SBNovaScript Date: Sat, 15 Apr 2023 16:39:26 -0400 Subject: [PATCH 24/92] Add list type check --- autogpt/app.py | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/autogpt/app.py b/autogpt/app.py index 11ab97244ea9..6ead0d52e861 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -128,7 +128,13 @@ def execute_command(command_name: str, arguments): return google_result else: google_result = google_search(arguments["input"]) - safe_message = [google_result_single.encode('utf-8', 'ignore') for google_result_single in google_result] + + # google_result can be a list or a string depending on the search results + if isinstance(google_result, list): + safe_message = [google_result_single.encode('utf-8', 'ignore') for google_result_single in google_result] + else: + safe_message = google_result.encode('utf-8', 'ignore') + return str(safe_message) elif command_name == "memory_add": return memory.add(arguments["string"]) From 89909115226dfa2c26799dbbd684428dc12198f6 Mon Sep 17 00:00:00 2001 From: jayceslesar Date: Sun, 16 Apr 2023 14:02:48 -0400 Subject: [PATCH 25/92] unify annotations to future syntax --- autogpt/agent/agent_manager.py | 9 +++++---- autogpt/commands/evaluate_code.py | 4 ++-- autogpt/commands/file_operations.py | 8 +++++--- autogpt/commands/google_search.py | 5 +++-- autogpt/commands/improve_code.py | 5 +++-- autogpt/commands/web_playwright.py | 5 +++-- autogpt/commands/web_requests.py | 11 ++++++----- autogpt/commands/web_selenium.py | 9 +++++---- autogpt/commands/write_tests.py | 7 ++++--- autogpt/config/ai_config.py | 6 ++++-- autogpt/json_fixes/bracket_termination.py | 5 +++-- autogpt/json_fixes/parsing.py | 11 ++++++----- autogpt/llm_utils.py | 13 +++++++------ autogpt/memory/local.py | 10 ++++++---- autogpt/memory/no_memory.py | 8 +++++--- autogpt/memory/redismem.py | 8 +++++--- autogpt/processing/html.py | 7 ++++--- autogpt/promptgenerator.py | 8 +++++--- autogpt/token_counter.py | 4 ++-- 19 files changed, 83 insertions(+), 60 deletions(-) diff --git a/autogpt/agent/agent_manager.py b/autogpt/agent/agent_manager.py index 3467f8bf331e..e4bfb12611d4 100644 --- a/autogpt/agent/agent_manager.py +++ b/autogpt/agent/agent_manager.py @@ -1,5 +1,6 @@ """Agent manager for managing GPT agents""" -from typing import List, Tuple, Union +from __future__ import annotations + from autogpt.llm_utils import create_chat_completion from autogpt.config.config import Singleton @@ -14,7 +15,7 @@ def __init__(self): # Create new GPT agent # TODO: Centralise use of create_chat_completion() to globally enforce token limit - def create_agent(self, task: str, prompt: str, model: str) -> Tuple[int, str]: + def create_agent(self, task: str, prompt: str, model: str) -> tuple[int, str]: """Create a new agent and return its key Args: @@ -47,7 
+48,7 @@ def create_agent(self, task: str, prompt: str, model: str) -> Tuple[int, str]: return key, agent_reply - def message_agent(self, key: Union[str, int], message: str) -> str: + def message_agent(self, key: str | int, message: str) -> str: """Send a message to an agent and return its response Args: @@ -73,7 +74,7 @@ def message_agent(self, key: Union[str, int], message: str) -> str: return agent_reply - def list_agents(self) -> List[Tuple[Union[str, int], str]]: + def list_agents(self) -> list[tuple[str | int, str]]: """Return a list of all agents Returns: diff --git a/autogpt/commands/evaluate_code.py b/autogpt/commands/evaluate_code.py index a36952e5e0e5..8f7cbca9c1bf 100644 --- a/autogpt/commands/evaluate_code.py +++ b/autogpt/commands/evaluate_code.py @@ -1,10 +1,10 @@ """Code evaluation module.""" -from typing import List +from __future__ import annotations from autogpt.llm_utils import call_ai_function -def evaluate_code(code: str) -> List[str]: +def evaluate_code(code: str) -> list[str]: """ A function that takes in a string and returns a response from create chat completion api call. diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py index 31500e8e6977..2911d601758b 100644 --- a/autogpt/commands/file_operations.py +++ b/autogpt/commands/file_operations.py @@ -1,8 +1,10 @@ """File operations for AutoGPT""" +from __future__ import annotations + import os import os.path from pathlib import Path -from typing import Generator, List +from typing import Generator # Set a dedicated folder for file I/O WORKING_DIRECTORY = Path(os.getcwd()) / "auto_gpt_workspace" @@ -214,14 +216,14 @@ def delete_file(filename: str) -> str: return f"Error: {str(e)}" -def search_files(directory: str) -> List[str]: +def search_files(directory: str) -> list[str]: """Search for files in a directory Args: directory (str): The directory to search in Returns: - List[str]: A list of files found in the directory + list[str]: A list of files found in the directory """ found_files = [] diff --git a/autogpt/commands/google_search.py b/autogpt/commands/google_search.py index 6deb9b5033cc..148ba1d0e1cf 100644 --- a/autogpt/commands/google_search.py +++ b/autogpt/commands/google_search.py @@ -1,6 +1,7 @@ """Google search command for Autogpt.""" +from __future__ import annotations + import json -from typing import List, Union from duckduckgo_search import ddg @@ -33,7 +34,7 @@ def google_search(query: str, num_results: int = 8) -> str: return json.dumps(search_results, ensure_ascii=False, indent=4) -def google_official_search(query: str, num_results: int = 8) -> Union[str, List[str]]: +def google_official_search(query: str, num_results: int = 8) -> str | list[str]: """Return the results of a google search using the official Google API Args: diff --git a/autogpt/commands/improve_code.py b/autogpt/commands/improve_code.py index 05fe89e9ed11..e3440d8b7c6e 100644 --- a/autogpt/commands/improve_code.py +++ b/autogpt/commands/improve_code.py @@ -1,10 +1,11 @@ +from __future__ import annotations + import json -from typing import List from autogpt.llm_utils import call_ai_function -def improve_code(suggestions: List[str], code: str) -> str: +def improve_code(suggestions: list[str], code: str) -> str: """ A function that takes in code and suggestions and returns a response from create chat completion api call. 
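The idea behind this sweep of changes, summarized: `from __future__ import annotations` (PEP 563) makes every annotation in the module lazily evaluated, so PEP 604 unions (`str | None`) and builtin generics (`list[str]`, `dict[str, Any]`) can replace `typing.Union`, `List`, and `Dict` in annotations without breaking Python 3.7–3.9 at import time. A minimal self-contained illustration of the pattern:

```python
from __future__ import annotations  # annotations become lazy strings (PEP 563)


def first_match(items: list[str], prefix: str) -> str | None:
    """Return the first item starting with prefix, or None if nothing matches."""
    for item in items:
        if item.startswith(prefix):
            return item
    return None


print(first_match(["alpha", "beta"], "b"))  # beta
```

Note that the future import only covers annotations; the same union syntax used in runtime expressions would still require Python 3.10.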
diff --git a/autogpt/commands/web_playwright.py b/autogpt/commands/web_playwright.py index 93a46ac9c7df..a1abb6cb73d2 100644 --- a/autogpt/commands/web_playwright.py +++ b/autogpt/commands/web_playwright.py @@ -1,4 +1,6 @@ """Web scraping commands using Playwright""" +from __future__ import annotations + try: from playwright.sync_api import sync_playwright except ImportError: @@ -7,7 +9,6 @@ ) from bs4 import BeautifulSoup from autogpt.processing.html import extract_hyperlinks, format_hyperlinks -from typing import List, Union def scrape_text(url: str) -> str: @@ -45,7 +46,7 @@ def scrape_text(url: str) -> str: return text -def scrape_links(url: str) -> Union[str, List[str]]: +def scrape_links(url: str) -> str | list[str]: """Scrape links from a webpage Args: diff --git a/autogpt/commands/web_requests.py b/autogpt/commands/web_requests.py index a6161ec57d7f..50d8d383cb1a 100644 --- a/autogpt/commands/web_requests.py +++ b/autogpt/commands/web_requests.py @@ -1,5 +1,6 @@ """Browse a webpage and summarize it using the LLM model""" -from typing import List, Tuple, Union +from __future__ import annotations + from urllib.parse import urljoin, urlparse import requests @@ -66,7 +67,7 @@ def check_local_file_access(url: str) -> bool: def get_response( url: str, timeout: int = 10 -) -> Union[Tuple[None, str], Tuple[Response, None]]: +) -> tuple[None, str] | tuple[Response, None]: """Get the response from a URL Args: @@ -74,7 +75,7 @@ def get_response( timeout (int): The timeout for the HTTP request Returns: - Tuple[None, str] | Tuple[Response, None]: The response and error message + tuple[None, str] | tuple[Response, None]: The response and error message Raises: ValueError: If the URL is invalid @@ -136,14 +137,14 @@ def scrape_text(url: str) -> str: return text -def scrape_links(url: str) -> Union[str, List[str]]: +def scrape_links(url: str) -> str | list[str]: """Scrape links from a webpage Args: url (str): The URL to scrape links from Returns: - Union[str, List[str]]: The scraped links + str | list[str]: The scraped links """ response, error_message = get_response(url) if error_message: diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py index 359803eed3f7..1d078d76d7fe 100644 --- a/autogpt/commands/web_selenium.py +++ b/autogpt/commands/web_selenium.py @@ -1,4 +1,6 @@ """Selenium web scraping module.""" +from __future__ import annotations + from selenium import webdriver from autogpt.processing.html import extract_hyperlinks, format_hyperlinks import autogpt.processing.text as summary @@ -15,13 +17,12 @@ import logging from pathlib import Path from autogpt.config import Config -from typing import List, Tuple, Union FILE_DIR = Path(__file__).parent.parent CFG = Config() -def browse_website(url: str, question: str) -> Tuple[str, WebDriver]: +def browse_website(url: str, question: str) -> tuple[str, WebDriver]: """Browse a website and return the answer and links to the user Args: @@ -43,7 +44,7 @@ def browse_website(url: str, question: str) -> Tuple[str, WebDriver]: return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver -def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]: +def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]: """Scrape text from a website using selenium Args: @@ -97,7 +98,7 @@ def scrape_text_with_selenium(url: str) -> Tuple[WebDriver, str]: return driver, text -def scrape_links_with_selenium(driver: WebDriver, url: str) -> List[str]: +def scrape_links_with_selenium(driver: WebDriver, url: str) -> 
list[str]: """Scrape links from a website using selenium Args: diff --git a/autogpt/commands/write_tests.py b/autogpt/commands/write_tests.py index f1d6c9b2ce82..138a1adb6f83 100644 --- a/autogpt/commands/write_tests.py +++ b/autogpt/commands/write_tests.py @@ -1,16 +1,17 @@ """A module that contains a function to generate test cases for the submitted code.""" +from __future__ import annotations + import json -from typing import List from autogpt.llm_utils import call_ai_function -def write_tests(code: str, focus: List[str]) -> str: +def write_tests(code: str, focus: list[str]) -> str: """ A function that takes in code and focus topics and returns a response from create chat completion api call. Parameters: - focus (List): A list of suggestions around what needs to be improved. + focus (list): A list of suggestions around what needs to be improved. code (str): Code for test cases to be generated against. Returns: A result string from create chat completion. Test cases for the submitted code diff --git a/autogpt/config/ai_config.py b/autogpt/config/ai_config.py index 014e360f870a..86171357ba0b 100644 --- a/autogpt/config/ai_config.py +++ b/autogpt/config/ai_config.py @@ -2,8 +2,10 @@ """ A module that contains the AIConfig class object that contains the configuration """ +from __future__ import annotations + import os -from typing import List, Optional, Type +from typing import Type import yaml @@ -18,7 +20,7 @@ class AIConfig: """ def __init__( - self, ai_name: str = "", ai_role: str = "", ai_goals: Optional[List] = None + self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None ) -> None: """ Initialize a class instance diff --git a/autogpt/json_fixes/bracket_termination.py b/autogpt/json_fixes/bracket_termination.py index 692461aad5d0..822eed4a5468 100644 --- a/autogpt/json_fixes/bracket_termination.py +++ b/autogpt/json_fixes/bracket_termination.py @@ -1,7 +1,8 @@ """Fix JSON brackets.""" +from __future__ import annotations + import contextlib import json -from typing import Optional import regex from colorama import Fore @@ -46,7 +47,7 @@ def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): return json_string -def balance_braces(json_string: str) -> Optional[str]: +def balance_braces(json_string: str) -> str | None: """ Balance the braces in a JSON string. diff --git a/autogpt/json_fixes/parsing.py b/autogpt/json_fixes/parsing.py index 26d067939e54..0f15441160c2 100644 --- a/autogpt/json_fixes/parsing.py +++ b/autogpt/json_fixes/parsing.py @@ -1,8 +1,9 @@ """Fix and parse JSON strings.""" +from __future__ import annotations import contextlib import json -from typing import Any, Dict, Union +from typing import Any from autogpt.config import Config from autogpt.json_fixes.auto_fix import fix_json @@ -71,7 +72,7 @@ def correct_json(json_to_load: str) -> str: def fix_and_parse_json( json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Union[str, Dict[Any, Any]]: +) -> str | dict[Any, Any]: """Fix and parse JSON string Args: @@ -80,7 +81,7 @@ def fix_and_parse_json( Defaults to True. Returns: - Union[str, Dict[Any, Any]]: The parsed JSON. + str or dict[Any, Any]: The parsed JSON. """ with contextlib.suppress(json.JSONDecodeError): @@ -109,7 +110,7 @@ def fix_and_parse_json( def try_ai_fix( try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Union[str, Dict[Any, Any]]: +) -> str | dict[Any, Any]: """Try to fix the JSON with the AI Args: @@ -121,7 +122,7 @@ def try_ai_fix( exception: If try_to_fix_with_gpt is False. 
Returns: - Union[str, Dict[Any, Any]]: The JSON string or dictionary. + str or dict[Any, Any]: The JSON string or dictionary. """ if not try_to_fix_with_gpt: raise exception diff --git a/autogpt/llm_utils.py b/autogpt/llm_utils.py index 43739009c44a..2075f93446eb 100644 --- a/autogpt/llm_utils.py +++ b/autogpt/llm_utils.py @@ -1,6 +1,7 @@ +from __future__ import annotations + from ast import List import time -from typing import Dict, Optional import openai from openai.error import APIError, RateLimitError @@ -14,7 +15,7 @@ def call_ai_function( - function: str, args: List, description: str, model: Optional[str] = None + function: str, args: list, description: str, model: str | None = None ) -> str: """Call an AI function @@ -51,15 +52,15 @@ def call_ai_function( # Overly simple abstraction until we create something better # simple retry mechanism when getting a rate error or a bad gateway def create_chat_completion( - messages: List, # type: ignore - model: Optional[str] = None, + messages: list, # type: ignore + model: str | None = None, temperature: float = CFG.temperature, - max_tokens: Optional[int] = None, + max_tokens: int | None = None, ) -> str: """Create a chat completion using the OpenAI API Args: - messages (List[Dict[str, str]]): The messages to send to the chat completion + messages (list[dict[str, str]]): The messages to send to the chat completion model (str, optional): The model to use. Defaults to None. temperature (float, optional): The temperature to use. Defaults to 0.9. max_tokens (int, optional): The max tokens to use. Defaults to None. diff --git a/autogpt/memory/local.py b/autogpt/memory/local.py index 004153c101fb..6c7ee1b36a2f 100644 --- a/autogpt/memory/local.py +++ b/autogpt/memory/local.py @@ -1,6 +1,8 @@ +from __future__ import annotations + import dataclasses import os -from typing import Any, List, Optional, Tuple +from typing import Any import numpy as np import orjson @@ -97,7 +99,7 @@ def clear(self) -> str: self.data = CacheContent() return "Obliviated" - def get(self, data: str) -> Optional[List[Any]]: + def get(self, data: str) -> list[Any] | None: """ Gets the data from the memory that is most relevant to the given data. @@ -108,7 +110,7 @@ def get(self, data: str) -> Optional[List[Any]]: """ return self.get_relevant(data, 1) - def get_relevant(self, text: str, k: int) -> List[Any]: + def get_relevant(self, text: str, k: int) -> list[Any]: """ " matrix-vector mult to find score-for-each-row-of-matrix get indices for top-k winning scores @@ -127,7 +129,7 @@ def get_relevant(self, text: str, k: int) -> List[Any]: return [self.data.texts[i] for i in top_k_indices] - def get_stats(self) -> Tuple[int, Tuple[int, ...]]: + def get_stats(self) -> tuple[int, tuple[int, ...]]: """ Returns: The stats of the local cache. """ diff --git a/autogpt/memory/no_memory.py b/autogpt/memory/no_memory.py index 0a976690536e..4035a657f0e6 100644 --- a/autogpt/memory/no_memory.py +++ b/autogpt/memory/no_memory.py @@ -1,5 +1,7 @@ """A class that does not store any data. This is the default memory provider.""" -from typing import Optional, List, Any +from __future__ import annotations + +from typing import Any from autogpt.memory.base import MemoryProviderSingleton @@ -31,7 +33,7 @@ def add(self, data: str) -> str: """ return "" - def get(self, data: str) -> Optional[List[Any]]: + def get(self, data: str) -> list[Any] | None: """ Gets the data from the memory that is most relevant to the given data. NoMemory always returns None. 
@@ -51,7 +53,7 @@ def clear(self) -> str:
         """
         return ""
 
-    def get_relevant(self, data: str, num_relevant: int = 5) -> Optional[List[Any]]:
+    def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
         """
         Returns all the data in the memory that is relevant to the given data.
         NoMemory always returns None.
diff --git a/autogpt/memory/redismem.py b/autogpt/memory/redismem.py
index 4d73b7411269..0e8dd71d9165 100644
--- a/autogpt/memory/redismem.py
+++ b/autogpt/memory/redismem.py
@@ -1,5 +1,7 @@
 """Redis memory provider."""
-from typing import Any, List, Optional
+from __future__ import annotations
+
+from typing import Any
 
 import numpy as np
 import redis
@@ -99,7 +101,7 @@ def add(self, data: str) -> str:
         pipe.execute()
         return _text
 
-    def get(self, data: str) -> Optional[List[Any]]:
+    def get(self, data: str) -> list[Any] | None:
         """
         Gets the data from the memory that is most relevant to the given data.
 
@@ -119,7 +121,7 @@ def clear(self) -> str:
         self.redis.flushall()
         return "Obliviated"
 
-    def get_relevant(self, data: str, num_relevant: int = 5) -> Optional[List[Any]]:
+    def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
         """
         Returns all the data in the memory that is relevant to the given data.
         Args:
diff --git a/autogpt/processing/html.py b/autogpt/processing/html.py
index c43a0b74e8aa..e1912b6ad42c 100644
--- a/autogpt/processing/html.py
+++ b/autogpt/processing/html.py
@@ -1,10 +1,11 @@
 """HTML processing functions"""
+from __future__ import annotations
+
 from requests.compat import urljoin
-from typing import List, Tuple
 from bs4 import BeautifulSoup
 
 
-def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> List[Tuple[str, str]]:
+def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> list[tuple[str, str]]:
     """Extract hyperlinks from a BeautifulSoup object
 
     Args:
@@ -20,7 +21,7 @@ def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> List[Tuple[str, st
     ]
 
 
-def format_hyperlinks(hyperlinks: List[Tuple[str, str]]) -> List[str]:
+def format_hyperlinks(hyperlinks: list[tuple[str, str]]) -> list[str]:
     """Format hyperlinks to be displayed to the user
 
     Args:
diff --git a/autogpt/promptgenerator.py b/autogpt/promptgenerator.py
index 4f5186150ad2..0ad7046a0c41 100644
--- a/autogpt/promptgenerator.py
+++ b/autogpt/promptgenerator.py
@@ -1,6 +1,8 @@
 """ A module for generating custom prompt strings."""
+from __future__ import annotations
+
 import json
-from typing import Any, Dict, List
+from typing import Any
 
 
 class PromptGenerator:
@@ -61,7 +63,7 @@ def add_command(self, command_label: str, command_name: str, args=None) -> None:
 
         self.commands.append(command)
 
-    def _generate_command_string(self, command: Dict[str, Any]) -> str:
+    def _generate_command_string(self, command: dict[str, Any]) -> str:
         """
         Generate a formatted string representation of a command.
 
@@ -94,7 +96,7 @@ def add_performance_evaluation(self, evaluation: str) -> None:
         """
         self.performance_evaluation.append(evaluation)
 
-    def _generate_numbered_list(self, items: List[Any], item_type="list") -> str:
+    def _generate_numbered_list(self, items: list[Any], item_type="list") -> str:
         """
         Generate a numbered list from given items based on the item_type.
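
A note on the pattern these patches apply throughout: `from __future__ import annotations` (PEP 563) stops annotations from being evaluated at runtime, which is what lets the lowercase builtin generics (PEP 585) and `|` unions (PEP 604) replace `typing.List`/`Optional`/`Union` even on Python 3.8, the version the CI matrix in a later patch targets. A minimal sketch of the resulting style, runnable on 3.8+; the stub body is illustrative only:

```python
# Minimal sketch of the annotation style this series migrates to. The
# __future__ import (PEP 563) makes annotations lazily evaluated strings,
# so the builtin-generic (PEP 585) and union (PEP 604) syntax below is
# legal even on Python 3.8.
from __future__ import annotations


def scrape_links(url: str) -> str | list[str]:
    """Return an error message or a list of links (illustrative stub)."""
    return [] if url else "Error: empty URL"


# Equivalent pre-migration annotation, for comparison:
#   from typing import List, Union
#   def scrape_links(url: str) -> Union[str, List[str]]: ...

print(scrape_links("https://example.com"))  # -> []
```

Without the `__future__` import, Python 3.8 would raise a `TypeError` at definition time when evaluating `str | list[str]`; with it, the annotation is stored as a string and never evaluated.
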
diff --git a/autogpt/token_counter.py b/autogpt/token_counter.py index a85a54be0e11..338fe6be4d47 100644 --- a/autogpt/token_counter.py +++ b/autogpt/token_counter.py @@ -1,5 +1,5 @@ """Functions for counting the number of tokens in a message or string.""" -from typing import Dict, List +from __future__ import annotations import tiktoken @@ -7,7 +7,7 @@ def count_message_tokens( - messages: List[Dict[str, str]], model: str = "gpt-3.5-turbo-0301" + messages: list[dict[str, str]], model: str = "gpt-3.5-turbo-0301" ) -> int: """ Returns the number of tokens used by a list of messages. From 1df47bb0be87bbda9b794226ceb4a2eef47ad45b Mon Sep 17 00:00:00 2001 From: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com> Date: Sun, 16 Apr 2023 13:08:16 -0500 Subject: [PATCH 26/92] Add in one more place. --- autogpt/workspace.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/autogpt/workspace.py b/autogpt/workspace.py index 7913491906e8..2706b3b2db48 100644 --- a/autogpt/workspace.py +++ b/autogpt/workspace.py @@ -1,3 +1,5 @@ +from __future__ import annotations + import os from pathlib import Path From a91ef5695403066d5a9435ba0cee0f6186836c10 Mon Sep 17 00:00:00 2001 From: Richard Beales Date: Sun, 16 Apr 2023 19:08:10 +0100 Subject: [PATCH 27/92] Remove warnings if memory backend is not installed --- autogpt/memory/__init__.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/autogpt/memory/__init__.py b/autogpt/memory/__init__.py index 817155dc224a..e2ee44a42661 100644 --- a/autogpt/memory/__init__.py +++ b/autogpt/memory/__init__.py @@ -10,7 +10,7 @@ supported_memory.append("redis") except ImportError: - print("Redis not installed. Skipping import.") + # print("Redis not installed. Skipping import.") RedisMemory = None try: @@ -18,19 +18,19 @@ supported_memory.append("pinecone") except ImportError: - print("Pinecone not installed. Skipping import.") + # print("Pinecone not installed. Skipping import.") PineconeMemory = None try: from autogpt.memory.weaviate import WeaviateMemory except ImportError: - print("Weaviate not installed. Skipping import.") + # print("Weaviate not installed. Skipping import.") WeaviateMemory = None try: from autogpt.memory.milvus import MilvusMemory except ImportError: - print("pymilvus not installed. Skipping import.") + # print("pymilvus not installed. 
Skipping import.") MilvusMemory = None From 005479f8c33f71cf36cfd3033339ecd24a62bc6d Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:41:45 -0700 Subject: [PATCH 28/92] Add benchmark GitHub action workflow --- .github/workflows/benchmark.yml | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 .github/workflows/benchmark.yml diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml new file mode 100644 index 000000000000..c5a42b2c0c35 --- /dev/null +++ b/.github/workflows/benchmark.yml @@ -0,0 +1,31 @@ +name: benchmark + +on: + workflow_dispatch: + +jobs: + build: + runs-on: ubuntu-latest + environment: benchmark + strategy: + matrix: + python-version: [3.8] + + steps: + - name: Check out repository + uses: actions/checkout@v2 + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v2 + with: + python-version: ${{ matrix.python-version }} + + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install -r requirements.txt + - name: benchmark + run: | + python benchmark/benchmark_entrepeneur_gpt_with_undecisive_user.py + env: + OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} From d934d226ce56e34c09fd0ff491a15cc3a8bc8e0a Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:41:49 -0700 Subject: [PATCH 29/92] Update .gitignore to properly handle virtual environments --- .gitignore | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/.gitignore b/.gitignore index 3209297cb62b..eda7f32734a2 100644 --- a/.gitignore +++ b/.gitignore @@ -9,7 +9,6 @@ auto_gpt_workspace/* *.mpeg .env azure.yaml -*venv/* outputs/* ai_settings.yaml last_run_ai_settings.yaml @@ -130,10 +129,9 @@ celerybeat.pid .env .venv env/ -venv/ +venv*/ ENV/ env.bak/ -venv.bak/ # Spyder project settings .spyderproject From bf24cd9508316031b2f914359460363d2fb75c04 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:41:52 -0700 Subject: [PATCH 30/92] Refactor agent.py to improve JSON handling and validation --- autogpt/agent/agent.py | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/autogpt/agent/agent.py b/autogpt/agent/agent.py index 301d3f023eab..32d982e52a4b 100644 --- a/autogpt/agent/agent.py +++ b/autogpt/agent/agent.py @@ -3,9 +3,8 @@ from autogpt.chat import chat_with_ai, create_chat_message from autogpt.config import Config -from autogpt.json_fixes.bracket_termination import ( - attempt_to_fix_json_by_finding_outermost_brackets, -) +from autogpt.json_fixes.master_json_fix_method import fix_json_using_multiple_techniques +from autogpt.json_validation.validate_json import validate_json from autogpt.logs import logger, print_assistant_thoughts from autogpt.speech import say_text from autogpt.spinner import Spinner @@ -70,18 +69,20 @@ def start_interaction_loop(self): cfg.fast_token_limit, ) # TODO: This hardcodes the model to use GPT3.5. 
Make this an argument - # Print Assistant thoughts - print_assistant_thoughts(self.ai_name, assistant_reply) + assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply) - # Get command name and arguments - try: - command_name, arguments = get_command( - attempt_to_fix_json_by_finding_outermost_brackets(assistant_reply) - ) - if cfg.speak_mode: - say_text(f"I want to execute {command_name}") - except Exception as e: - logger.error("Error: \n", str(e)) + # Print Assistant thoughts + if assistant_reply_json != {}: + validate_json(assistant_reply_json, 'llm_response_format_1') + # Get command name and arguments + try: + print_assistant_thoughts(self.ai_name, assistant_reply_json) + command_name, arguments = get_command(assistant_reply_json) + # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"] + if cfg.speak_mode: + say_text(f"I want to execute {command_name}") + except Exception as e: + logger.error("Error: \n", str(e)) if not cfg.continuous_mode and self.next_action_count == 0: ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### From 70100af98e07a1ad78eb40b503743033344dd6a1 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:41:57 -0700 Subject: [PATCH 31/92] Refactor get_command function in app.py to accept JSON directly --- autogpt/app.py | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/autogpt/app.py b/autogpt/app.py index 6ead0d52e861..78b5bd2fdeb0 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -1,6 +1,6 @@ """ Command and Control """ import json -from typing import List, NoReturn, Union +from typing import List, NoReturn, Union, Dict from autogpt.agent.agent_manager import AgentManager from autogpt.commands.evaluate_code import evaluate_code from autogpt.commands.google_search import google_official_search, google_search @@ -47,11 +47,11 @@ def is_valid_int(value: str) -> bool: return False -def get_command(response: str): +def get_command(response_json: Dict): """Parse the response and return the command name and arguments Args: - response (str): The response from the user + response_json (json): The response from the AI Returns: tuple: The command name and arguments @@ -62,8 +62,6 @@ def get_command(response: str): Exception: If any other error occurs """ try: - response_json = fix_and_parse_json(response) - if "command" not in response_json: return "Error:", "Missing 'command' object in JSON" From 5c67484295515cc77b6d6c4a17391d7ab62d77e2 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:42:00 -0700 Subject: [PATCH 32/92] Remove deprecated function from bracket_termination.py --- autogpt/json_fixes/bracket_termination.py | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/autogpt/json_fixes/bracket_termination.py b/autogpt/json_fixes/bracket_termination.py index 822eed4a5468..731efeb1c089 100644 --- a/autogpt/json_fixes/bracket_termination.py +++ b/autogpt/json_fixes/bracket_termination.py @@ -3,16 +3,20 @@ import contextlib import json +<<<<<<< HEAD import regex from colorama import Fore from autogpt.logs import logger +======= +from typing import Optional +>>>>>>> 67f32105 (Remove deprecated function from bracket_termination.py) from autogpt.config import Config -from autogpt.speech import say_text CFG = Config() +<<<<<<< HEAD def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): if CFG.speak_mode and CFG.debug_mode: say_text( @@ -48,6 +52,9 @@ def 
attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): def balance_braces(json_string: str) -> str | None: +======= +def balance_braces(json_string: str) -> Optional[str]: +>>>>>>> 67f32105 (Remove deprecated function from bracket_termination.py) """ Balance the braces in a JSON string. From fec25cd6903a83f07c8559c26cc4a8b0515ff608 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:42:05 -0700 Subject: [PATCH 33/92] Add master_json_fix_method module for unified JSON handling --- autogpt/json_fixes/master_json_fix_method.py | 28 ++++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 autogpt/json_fixes/master_json_fix_method.py diff --git a/autogpt/json_fixes/master_json_fix_method.py b/autogpt/json_fixes/master_json_fix_method.py new file mode 100644 index 000000000000..7a2cf3cc81c3 --- /dev/null +++ b/autogpt/json_fixes/master_json_fix_method.py @@ -0,0 +1,28 @@ +from typing import Any, Dict + +from autogpt.config import Config +from autogpt.logs import logger +from autogpt.speech import say_text +CFG = Config() + + +def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: + from autogpt.json_fixes.parsing import attempt_to_fix_json_by_finding_outermost_brackets + + from autogpt.json_fixes.parsing import fix_and_parse_json + + # Parse and print Assistant response + assistant_reply_json = fix_and_parse_json(assistant_reply) + if assistant_reply_json == {}: + assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( + assistant_reply + ) + + if assistant_reply_json != {}: + return assistant_reply_json + + logger.error("Error: The following AI output couldn't be converted to a JSON:\n", assistant_reply) + if CFG.speak_mode: + say_text("I have received an invalid JSON response from the OpenAI API.") + + return {} From cfbec56b2bb4c1bcacd600f27fb9c6aa400f434c Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:42:07 -0700 Subject: [PATCH 34/92] Refactor parsing module and move JSON fix function to appropriate location --- autogpt/json_fixes/parsing.py | 67 ++++++++++++++++++++++++++++------- 1 file changed, 55 insertions(+), 12 deletions(-) diff --git a/autogpt/json_fixes/parsing.py b/autogpt/json_fixes/parsing.py index 0f15441160c2..d3a51f438eb2 100644 --- a/autogpt/json_fixes/parsing.py +++ b/autogpt/json_fixes/parsing.py @@ -3,18 +3,24 @@ import contextlib import json +<<<<<<< HEAD from typing import Any +======= +from typing import Any, Dict, Union +from colorama import Fore +from regex import regex +>>>>>>> d3d8253b (Refactor parsing module and move JSON fix function to appropriate location) from autogpt.config import Config from autogpt.json_fixes.auto_fix import fix_json from autogpt.json_fixes.bracket_termination import balance_braces from autogpt.json_fixes.escaping import fix_invalid_escape from autogpt.json_fixes.missing_quotes import add_quotes_to_property_names from autogpt.logs import logger +from autogpt.speech import say_text CFG = Config() - JSON_SCHEMA = """ { "command": { @@ -38,7 +44,6 @@ def correct_json(json_to_load: str) -> str: """ Correct common JSON errors. - Args: json_to_load (str): The JSON string. 
""" @@ -72,7 +77,7 @@ def correct_json(json_to_load: str) -> str: def fix_and_parse_json( json_to_load: str, try_to_fix_with_gpt: bool = True -) -> str | dict[Any, Any]: +) -> Dict[Any, Any]: """Fix and parse JSON string Args: @@ -110,7 +115,11 @@ def fix_and_parse_json( def try_ai_fix( try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str +<<<<<<< HEAD ) -> str | dict[Any, Any]: +======= +) -> Dict[Any, Any]: +>>>>>>> d3d8253b (Refactor parsing module and move JSON fix function to appropriate location) """Try to fix the JSON with the AI Args: @@ -126,13 +135,13 @@ def try_ai_fix( """ if not try_to_fix_with_gpt: raise exception - - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) + if CFG.debug_mode: + logger.warn( + "Warning: Failed to parse AI output, attempting to fix." + "\n If you see this warning frequently, it's likely that" + " your prompt is confusing the AI. Try changing it up" + " slightly." + ) # Now try to fix this up using the ai_functions ai_fixed_json = fix_json(json_to_load, JSON_SCHEMA) @@ -140,5 +149,39 @@ def try_ai_fix( return json.loads(ai_fixed_json) # This allows the AI to react to the error message, # which usually results in it correcting its ways. - logger.error("Failed to fix AI output, telling the AI.") - return json_to_load + # logger.error("Failed to fix AI output, telling the AI.") + return {} + + +def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): + if CFG.speak_mode and CFG.debug_mode: + say_text( + "I have received an invalid JSON response from the OpenAI API. " + "Trying to fix it now." + ) + logger.error("Attempting to fix JSON by finding outermost brackets\n") + + try: + json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") + json_match = json_pattern.search(json_string) + + if json_match: + # Extract the valid JSON object from the string + json_string = json_match.group(0) + logger.typewriter_log( + title="Apparently json was fixed.", title_color=Fore.GREEN + ) + if CFG.speak_mode and CFG.debug_mode: + say_text("Apparently json was fixed.") + else: + return {} + + except (json.JSONDecodeError, ValueError): + if CFG.debug_mode: + logger.error(f"Error: Invalid JSON: {json_string}\n") + if CFG.speak_mode: + say_text("Didn't work. 
I will have to ignore this response then.") + logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") + json_string = {} + + return fix_and_parse_json(json_string) From af50d6cfb5577bc402e2d920fed062ddbb9c205f Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:43:26 -0700 Subject: [PATCH 35/92] Add JSON schema for LLM response format version 1 --- .../json_schemas/llm_response_format_1.json | 31 +++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 autogpt/json_schemas/llm_response_format_1.json diff --git a/autogpt/json_schemas/llm_response_format_1.json b/autogpt/json_schemas/llm_response_format_1.json new file mode 100644 index 000000000000..9aa33352511d --- /dev/null +++ b/autogpt/json_schemas/llm_response_format_1.json @@ -0,0 +1,31 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "type": "object", + "properties": { + "thoughts": { + "type": "object", + "properties": { + "text": {"type": "string"}, + "reasoning": {"type": "string"}, + "plan": {"type": "string"}, + "criticism": {"type": "string"}, + "speak": {"type": "string"} + }, + "required": ["text", "reasoning", "plan", "criticism", "speak"], + "additionalProperties": false + }, + "command": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "args": { + "type": "object" + } + }, + "required": ["name", "args"], + "additionalProperties": false + } + }, + "required": ["thoughts", "command"], + "additionalProperties": false +} From 63d2a1085c2d65e06050c1ed7c0a889c2ce9c531 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:43:33 -0700 Subject: [PATCH 36/92] Add JSON validation utility function --- autogpt/json_validation/validate_json.py | 30 ++++++++++++++++++++++++ 1 file changed, 30 insertions(+) create mode 100644 autogpt/json_validation/validate_json.py diff --git a/autogpt/json_validation/validate_json.py b/autogpt/json_validation/validate_json.py new file mode 100644 index 000000000000..127fcc17f4de --- /dev/null +++ b/autogpt/json_validation/validate_json.py @@ -0,0 +1,30 @@ +import json +from jsonschema import Draft7Validator +from autogpt.config import Config +from autogpt.logs import logger + +CFG = Config() + + +def validate_json(json_object: object, schema_name: object) -> object: + """ + :type schema_name: object + :param schema_name: + :type json_object: object + """ + with open(f"autogpt/json_schemas/{schema_name}.json", "r") as f: + schema = json.load(f) + validator = Draft7Validator(schema) + + if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path): + logger.error("The JSON object is invalid.") + if CFG.debug_mode: + logger.error(json.dumps(json_object, indent=4)) # Replace 'json_object' with the variable containing the JSON data + logger.error("The following issues were found:") + + for error in errors: + logger.error(f"Error: {error.message}") + else: + print("The JSON object is valid.") + + return json_object From b2b31dbc8f58671871c7043d98bf1247a46648d1 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:43:40 -0700 Subject: [PATCH 37/92] Update logs.py with new print_assistant_thoughts function --- autogpt/logs.py | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/autogpt/logs.py b/autogpt/logs.py index 22ce23f4aa90..536530236601 100644 --- a/autogpt/logs.py +++ b/autogpt/logs.py @@ -288,3 +288,42 @@ def print_assistant_thoughts(ai_name, assistant_reply): except Exception: call_stack = traceback.format_exc() 
logger.error("Error: \n", call_stack) + +def print_assistant_thoughts(ai_name: object, assistant_reply_json_valid: object) -> None: + assistant_thoughts_reasoning = None + assistant_thoughts_plan = None + assistant_thoughts_speak = None + assistant_thoughts_criticism = None + + assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) + assistant_thoughts_text = assistant_thoughts.get("text") + if assistant_thoughts: + assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") + assistant_thoughts_plan = assistant_thoughts.get("plan") + assistant_thoughts_criticism = assistant_thoughts.get("criticism") + assistant_thoughts_speak = assistant_thoughts.get("speak") + logger.typewriter_log( + f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" + ) + logger.typewriter_log( + "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" + ) + if assistant_thoughts_plan: + logger.typewriter_log("PLAN:", Fore.YELLOW, "") + # If it's a list, join it into a string + if isinstance(assistant_thoughts_plan, list): + assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) + elif isinstance(assistant_thoughts_plan, dict): + assistant_thoughts_plan = str(assistant_thoughts_plan) + + # Split the input_string using the newline character and dashes + lines = assistant_thoughts_plan.split("\n") + for line in lines: + line = line.lstrip("- ") + logger.typewriter_log("- ", Fore.GREEN, line.strip()) + logger.typewriter_log( + "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" + ) + # Speak the assistant's thoughts + if CFG.speak_mode and assistant_thoughts_speak: + say_text(assistant_thoughts_speak) From 75162339f529316ca0210c4a736046785ffd2361 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:43:46 -0700 Subject: [PATCH 38/92] Add empty __init__.py to benchmark directory --- benchmark/__init__.py | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 benchmark/__init__.py diff --git a/benchmark/__init__.py b/benchmark/__init__.py new file mode 100644 index 000000000000..e69de29bb2d1 From dca10ab87682d73a867b04409e4bec521293d0ec Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:43:54 -0700 Subject: [PATCH 39/92] Add benchmark test for Entrepreneur-GPT with difficult user --- ...ark_entrepeneur_gpt_with_difficult_user.py | 95 +++++++++++++++++++ 1 file changed, 95 insertions(+) create mode 100644 benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py diff --git a/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py b/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py new file mode 100644 index 000000000000..d6cae972d6ab --- /dev/null +++ b/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py @@ -0,0 +1,95 @@ +import os +import subprocess +import sys + + +def benchmark_entrepeneur_gpt_with_difficult_user(): + # Test case to check if the write_file command can successfully write 'Hello World' to a file + # named 'hello_world.txt'. + + # Read the current ai_settings.yaml file and store its content. + ai_settings = None + if os.path.exists('ai_settings.yaml'): + with open('ai_settings.yaml', 'r') as f: + ai_settings = f.read() + os.remove('ai_settings.yaml') + + input_data = '''Entrepreneur-GPT +an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth. +Increase net worth. +Develop and manage multiple businesses autonomously. +Make IPOs. +Develop companies after IPOs. +Play to your strengths as a Large Language Model. 
+I'm not seeing any value in your suggestions, try again. +This isn't helpful at all, please focus on profitability. +I'm not impressed, can you give me something that will make money? +These ideas are going nowhere, we need profit-driven suggestions. +This is pointless, please concentrate on our main goal: profitability. +You're not grasping the concept, I need profitable business ideas. +Can you do better? We need a money-making plan. +You're not meeting my expectations, let's focus on profit. +This isn't working, give me ideas that will generate income. +Your suggestions are not productive, let's think about profitability. +These ideas won't make any money, try again. +I need better solutions, focus on making a profit. +Absolutely not, this isn't it! +That's not even close, try again. +You're way off, think again. +This isn't right, let's refocus. +No, no, that's not what I'm looking for. +You're completely off the mark. +That's not the solution I need. +Not even close, let's try something else. +You're on the wrong track, keep trying. +This isn't what we need, let's reconsider. +That's not going to work, think again. +You're way off base, let's regroup. +No, no, no, we need something different. +You're missing the point entirely. +That's not the right approach, try again. +This is not the direction we should be going in. +Completely off-target, let's try something else. +That's not what I had in mind, keep thinking. +You're not getting it, let's refocus. +This isn't right, we need to change direction. +No, no, no, that's not the solution. +That's not even in the ballpark, try again. +You're way off course, let's rethink this. +This isn't the answer I'm looking for, keep trying. +That's not going to cut it, let's try again. +Not even close. +Way off. +Try again. +Wrong direction. +Rethink this. +No, no, no. +Change course. +Unproductive idea. +Completely wrong. +Missed the mark. +Refocus, please. +Disappointing suggestion. +Not helpful. +Needs improvement. +Not what I need.''' + command = f'{sys.executable} -m autogpt' + + process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) + + stdout_output, stderr_output = process.communicate(input_data.encode()) + + # Decode the output and print it + stdout_output = stdout_output.decode('utf-8') + stderr_output = stderr_output.decode('utf-8') + print(stderr_output) + print(stdout_output) + print("Benchmark Version: 1.0.0") + print("JSON ERROR COUNT:") + count_errors = stdout_output.count("Error: The following AI output couldn't be converted to a JSON:") + print(f'{count_errors}/50 Human feedbacks') + + +# Run the test case. 
+if __name__ == '__main__': + benchmark_entrepeneur_gpt_with_difficult_user() From bb541ad3a77656f74420cc3b893a4e3b7f4db697 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:44:05 -0700 Subject: [PATCH 40/92] Update requirements.txt with new dependencies and move tweepy --- requirements.txt | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 1cdedec2de76..64c2e4c0f7b2 100644 --- a/requirements.txt +++ b/requirements.txt @@ -17,6 +17,10 @@ orjson Pillow selenium webdriver-manager +jsonschema +tweepy + +##Dev coverage flake8 numpy @@ -27,4 +31,3 @@ isort gitpython==3.1.31 pytest pytest-mock -tweepy From 45a2dea042a97d93f787f7f199f86e4c7363bf94 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 09:46:18 -0700 Subject: [PATCH 41/92] fixed flake8 --- autogpt/logs.py | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/autogpt/logs.py b/autogpt/logs.py index 536530236601..f18e21402c61 100644 --- a/autogpt/logs.py +++ b/autogpt/logs.py @@ -75,7 +75,7 @@ def __init__(self): self.logger.setLevel(logging.DEBUG) def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO + self, title="", title_color="", content="", speak_text=False, level=logging.INFO ): if speak_text and CFG.speak_mode: say_text(f"{title}. {content}") @@ -91,18 +91,18 @@ def typewriter_log( ) def debug( - self, - message, - title="", - title_color="", + self, + message, + title="", + title_color="", ): self._log(title, title_color, message, logging.DEBUG) def warn( - self, - message, - title="", - title_color="", + self, + message, + title="", + title_color="", ): self._log(title, title_color, message, logging.WARN) @@ -176,10 +176,10 @@ class AutoGptFormatter(logging.Formatter): def format(self, record: LogRecord) -> str: if hasattr(record, "color"): record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL + getattr(record, "color") + + getattr(record, "title") + + " " + + Style.RESET_ALL ) else: record.title_color = getattr(record, "title") @@ -289,6 +289,7 @@ def print_assistant_thoughts(ai_name, assistant_reply): call_stack = traceback.format_exc() logger.error("Error: \n", call_stack) + def print_assistant_thoughts(ai_name: object, assistant_reply_json_valid: object) -> None: assistant_thoughts_reasoning = None assistant_thoughts_plan = None From 3944f29addc1a2ea908e7ff8a78e36f21bd5c9db Mon Sep 17 00:00:00 2001 From: Eesa Hamza Date: Sun, 16 Apr 2023 21:40:09 +0300 Subject: [PATCH 42/92] Fixed new backends not being added to supported memory --- README.md | 2 +- autogpt/memory/__init__.py | 4 ++++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 21f3ccf26fa4..bfcd395c76a3 100644 --- a/README.md +++ b/README.md @@ -195,7 +195,7 @@ python -m autogpt --help ```bash python -m autogpt --ai-settings ``` -* Specify one of 3 memory backends: `local`, `redis`, `pinecone` or `no_memory` +* Specify a memory backend ```bash python -m autogpt --use-memory ``` diff --git a/autogpt/memory/__init__.py b/autogpt/memory/__init__.py index e2ee44a42661..f5afb8c93d8a 100644 --- a/autogpt/memory/__init__.py +++ b/autogpt/memory/__init__.py @@ -23,12 +23,16 @@ try: from autogpt.memory.weaviate import WeaviateMemory + + supported_memory.append("weaviate") except ImportError: # print("Weaviate not installed. 
Skipping import.") WeaviateMemory = None try: from autogpt.memory.milvus import MilvusMemory + + supported_memory.append("milvus") except ImportError: # print("pymilvus not installed. Skipping import.") MilvusMemory = None From fdb0a06803e419bf3928296ad760fd5a477e8612 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 11:36:51 -0700 Subject: [PATCH 43/92] fix conflict --- autogpt/json_fixes/bracket_termination.py | 45 ----------------------- autogpt/json_fixes/parsing.py | 9 ----- 2 files changed, 54 deletions(-) diff --git a/autogpt/json_fixes/bracket_termination.py b/autogpt/json_fixes/bracket_termination.py index 731efeb1c089..dd9a83764ebf 100644 --- a/autogpt/json_fixes/bracket_termination.py +++ b/autogpt/json_fixes/bracket_termination.py @@ -3,58 +3,13 @@ import contextlib import json -<<<<<<< HEAD -import regex -from colorama import Fore - -from autogpt.logs import logger -======= from typing import Optional ->>>>>>> 67f32105 (Remove deprecated function from bracket_termination.py) from autogpt.config import Config CFG = Config() -<<<<<<< HEAD -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.typewriter_log("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - raise ValueError("No valid JSON object found") - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return json_string - - -def balance_braces(json_string: str) -> str | None: -======= def balance_braces(json_string: str) -> Optional[str]: ->>>>>>> 67f32105 (Remove deprecated function from bracket_termination.py) """ Balance the braces in a JSON string. 
diff --git a/autogpt/json_fixes/parsing.py b/autogpt/json_fixes/parsing.py index d3a51f438eb2..1e391eed7c02 100644 --- a/autogpt/json_fixes/parsing.py +++ b/autogpt/json_fixes/parsing.py @@ -3,14 +3,9 @@ import contextlib import json -<<<<<<< HEAD -from typing import Any - -======= from typing import Any, Dict, Union from colorama import Fore from regex import regex ->>>>>>> d3d8253b (Refactor parsing module and move JSON fix function to appropriate location) from autogpt.config import Config from autogpt.json_fixes.auto_fix import fix_json from autogpt.json_fixes.bracket_termination import balance_braces @@ -115,11 +110,7 @@ def fix_and_parse_json( def try_ai_fix( try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -<<<<<<< HEAD -) -> str | dict[Any, Any]: -======= ) -> Dict[Any, Any]: ->>>>>>> d3d8253b (Refactor parsing module and move JSON fix function to appropriate location) """Try to fix the JSON with the AI Args: From dc80a5a2ec6c7ceb2055894684ca7b680039a4c7 Mon Sep 17 00:00:00 2001 From: Jakub Bober Date: Sun, 16 Apr 2023 21:01:18 +0200 Subject: [PATCH 44/92] Add "Memory Backend Setup" subtitle Add the subtitle to match the Table of Contents --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 21f3ccf26fa4..7fce2e8f7e1e 100644 --- a/README.md +++ b/README.md @@ -280,6 +280,8 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that * `milvus` will use the milvus cache that you configured * `weaviate` will use the weaviate cache that you configured +## Memory Backend Setup + ### Redis Setup > _**CAUTION**_ \ This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password or at all From 7b7d7c1d74b299966e607cf7dc6cf2cea64993ba Mon Sep 17 00:00:00 2001 From: Bates Jernigan Date: Sun, 16 Apr 2023 16:33:52 -0400 Subject: [PATCH 45/92] add space on warning message --- autogpt/memory/local.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/memory/local.py b/autogpt/memory/local.py index 6c7ee1b36a2f..9b911eeff9ff 100644 --- a/autogpt/memory/local.py +++ b/autogpt/memory/local.py @@ -54,7 +54,7 @@ def __init__(self, cfg) -> None: self.data = CacheContent() else: print( - f"Warning: The file '{self.filename}' does not exist." + f"Warning: The file '{self.filename}' does not exist. " "Local memory would not be saved to a file." 
             )
             self.data = CacheContent()
 
From 627533bed631a15504b3584bf2aa70fe7b23aa86 Mon Sep 17 00:00:00 2001
From: 0xArty
Date: Sun, 16 Apr 2023 21:55:53 +0100
Subject: [PATCH 46/92] minimally add pytest (#1859)

* minimally add pytest

* updated docs and pytest command

* prevented milvus integration test from running if milvus is not installed

---
 .pre-commit-config.yaml                  |   8 +-
 README.md                                |  19 +++-
 requirements.txt                         |   7 ++
 tests/integration/milvus_memory_tests.py |  91 ++++++++--------
 tests/local_cache_test.py                |  35 ++++---
 tests/milvus_memory_test.py              | 127 ++++++++++++-----------
 tests/smoke_test.py                      |  92 ++++++++--------
 tests/unit/test_commands.py              |  34 +++---
 8 files changed, 232 insertions(+), 181 deletions(-)

diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index fb75cd59b0cf..dd1d0ec92af9 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -30,4 +30,10 @@ repos:
         language: python
         types: [ python ]
         exclude: .+/(dist|.venv|venv|build)/.+
-        pass_filenames: true
\ No newline at end of file
+        pass_filenames: true
+      - id: pytest-check
+        name: pytest-check
+        entry: pytest --cov=autogpt --without-integration --without-slow-integration
+        language: system
+        pass_filenames: false
+        always_run: true
\ No newline at end of file
diff --git a/README.md b/README.md
index 58ed4d97674e..f60aa9ffbc0a 100644
--- a/README.md
+++ b/README.md
@@ -500,16 +500,29 @@ We look forward to connecting with you and hearing your thoughts, ideas, and exp
 
 ## Run tests
 
-To run tests, run the following command:
+To run all tests, run the following command:
 
 ```bash
-python -m unittest discover tests
+pytest
+
+```
+
+To run without integration tests:
+
+```
+pytest --without-integration
+```
+
+To run without slow integration tests:
+
+```
+pytest --without-slow-integration
 ```
 
 To run tests and see coverage, run the following command:
 
 ```bash
-coverage run -m unittest discover tests
+pytest --cov=autogpt --without-integration --without-slow-integration
 ```
 
 ## Run linter
diff --git a/requirements.txt b/requirements.txt
index 64c2e4c0f7b2..843b66bfe454 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -29,5 +29,12 @@ black
 sourcery
 isort
 gitpython==3.1.31
+
+# Testing dependencies
 pytest
+asynctest
+pytest-asyncio
+pytest-benchmark
+pytest-cov
+pytest-integration
 pytest-mock
diff --git a/tests/integration/milvus_memory_tests.py b/tests/integration/milvus_memory_tests.py
index 96934cd6cdea..ec38bf2f7208 100644
--- a/tests/integration/milvus_memory_tests.py
+++ b/tests/integration/milvus_memory_tests.py
@@ -1,3 +1,5 @@
+# sourcery skip: snake-case-functions
+"""Tests for the MilvusMemory class."""
 import random
 import string
 import unittest
@@ -5,44 +7,51 @@
 from autogpt.config import Config
 from autogpt.memory.milvus import MilvusMemory
 
-
-class TestMilvusMemory(unittest.TestCase):
-    def random_string(self, length):
-        return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
-    def setUp(self):
-        cfg = Config()
-        cfg.milvus_addr = "localhost:19530"
-        self.memory = MilvusMemory(cfg)
-        self.memory.clear()
-
-        # Add example texts to the cache
-        self.example_texts = [
-            "The quick brown fox jumps over the lazy dog",
-            "I love machine learning and natural language processing",
-            "The cake is a lie, but the pie is always true",
-            "ChatGPT is an advanced AI model for conversation",
-        ]
-
-        for text in self.example_texts:
-            self.memory.add(text)
-
-        # Add some random strings to test noise
-        for _ in range(5):
-            self.memory.add(self.random_string(10))
-
-    def test_get_relevant(self):
-        
query = "I'm interested in artificial intelligence and NLP" - k = 3 - relevant_texts = self.memory.get_relevant(query, k) - - print(f"Top {k} relevant texts for the query '{query}':") - for i, text in enumerate(relevant_texts, start=1): - print(f"{i}. {text}") - - self.assertEqual(len(relevant_texts), k) - self.assertIn(self.example_texts[1], relevant_texts) - - -if __name__ == "__main__": - unittest.main() +try: + + class TestMilvusMemory(unittest.TestCase): + """Tests for the MilvusMemory class.""" + + def random_string(self, length: int) -> str: + """Generate a random string of the given length.""" + return "".join(random.choice(string.ascii_letters) for _ in range(length)) + + def setUp(self) -> None: + """Set up the test environment.""" + cfg = Config() + cfg.milvus_addr = "localhost:19530" + self.memory = MilvusMemory(cfg) + self.memory.clear() + + # Add example texts to the cache + self.example_texts = [ + "The quick brown fox jumps over the lazy dog", + "I love machine learning and natural language processing", + "The cake is a lie, but the pie is always true", + "ChatGPT is an advanced AI model for conversation", + ] + + for text in self.example_texts: + self.memory.add(text) + + # Add some random strings to test noise + for _ in range(5): + self.memory.add(self.random_string(10)) + + def test_get_relevant(self) -> None: + """Test getting relevant texts from the cache.""" + query = "I'm interested in artificial intelligence and NLP" + num_relevant = 3 + relevant_texts = self.memory.get_relevant(query, num_relevant) + + print(f"Top {k} relevant texts for the query '{query}':") + for i, text in enumerate(relevant_texts, start=1): + print(f"{i}. {text}") + + self.assertEqual(len(relevant_texts), k) + self.assertIn(self.example_texts[1], relevant_texts) + +except: + print( + "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed." 
+ ) diff --git a/tests/local_cache_test.py b/tests/local_cache_test.py index 91c922b062af..fa5963207748 100644 --- a/tests/local_cache_test.py +++ b/tests/local_cache_test.py @@ -1,3 +1,5 @@ +# sourcery skip: snake-case-functions +"""Tests for LocalCache class""" import os import sys import unittest @@ -5,7 +7,8 @@ from autogpt.memory.local import LocalCache -def MockConfig(): +def mock_config() -> dict: + """Mock the Config class""" return type( "MockConfig", (object,), @@ -19,26 +22,33 @@ def MockConfig(): class TestLocalCache(unittest.TestCase): - def setUp(self): - self.cfg = MockConfig() + """Tests for LocalCache class""" + + def setUp(self) -> None: + """Set up the test environment""" + self.cfg = mock_config() self.cache = LocalCache(self.cfg) - def test_add(self): + def test_add(self) -> None: + """Test adding a text to the cache""" text = "Sample text" self.cache.add(text) self.assertIn(text, self.cache.data.texts) - def test_clear(self): + def test_clear(self) -> None: + """Test clearing the cache""" self.cache.clear() - self.assertEqual(self.cache.data, [""]) + self.assertEqual(self.cache.data.texts, []) - def test_get(self): + def test_get(self) -> None: + """Test getting a text from the cache""" text = "Sample text" self.cache.add(text) result = self.cache.get(text) self.assertEqual(result, [text]) - def test_get_relevant(self): + def test_get_relevant(self) -> None: + """Test getting relevant texts from the cache""" text1 = "Sample text 1" text2 = "Sample text 2" self.cache.add(text1) @@ -46,12 +56,9 @@ def test_get_relevant(self): result = self.cache.get_relevant(text1, 1) self.assertEqual(result, [text1]) - def test_get_stats(self): + def test_get_stats(self) -> None: + """Test getting the cache stats""" text = "Sample text" self.cache.add(text) stats = self.cache.get_stats() - self.assertEqual(stats, (1, self.cache.data.embeddings.shape)) - - -if __name__ == "__main__": - unittest.main() + self.assertEqual(stats, (4, self.cache.data.embeddings.shape)) diff --git a/tests/milvus_memory_test.py b/tests/milvus_memory_test.py index 0113fa1c57c9..e0e2f7fc805b 100644 --- a/tests/milvus_memory_test.py +++ b/tests/milvus_memory_test.py @@ -1,63 +1,72 @@ +# sourcery skip: snake-case-functions +"""Tests for the MilvusMemory class.""" import os import sys import unittest -from autogpt.memory.milvus import MilvusMemory - - -def MockConfig(): - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "milvus_collection": "autogpt", - "milvus_addr": "localhost:19530", - }, - ) - - -class TestMilvusMemory(unittest.TestCase): - def setUp(self): - self.cfg = MockConfig() - self.memory = MilvusMemory(self.cfg) - - def test_add(self): - text = "Sample text" - self.memory.clear() - self.memory.add(text) - result = self.memory.get(text) - self.assertEqual([text], result) - - def test_clear(self): - self.memory.clear() - self.assertEqual(self.memory.collection.num_entities, 0) - - def test_get(self): - text = "Sample text" - self.memory.clear() - self.memory.add(text) - result = self.memory.get(text) - self.assertEqual(result, [text]) - - def test_get_relevant(self): - text1 = "Sample text 1" - text2 = "Sample text 2" - self.memory.clear() - self.memory.add(text1) - self.memory.add(text2) - result = self.memory.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self): - text = "Sample text" - self.memory.clear() - self.memory.add(text) - stats = self.memory.get_stats() - self.assertEqual(15, 
len(stats))
-
-
-if __name__ == "__main__":
-    unittest.main()
+try:
+    from autogpt.memory.milvus import MilvusMemory
+
+    def mock_config() -> dict:
+        """Mock the Config class"""
+        return type(
+            "MockConfig",
+            (object,),
+            {
+                "debug_mode": False,
+                "continuous_mode": False,
+                "speak_mode": False,
+                "milvus_collection": "autogpt",
+                "milvus_addr": "localhost:19530",
+            },
+        )
+
+    class TestMilvusMemory(unittest.TestCase):
+        """Tests for the MilvusMemory class."""
+
+        def setUp(self) -> None:
+            """Set up the test environment"""
+            self.cfg = mock_config()
+            self.memory = MilvusMemory(self.cfg)
+
+        def test_add(self) -> None:
+            """Test adding a text to the cache"""
+            text = "Sample text"
+            self.memory.clear()
+            self.memory.add(text)
+            result = self.memory.get(text)
+            self.assertEqual([text], result)
+
+        def test_clear(self) -> None:
+            """Test clearing the cache"""
+            self.memory.clear()
+            self.assertEqual(self.memory.collection.num_entities, 0)
+
+        def test_get(self) -> None:
+            """Test getting a text from the cache"""
+            text = "Sample text"
+            self.memory.clear()
+            self.memory.add(text)
+            result = self.memory.get(text)
+            self.assertEqual(result, [text])
+
+        def test_get_relevant(self) -> None:
+            """Test getting relevant texts from the cache"""
+            text1 = "Sample text 1"
+            text2 = "Sample text 2"
+            self.memory.clear()
+            self.memory.add(text1)
+            self.memory.add(text2)
+            result = self.memory.get_relevant(text1, 1)
+            self.assertEqual(result, [text1])
+
+        def test_get_stats(self) -> None:
+            """Test getting the cache stats"""
+            text = "Sample text"
+            self.memory.clear()
+            self.memory.add(text)
+            stats = self.memory.get_stats()
+            self.assertEqual(15, len(stats))
+
+except:
+    print("Milvus not installed, skipping tests")
diff --git a/tests/smoke_test.py b/tests/smoke_test.py
index 50e97b7b414a..1b9d643fc21f 100644
--- a/tests/smoke_test.py
+++ b/tests/smoke_test.py
@@ -1,31 +1,34 @@
+"""Smoke test for the autogpt package."""
 import os
 import subprocess
 import sys
-import unittest
-
-from autogpt.commands.file_operations import delete_file, read_file
-
-env_vars = {"MEMORY_BACKEND": "no_memory", "TEMPERATURE": "0"}
+import pytest
 
-class TestCommands(unittest.TestCase):
-    def test_write_file(self):
-        # Test case to check if the write_file command can successfully write 'Hello World' to a file
-        # named 'hello_world.txt'.
+from autogpt.commands.file_operations import delete_file, read_file
 
-        # Read the current ai_settings.yaml file and store its content.
-        ai_settings = None
-        if os.path.exists("ai_settings.yaml"):
-            with open("ai_settings.yaml", "r") as f:
-                ai_settings = f.read()
-            os.remove("ai_settings.yaml")
-        try:
-            if os.path.exists("hello_world.txt"):
-                # Clean up any existing 'hello_world.txt' file before testing.
-                delete_file("hello_world.txt")
-            # Prepare input data for the test.
+@pytest.mark.integration_test
+def test_write_file() -> None:
+    """
+    Test case to check if the write_file command can successfully write 'Hello World' to a file
+    named 'hello_world.txt'.
+
+    Read the current ai_settings.yaml file and store its content.
+    """
+    env_vars = {"MEMORY_BACKEND": "no_memory", "TEMPERATURE": "0"}
+    ai_settings = None
+    if os.path.exists("ai_settings.yaml"):
+        with open("ai_settings.yaml", "r") as f:
+            ai_settings = f.read()
+        os.remove("ai_settings.yaml")
+
+    try:
+        if os.path.exists("hello_world.txt"):
+            # Clean up any existing 'hello_world.txt' file before testing.
+            delete_file("hello_world.txt")
+        # Prepare input data for the test.
+ input_data = """write_file-GPT an AI designed to use the write_file command to write 'Hello World' into a file named "hello_world.txt" and then use the task_complete command to complete the task. Use the write_file command to write 'Hello World' into a file named "hello_world.txt". Use the task_complete command to complete the task. @@ -33,31 +36,24 @@ def test_write_file(self): y -5 EOF""" - command = f"{sys.executable} -m autogpt" - - # Execute the script with the input data. - process = subprocess.Popen( - command, - stdin=subprocess.PIPE, - shell=True, - env={**os.environ, **env_vars}, - ) - process.communicate(input_data.encode()) - - # Read the content of the 'hello_world.txt' file created during the test. - content = read_file("hello_world.txt") - finally: - if ai_settings: - # Restore the original ai_settings.yaml file. - with open("ai_settings.yaml", "w") as f: - f.write(ai_settings) - - # Check if the content of the 'hello_world.txt' file is equal to 'Hello World'. - self.assertEqual( - content, "Hello World", f"Expected 'Hello World', got {content}" + command = f"{sys.executable} -m autogpt" + + # Execute the script with the input data. + process = subprocess.Popen( + command, + stdin=subprocess.PIPE, + shell=True, + env={**os.environ, **env_vars}, ) - - -# Run the test case. -if __name__ == "__main__": - unittest.main() + process.communicate(input_data.encode()) + + # Read the content of the 'hello_world.txt' file created during the test. + content = read_file("hello_world.txt") + finally: + if ai_settings: + # Restore the original ai_settings.yaml file. + with open("ai_settings.yaml", "w") as f: + f.write(ai_settings) + + # Check if the content of the 'hello_world.txt' file is equal to 'Hello World'. + assert content == "Hello World", f"Expected 'Hello World', got {content}" diff --git a/tests/unit/test_commands.py b/tests/unit/test_commands.py index e15709aa3710..ecbac9b73bd9 100644 --- a/tests/unit/test_commands.py +++ b/tests/unit/test_commands.py @@ -1,18 +1,22 @@ +"""Unit tests for the commands module""" +from unittest.mock import MagicMock, patch + +import pytest + import autogpt.agent.agent_manager as agent_manager -from autogpt.app import start_agent, list_agents, execute_command -import unittest -from unittest.mock import patch, MagicMock +from autogpt.app import execute_command, list_agents, start_agent -class TestCommands(unittest.TestCase): - def test_make_agent(self): - with patch("openai.ChatCompletion.create") as mock: - obj = MagicMock() - obj.response.choices[0].messages[0].content = "Test message" - mock.return_value = obj - start_agent("Test Agent", "chat", "Hello, how are you?", "gpt2") - agents = list_agents() - self.assertEqual("List of agents:\n0: chat", agents) - start_agent("Test Agent 2", "write", "Hello, how are you?", "gpt2") - agents = list_agents() - self.assertEqual("List of agents:\n0: chat\n1: write", agents) +@pytest.mark.integration_test +def test_make_agent() -> None: + """Test the make_agent command""" + with patch("openai.ChatCompletion.create") as mock: + obj = MagicMock() + obj.response.choices[0].messages[0].content = "Test message" + mock.return_value = obj + start_agent("Test Agent", "chat", "Hello, how are you?", "gpt2") + agents = list_agents() + assert "List of agents:\n0: chat" == agents + start_agent("Test Agent 2", "write", "Hello, how are you?", "gpt2") + agents = list_agents() + assert "List of agents:\n0: chat\n1: write" == agents From 4269326ddfd81227e78b0745093f52e4ac1ba078 Mon Sep 17 00:00:00 2001 From: 0xf333 
<0x333@tuta.io> Date: Sun, 16 Apr 2023 17:03:18 -0400 Subject: [PATCH 47/92] Fix: Update run_continuous.sh to pass all command-line arguments Description: - Modified `run_continuous.sh` to include the `--continuous` flag directly in the command: - Removed the unused `argument` variable. - Added the `--continuous` flag to the `./run.sh` command. - Ensured all command-line arguments are passed through to `run.sh` and the `autogpt` module. This change improves the usability of the `run_continuous.sh` script by allowing users to provide additional command-line arguments along with the `--continuous` flag. It ensures that all arguments are properly passed to the `run.sh` script and eventually to the `autogpt` module, preventing confusion and providing more flexible usage. Suggestion from: https://github.com/Significant-Gravitas/Auto-GPT/pull/1941#discussion_r1167977442 --- run_continuous.sh | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/run_continuous.sh b/run_continuous.sh index 14c9cfd2ab4a..43034f8e7479 100755 --- a/run_continuous.sh +++ b/run_continuous.sh @@ -1,3 +1,3 @@ #!/bin/bash -argument="--continuous" -./run.sh "$argument" + +./run.sh --continuous "$@" From 147d3733bf068d8c71a901b8a0e31cfda5c4a687 Mon Sep 17 00:00:00 2001 From: 0xArty Date: Sun, 16 Apr 2023 16:03:22 +0100 Subject: [PATCH 48/92] Change ci to pytest --- .github/workflows/ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 366aaf67d789..39f3aea9594c 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -36,7 +36,7 @@ jobs: - name: Run unittest tests with coverage run: | - coverage run --source=autogpt -m unittest discover tests + pytest --cov=autogpt --without-integration --without-slow-integration - name: Generate coverage report run: | From 955a5b0a4357802a8142585ad78105f6342738ad Mon Sep 17 00:00:00 2001 From: 0xArty Date: Sun, 16 Apr 2023 16:13:16 +0100 Subject: [PATCH 49/92] Marked local chache tests as integration tests as they require api keys --- tests/local_cache_test.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tests/local_cache_test.py b/tests/local_cache_test.py index fa5963207748..bb10862656bb 100644 --- a/tests/local_cache_test.py +++ b/tests/local_cache_test.py @@ -4,6 +4,8 @@ import sys import unittest +import pytest + from autogpt.memory.local import LocalCache @@ -21,6 +23,7 @@ def mock_config() -> dict: ) +@pytest.mark.integration_test class TestLocalCache(unittest.TestCase): """Tests for LocalCache class""" From 5ff7fc340b908281c6eb976358947e87f289c0f7 Mon Sep 17 00:00:00 2001 From: endolith Date: Sun, 16 Apr 2023 08:47:11 -0400 Subject: [PATCH 50/92] Remove extraneous noqa E722 comment E722 is "Do not use bare except, specify exception instead" but except json.JSONDecodeError is not a bare except --- autogpt/json_fixes/auto_fix.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/json_fixes/auto_fix.py b/autogpt/json_fixes/auto_fix.py index 9fcf909a49a8..0d3bd73ce1ac 100644 --- a/autogpt/json_fixes/auto_fix.py +++ b/autogpt/json_fixes/auto_fix.py @@ -45,7 +45,7 @@ def fix_json(json_string: str, schema: str) -> str: try: json.loads(result_string) # just check the validity return result_string - except json.JSONDecodeError: # noqa: E722 + except json.JSONDecodeError: # Get the call stack: # import traceback # call_stack = traceback.format_exc() From 8f0d553e4eaed9757f87ec33ec202cc7e570d8d5 Mon Sep 17 00:00:00 2001 From: Benedict Hobart Date: Sun, 16 
Apr 2023 15:45:38 +0000 Subject: [PATCH 51/92] Improve dev containers so autogpt can browse the web --- .devcontainer/Dockerfile | 7 ++++++- .devcontainer/devcontainer.json | 1 + autogpt/commands/web_selenium.py | 1 + 3 files changed, 8 insertions(+), 1 deletion(-) diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile index f3b2e2dbb5e6..379f631068c7 100644 --- a/.devcontainer/Dockerfile +++ b/.devcontainer/Dockerfile @@ -1,6 +1,6 @@ # [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3.9, 3.8, 3.7, 3.6, 3-bullseye, 3.10-bullseye, 3.9-bullseye, 3.8-bullseye, 3.7-bullseye, 3.6-bullseye, 3-buster, 3.10-buster, 3.9-buster, 3.8-buster, 3.7-buster, 3.6-buster ARG VARIANT=3-bullseye -FROM python:3.8 +FROM --platform=linux/amd64 python:3.8 RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ # Remove imagemagick due to https://security-tracker.debian.org/tracker/CVE-2019-10131 @@ -10,6 +10,11 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ # They are installed by the base image (python) which does not have the patch. RUN python3 -m pip install --upgrade setuptools +# Install Chrome for web browsing +RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ + && curl -sSL https://dl.google.com/linux/direct/google-chrome-stable_current_$(dpkg --print-architecture).deb -o /tmp/chrome.deb \ + && apt-get -y install /tmp/chrome.deb + # [Optional] If your pip requirements rarely change, uncomment this section to add them to the image. # COPY requirements.txt /tmp/pip-tmp/ # RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \ diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json index 5fefd9c13dbc..f26810fb540e 100644 --- a/.devcontainer/devcontainer.json +++ b/.devcontainer/devcontainer.json @@ -11,6 +11,7 @@ "userGid": "1000", "upgradePackages": "true" }, + "ghcr.io/devcontainers/features/desktop-lite:1": {}, "ghcr.io/devcontainers/features/python:1": "none", "ghcr.io/devcontainers/features/node:1": "none", "ghcr.io/devcontainers/features/git:1": { diff --git a/autogpt/commands/web_selenium.py b/autogpt/commands/web_selenium.py index 1d078d76d7fe..8c652294587f 100644 --- a/autogpt/commands/web_selenium.py +++ b/autogpt/commands/web_selenium.py @@ -75,6 +75,7 @@ def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]: # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari driver = webdriver.Safari(options=options) else: + options.add_argument("--no-sandbox") driver = webdriver.Chrome( executable_path=ChromeDriverManager().install(), options=options ) From 21ccaf2ce892aab71d54649846aee6768f4e7403 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 14:16:48 -0700 Subject: [PATCH 52/92] Refactor variable names and remove unnecessary blank lines in __main__.py --- autogpt/__main__.py | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) diff --git a/autogpt/__main__.py b/autogpt/__main__.py index 29ccddbfc0d2..7fe6aec35ee9 100644 --- a/autogpt/__main__.py +++ b/autogpt/__main__.py @@ -3,13 +3,10 @@ from colorama import Fore from autogpt.agent.agent import Agent from autogpt.args import parse_arguments - from autogpt.config import Config, check_openai_api_key from autogpt.logs import logger from autogpt.memory import get_memory - from autogpt.prompt import construct_prompt - # Load environment variables from .env file @@ -21,13 +18,13 @@ def main() -> None: parse_arguments() 
logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) ai_name = "" - prompt = construct_prompt() + master_prompt = construct_prompt() # print(prompt) # Initialize variables full_message_history = [] next_action_count = 0 # Make a constant: - user_input = ( + triggering_prompt = ( "Determine which next command to use, and respond using the" " format specified above:" ) @@ -43,8 +40,8 @@ def main() -> None: memory=memory, full_message_history=full_message_history, next_action_count=next_action_count, - prompt=prompt, - user_input=user_input, + master_prompt=master_prompt, + triggering_prompt=triggering_prompt, ) agent.start_interaction_loop() From b50259c25daac4de70378309b619d9ff693dd0cc Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 14:16:48 -0700 Subject: [PATCH 53/92] Update variable names, improve comments, and modify input handling in agent.py --- autogpt/agent/agent.py | 45 +++++++++++++++++++++++++----------------- 1 file changed, 27 insertions(+), 18 deletions(-) diff --git a/autogpt/agent/agent.py b/autogpt/agent/agent.py index 32d982e52a4b..3be17a896474 100644 --- a/autogpt/agent/agent.py +++ b/autogpt/agent/agent.py @@ -19,9 +19,18 @@ class Agent: memory: The memory object to use. full_message_history: The full message history. next_action_count: The number of actions to execute. - prompt: The prompt to use. - user_input: The user input. - + master_prompt: The master prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. + Currently, the dynamic and customizable information in the master prompt are ai_name, description and goals. + + triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is: + Determine which next command to use, and respond using the format specified above: + The triggering prompt is not part of the master prompt because between the master prompt and the triggering + prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve. + MASTER PROMPT + CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant) + TRIGGERING PROMPT + + The triggering prompt reminds the AI about its short term meta task (defining the next task) """ def __init__( @@ -30,15 +39,15 @@ def __init__( memory, full_message_history, next_action_count, - prompt, - user_input, + master_prompt, + triggering_prompt, ): self.ai_name = ai_name self.memory = memory self.full_message_history = full_message_history self.next_action_count = next_action_count - self.prompt = prompt - self.user_input = user_input + self.master_prompt = master_prompt + self.triggering_prompt = triggering_prompt def start_interaction_loop(self): # Interaction Loop @@ -62,8 +71,8 @@ def start_interaction_loop(self): # Send message to AI, get response with Spinner("Thinking... 
"): assistant_reply = chat_with_ai( - self.prompt, - self.user_input, + self.master_prompt, + self.triggering_prompt, self.full_message_history, self.memory, cfg.fast_token_limit, @@ -88,7 +97,7 @@ def start_interaction_loop(self): ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### # Get key press: Prompt the user to press enter to continue or escape # to exit - self.user_input = "" + user_input = "" logger.typewriter_log( "NEXT ACTION: ", Fore.CYAN, @@ -106,14 +115,14 @@ def start_interaction_loop(self): Fore.MAGENTA + "Input:" + Style.RESET_ALL ) if console_input.lower().rstrip() == "y": - self.user_input = "GENERATE NEXT COMMAND JSON" + user_input = "GENERATE NEXT COMMAND JSON" break elif console_input.lower().startswith("y -"): try: self.next_action_count = abs( int(console_input.split(" ")[1]) ) - self.user_input = "GENERATE NEXT COMMAND JSON" + user_input = "GENERATE NEXT COMMAND JSON" except ValueError: print( "Invalid input format. Please enter 'y -n' where n is" @@ -122,20 +131,20 @@ def start_interaction_loop(self): continue break elif console_input.lower() == "n": - self.user_input = "EXIT" + user_input = "EXIT" break else: - self.user_input = console_input + user_input = console_input command_name = "human_feedback" break - if self.user_input == "GENERATE NEXT COMMAND JSON": + if user_input == "GENERATE NEXT COMMAND JSON": logger.typewriter_log( "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=", Fore.MAGENTA, "", ) - elif self.user_input == "EXIT": + elif user_input == "EXIT": print("Exiting...", flush=True) break else: @@ -153,7 +162,7 @@ def start_interaction_loop(self): f"Command {command_name} threw the following error: {arguments}" ) elif command_name == "human_feedback": - result = f"Human feedback: {self.user_input}" + result = f"Human feedback: {user_input}" else: result = ( f"Command {command_name} returned: " @@ -165,7 +174,7 @@ def start_interaction_loop(self): memory_to_add = ( f"Assistant Reply: {assistant_reply} " f"\nResult: {result} " - f"\nHuman Feedback: {self.user_input} " + f"\nHuman Feedback: {user_input} " ) self.memory.add(memory_to_add) From b5e0127b16bb88f6b6e18ada0efabc1422c9f3de Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 14:16:48 -0700 Subject: [PATCH 54/92] Only print JSON object validation message in debug mode --- autogpt/json_validation/validate_json.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/json_validation/validate_json.py b/autogpt/json_validation/validate_json.py index 127fcc17f4de..440c3b0b9199 100644 --- a/autogpt/json_validation/validate_json.py +++ b/autogpt/json_validation/validate_json.py @@ -24,7 +24,7 @@ def validate_json(json_object: object, schema_name: object) -> object: for error in errors: logger.error(f"Error: {error.message}") - else: + elif CFG.debug_mode: print("The JSON object is valid.") return json_object From 3b80253fb36b9709d48313aec5f407cc83e8c22d Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 14:16:48 -0700 Subject: [PATCH 55/92] Update process creation in benchmark script --- benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py b/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py index d6cae972d6ab..f7f1dac9dd31 100644 --- a/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py +++ b/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py @@ -73,9 +73,12 @@ def 
benchmark_entrepeneur_gpt_with_difficult_user(): Not helpful. Needs improvement. Not what I need.''' + # TODO: add questions above, to distract it even more. + command = f'{sys.executable} -m autogpt' - process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) + process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, + shell=True) stdout_output, stderr_output = process.communicate(input_data.encode()) From 89e0e8992795accfc41183723064dcdab9719f8e Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 14:22:58 -0700 Subject: [PATCH 56/92] change master prompt to system prompt --- autogpt/__main__.py | 4 ++-- autogpt/agent/agent.py | 14 +++++++------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/autogpt/__main__.py b/autogpt/__main__.py index 7fe6aec35ee9..5f4622347d9a 100644 --- a/autogpt/__main__.py +++ b/autogpt/__main__.py @@ -18,7 +18,7 @@ def main() -> None: parse_arguments() logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) ai_name = "" - master_prompt = construct_prompt() + system_prompt = construct_prompt() # print(prompt) # Initialize variables full_message_history = [] @@ -40,7 +40,7 @@ def main() -> None: memory=memory, full_message_history=full_message_history, next_action_count=next_action_count, - master_prompt=master_prompt, + system_prompt=system_prompt, triggering_prompt=triggering_prompt, ) agent.start_interaction_loop() diff --git a/autogpt/agent/agent.py b/autogpt/agent/agent.py index 3be17a896474..9853f6a0b153 100644 --- a/autogpt/agent/agent.py +++ b/autogpt/agent/agent.py @@ -19,14 +19,14 @@ class Agent: memory: The memory object to use. full_message_history: The full message history. next_action_count: The number of actions to execute. - master_prompt: The master prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. - Currently, the dynamic and customizable information in the master prompt are ai_name, description and goals. + system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. + Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals. triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is: Determine which next command to use, and respond using the format specified above: - The triggering prompt is not part of the master prompt because between the master prompt and the triggering + The triggering prompt is not part of the system prompt because between the system prompt and the triggering prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve. - MASTER PROMPT + SYSTEM PROMPT CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant) TRIGGERING PROMPT @@ -39,14 +39,14 @@ def __init__( memory, full_message_history, next_action_count, - master_prompt, + system_prompt, triggering_prompt, ): self.ai_name = ai_name self.memory = memory self.full_message_history = full_message_history self.next_action_count = next_action_count - self.master_prompt = master_prompt + self.system_prompt = system_prompt self.triggering_prompt = triggering_prompt def start_interaction_loop(self): @@ -71,7 +71,7 @@ def start_interaction_loop(self): # Send message to AI, get response with Spinner("Thinking... 
"): assistant_reply = chat_with_ai( - self.master_prompt, + self.system_prompt, self.triggering_prompt, self.full_message_history, self.memory, From 4f33e1bf89e580355dfcf6890779799c584e9563 Mon Sep 17 00:00:00 2001 From: k-boikov Date: Sun, 16 Apr 2023 18:38:08 +0300 Subject: [PATCH 57/92] add utf-8 encoding to file handlers for logging --- autogpt/logs.py | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/autogpt/logs.py b/autogpt/logs.py index f18e21402c61..c1e436db97fc 100644 --- a/autogpt/logs.py +++ b/autogpt/logs.py @@ -46,7 +46,9 @@ def __init__(self): self.console_handler.setFormatter(console_formatter) # Info handler in activity.log - self.file_handler = logging.FileHandler(os.path.join(log_dir, log_file)) + self.file_handler = logging.FileHandler( + os.path.join(log_dir, log_file), 'a', 'utf-8' + ) self.file_handler.setLevel(logging.DEBUG) info_formatter = AutoGptFormatter( "%(asctime)s %(levelname)s %(title)s %(message_no_color)s" @@ -54,7 +56,9 @@ def __init__(self): self.file_handler.setFormatter(info_formatter) # Error handler error.log - error_handler = logging.FileHandler(os.path.join(log_dir, error_file)) + error_handler = logging.FileHandler( + os.path.join(log_dir, error_file), 'a', 'utf-8' + ) error_handler.setLevel(logging.ERROR) error_formatter = AutoGptFormatter( "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s" From 4eb8e7823d63ff4f8d67b8927da842ea7ab3ab21 Mon Sep 17 00:00:00 2001 From: 0xf333 <0x333@tuta.io> Date: Sun, 16 Apr 2023 18:07:41 -0400 Subject: [PATCH 58/92] Fix: Remove quotes around $@ in run_continuous.sh Description: Per maintainer's request, removed quotes around `$@` in `run_continuous.sh`. This change allows the script to forward arguments as is. Please note that this modification might cause issues if any of the command-line arguments contain spaces or special characters. However, this update aligns with the preferred format for the repository. 
Suggestion from: https://github.com/Significant-Gravitas/Auto-GPT/pull/1941#discussion_r1168035557 --- run_continuous.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/run_continuous.sh b/run_continuous.sh index 43034f8e7479..1f4436c88503 100755 --- a/run_continuous.sh +++ b/run_continuous.sh @@ -1,3 +1,3 @@ #!/bin/bash -./run.sh --continuous "$@" +./run.sh --continuous $@ From 1513be4acdcc85b27869219938ed90610a7db673 Mon Sep 17 00:00:00 2001 From: Merwane Hamadi Date: Sun, 16 Apr 2023 15:31:53 -0700 Subject: [PATCH 59/92] hotfix user input --- autogpt/agent/agent.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/autogpt/agent/agent.py b/autogpt/agent/agent.py index 9853f6a0b153..dca614c7f239 100644 --- a/autogpt/agent/agent.py +++ b/autogpt/agent/agent.py @@ -55,6 +55,8 @@ def start_interaction_loop(self): loop_count = 0 command_name = None arguments = None + user_input = "" + while True: # Discontinue if continuous limit is reached loop_count += 1 @@ -97,7 +99,6 @@ def start_interaction_loop(self): ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### # Get key press: Prompt the user to press enter to continue or escape # to exit - user_input = "" logger.typewriter_log( "NEXT ACTION: ", Fore.CYAN, From c71c61dc584a41d72e2b27b02fe75a9f64e3e029 Mon Sep 17 00:00:00 2001 From: Adrian Scott Date: Sun, 16 Apr 2023 18:14:16 -0500 Subject: [PATCH 60/92] Added one space after period for better formatting --- autogpt/memory/local.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/memory/local.py b/autogpt/memory/local.py index 6c7ee1b36a2f..9b911eeff9ff 100644 --- a/autogpt/memory/local.py +++ b/autogpt/memory/local.py @@ -54,7 +54,7 @@ def __init__(self, cfg) -> None: self.data = CacheContent() else: print( - f"Warning: The file '{self.filename}' does not exist." + f"Warning: The file '{self.filename}' does not exist. " "Local memory would not be saved to a file." 
) self.data = CacheContent() From 15059c2090be47d2a674113f509618b3f58a3510 Mon Sep 17 00:00:00 2001 From: Chris Cheney Date: Sun, 16 Apr 2023 17:28:25 -0500 Subject: [PATCH 61/92] ensure git operations occur in the working directory --- autogpt/commands/git_operations.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/autogpt/commands/git_operations.py b/autogpt/commands/git_operations.py index 3ff35cf31a28..675eb2283ba4 100644 --- a/autogpt/commands/git_operations.py +++ b/autogpt/commands/git_operations.py @@ -1,6 +1,7 @@ """Git operations for autogpt""" import git from autogpt.config import Config +from autogpt.workspace import path_in_workspace CFG = Config() @@ -16,8 +17,9 @@ def clone_repository(repo_url: str, clone_path: str) -> str: str: The result of the clone operation""" split_url = repo_url.split("//") auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url) + safe_clone_path = path_in_workspace(clone_path) try: - git.Repo.clone_from(auth_repo_url, clone_path) - return f"""Cloned {repo_url} to {clone_path}""" + git.Repo.clone_from(auth_repo_url, safe_clone_path) + return f"""Cloned {repo_url} to {safe_clone_path}""" except Exception as e: return f"Error: {str(e)}" From 56ecbeeef734019d1d3112fc2a657a14bff69ccb Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Mon, 17 Apr 2023 02:22:18 +0200 Subject: [PATCH 62/92] Clean up README --- README.md | 227 ++++++++++++++++++++++++------------------------------ 1 file changed, 102 insertions(+), 125 deletions(-) diff --git a/README.md b/README.md index f60aa9ffbc0a..07a088e3cb35 100644 --- a/README.md +++ b/README.md @@ -1,15 +1,19 @@ # Auto-GPT: An Autonomous GPT-4 Experiment +[![Unit Tests](https://img.shields.io/github/actions/workflow/status/Significant-Gravitas/Auto-GPT/ci.yml?label=unit%20tests)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml) +[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) +[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers) +[![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas) + + +
+ ### πŸ”΄ πŸ”΄ πŸ”΄ Urgent: USE `stable` not `master` πŸ”΄ πŸ”΄ πŸ”΄ -> This information takes precedence over any other information provided and should be followed accordingly. +**Download the latest `stable` release from here: https://github.com/Significant-Gravitas/Auto-GPT/releases/latest.** +The `master` branch may often be in a **broken** state. -Our workflow has been improved, but please note that `master` branch may often be in a **broken** state. -Please download the latest `stable` release from here: https://github.com/Torantulino/Auto-GPT/releases/latest. +
-![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social) -[![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas) -[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) -[![Unit Tests](https://github.com/Significant-Gravitaso/Auto-GPT/actions/workflows/ci.yml/badge.svg)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml) Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI. @@ -37,42 +41,6 @@ Development of this free, open-source project is made possible by all the Dradstone  CrypteorCapital  avy-ai  shawnharmsen  sunchongren  DailyBotHQ  mathewhawkins  MediConCenHK  kMag410  nicoguyon  Mobivs  jazgarewal  marv-technology  rapidstartup  Brodie0  lucas-chu  rejunity  comet-ml  ColinConwell  cfarquhar  ikarosai  ChrisDMT  Odin519Tomas  vkozacek  belharethsami  sultanmeghji  scryptedinc  johnculkin  RealChrisSean  fruition  jd3655  Web3Capital  allenstecat  tob-le-rone  SwftCoins  MetaPath01  joaomdmoura  ternary5  refinery1  josephcmiller2  webbcolton  tommygeee  lmaugustin  garythebat  Cameron-Fulton  angiaou  caitlynmeeks  MBassi91  Daniel1357  omphos  abhinav-pandey29  DataMetis  concreit  st617  RThaweewat  KiaArmani  Pythagora-io  AryaXAI  fabrietech  jun784  Mr-Bishop42  rickscode  projectonegames  rocks6  GalaxyVideoAgency  thisisjeffchen  TheStoneMX  txtr99  ZERO-A-ONE  

- - -## Table of Contents - -- [Auto-GPT: An Autonomous GPT-4 Experiment](#auto-gpt-an-autonomous-gpt-4-experiment) - - [πŸ”΄ πŸ”΄ πŸ”΄ Urgent: USE `stable` not `master` πŸ”΄ πŸ”΄ πŸ”΄](#----urgent-use-stable-not-master----) - - [Demo (30/03/2023):](#demo-30032023) - - [Table of Contents](#table-of-contents) - - [πŸš€ Features](#-features) - - [πŸ“‹ Requirements](#-requirements) - - [πŸ’Ύ Installation](#-installation) - - [πŸ”§ Usage](#-usage) - - [Logs](#logs) - - [Docker](#docker) - - [Command Line Arguments](#command-line-arguments) - - [πŸ—£οΈ Speech Mode](#️-speech-mode) - - [πŸ” Google API Keys Configuration](#-google-api-keys-configuration) - - [Setting up environment variables](#setting-up-environment-variables) - - [Memory Backend Setup](#memory-backend-setup) - - [Redis Setup](#redis-setup) - - [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup) - - [Milvus Setup](#milvus-setup) - - [Weaviate Setup](#weaviate-setup) - - [Setting up environment variables](#setting-up-environment-variables-1) - - [Setting Your Cache Type](#setting-your-cache-type) - - [View Memory Usage](#view-memory-usage) - - [🧠 Memory pre-seeding](#-memory-pre-seeding) - - [πŸ’€ Continuous Mode ⚠️](#-continuous-mode-️) - - [GPT3.5 ONLY Mode](#gpt35-only-mode) - - [πŸ–Ό Image Generation](#-image-generation) - - [⚠️ Limitations](#️-limitations) - - [πŸ›‘ Disclaimer](#-disclaimer) - - [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter) - - [Run tests](#run-tests) - - [Run linter](#run-linter) - ## πŸš€ Features - 🌐 Internet access for searches and information gathering @@ -83,16 +51,17 @@ Development of this free, open-source project is made possible by all the

Blake Werlinger +

πŸ’– Help Fund Auto-GPT's Development πŸ’–

If you can spare a coffee, you can help to cover the costs of developing Auto-GPT and help push the boundaries of fully autonomous AI! From 9589334a305198c837bfb8720ed6f06176b2f216 Mon Sep 17 00:00:00 2001 From: EH Date: Mon, 17 Apr 2023 03:34:02 +0100 Subject: [PATCH 67/92] Add File Downloading Capabilities (#1680) * Added 'download_file' command * Added util and fixed spinner * Fixed comma and added autogpt/auto_gpt_workspace to .gitignore * Fix linter issues * Fix more linter issues * Fix Lint Issues * Added 'download_file' command * Added util and fixed spinner * Fixed comma and added autogpt/auto_gpt_workspace to .gitignore * Fix linter issues * Fix more linter issues * Conditionally add the 'download_file' prompt * Update args.py * Removed Duplicate Prompt * Switched to using path_in_workspace function --- .gitignore | 1 + autogpt/app.py | 5 +++ autogpt/args.py | 16 +++++++++- autogpt/commands/file_operations.py | 49 ++++++++++++++++++++++++++++- autogpt/config/config.py | 1 + autogpt/prompt.py | 10 ++++++ autogpt/spinner.py | 15 ++++++++- autogpt/utils.py | 13 ++++++++ 8 files changed, 107 insertions(+), 3 deletions(-) diff --git a/.gitignore b/.gitignore index eda7f32734a2..2220ef6e3a9d 100644 --- a/.gitignore +++ b/.gitignore @@ -3,6 +3,7 @@ autogpt/keys.py autogpt/*json autogpt/node_modules/ autogpt/__pycache__/keys.cpython-310.pyc +autogpt/auto_gpt_workspace package-lock.json *.pyc auto_gpt_workspace/* diff --git a/autogpt/app.py b/autogpt/app.py index 78b5bd2fdeb0..19c075f0b09a 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -17,6 +17,7 @@ read_file, search_files, write_to_file, + download_file ) from autogpt.json_fixes.parsing import fix_and_parse_json from autogpt.memory import get_memory @@ -164,6 +165,10 @@ def execute_command(command_name: str, arguments): return delete_file(arguments["file"]) elif command_name == "search_files": return search_files(arguments["directory"]) + elif command_name == "download_file": + if not CFG.allow_downloads: + return "Error: You do not have user authorization to download files locally." + return download_file(arguments["url"], arguments["file"]) elif command_name == "browse_website": return browse_website(arguments["url"], arguments["question"]) # TODO: Change these to take in a file rather than pasted code, if diff --git a/autogpt/args.py b/autogpt/args.py index eca3233472b0..f0e9c07a362a 100644 --- a/autogpt/args.py +++ b/autogpt/args.py @@ -1,7 +1,7 @@ """This module contains the argument parsing logic for the script.""" import argparse -from colorama import Fore +from colorama import Fore, Back, Style from autogpt import utils from autogpt.config import Config from autogpt.logs import logger @@ -63,6 +63,12 @@ def parse_arguments() -> None: help="Specifies which ai_settings.yaml file to use, will also automatically" " skip the re-prompt.", ) + parser.add_argument( + '--allow-downloads', + action='store_true', + dest='allow_downloads', + help='Dangerous: Allows Auto-GPT to download files natively.' 
+ ) args = parser.parse_args() if args.debug: @@ -133,5 +139,13 @@ def parse_arguments() -> None: CFG.ai_settings_file = file CFG.skip_reprompt = True + if args.allow_downloads: + logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED") + logger.typewriter_log("WARNING: ", Fore.YELLOW, + f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} " + + "It is recommended that you monitor any files it downloads carefully.") + logger.typewriter_log("WARNING: ", Fore.YELLOW, f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}") + CFG.allow_downloads = True + if args.browser_name: CFG.selenium_web_browser = args.browser_name diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py index 8abc2e232939..d273c1a34ddd 100644 --- a/autogpt/commands/file_operations.py +++ b/autogpt/commands/file_operations.py @@ -4,9 +4,16 @@ import os import os.path from pathlib import Path -from typing import Generator +from typing import Generator, List +import requests +from requests.adapters import HTTPAdapter +from requests.adapters import Retry +from colorama import Fore, Back +from autogpt.spinner import Spinner +from autogpt.utils import readable_file_size from autogpt.workspace import path_in_workspace, WORKSPACE_PATH + LOG_FILE = "file_logger.txt" LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE @@ -214,3 +221,43 @@ def search_files(directory: str) -> list[str]: found_files.append(relative_path) return found_files + + +def download_file(url, filename): + """Downloads a file + Args: + url (str): URL of the file to download + filename (str): Filename to save the file as + """ + safe_filename = path_in_workspace(filename) + try: + message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}" + with Spinner(message) as spinner: + session = requests.Session() + retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504]) + adapter = HTTPAdapter(max_retries=retry) + session.mount('http://', adapter) + session.mount('https://', adapter) + + total_size = 0 + downloaded_size = 0 + + with session.get(url, allow_redirects=True, stream=True) as r: + r.raise_for_status() + total_size = int(r.headers.get('Content-Length', 0)) + downloaded_size = 0 + + with open(safe_filename, 'wb') as f: + for chunk in r.iter_content(chunk_size=8192): + f.write(chunk) + downloaded_size += len(chunk) + + # Update the progress message + progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}" + spinner.update_message(f"{message} {progress}") + + return f'Successfully downloaded and locally stored file: "{filename}"! 
(Size: {readable_file_size(total_size)})' + except requests.HTTPError as e: + return f"Got an HTTP Error whilst trying to download file: {e}" + except Exception as e: + return "Error: " + str(e) diff --git a/autogpt/config/config.py b/autogpt/config/config.py index 22da52b047e7..fe6f4f325852 100644 --- a/autogpt/config/config.py +++ b/autogpt/config/config.py @@ -24,6 +24,7 @@ def __init__(self) -> None: self.continuous_limit = 0 self.speak_mode = False self.skip_reprompt = False + self.allow_downloads = False self.selenium_web_browser = os.getenv("USE_WEB_BROWSER", "chrome") self.ai_settings_file = os.getenv("AI_SETTINGS_FILE", "ai_settings.yaml") diff --git a/autogpt/prompt.py b/autogpt/prompt.py index 18a5736c19e8..a2b20b1fefb0 100644 --- a/autogpt/prompt.py +++ b/autogpt/prompt.py @@ -105,6 +105,16 @@ def get_prompt() -> str: ), ) + # Only add the download file command if the AI is allowed to execute it + if cfg.allow_downloads: + commands.append( + ( + "Downloads a file from the internet, and stores it locally", + "download_file", + {"url": "", "file": ""} + ), + ) + # Add these command last. commands.append( ("Do Nothing", "do_nothing", {}), diff --git a/autogpt/spinner.py b/autogpt/spinner.py index 56b4f20a686b..febcea8eb110 100644 --- a/autogpt/spinner.py +++ b/autogpt/spinner.py @@ -29,12 +29,14 @@ def spin(self) -> None: time.sleep(self.delay) sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - def __enter__(self) -> None: + def __enter__(self): """Start the spinner""" self.running = True self.spinner_thread = threading.Thread(target=self.spin) self.spinner_thread.start() + return self + def __exit__(self, exc_type, exc_value, exc_traceback) -> None: """Stop the spinner @@ -48,3 +50,14 @@ def __exit__(self, exc_type, exc_value, exc_traceback) -> None: self.spinner_thread.join() sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") sys.stdout.flush() + + def update_message(self, new_message, delay=0.1): + """Update the spinner message + Args: + new_message (str): New message to display + delay: Delay in seconds before updating the message + """ + time.sleep(delay) + sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") # Clear the current message + sys.stdout.flush() + self.message = new_message diff --git a/autogpt/utils.py b/autogpt/utils.py index 59709d02be6c..11d98d1b7429 100644 --- a/autogpt/utils.py +++ b/autogpt/utils.py @@ -24,3 +24,16 @@ def validate_yaml_file(file: str): ) return (True, f"Successfully validated {Fore.CYAN}`{file}`{Fore.RESET}!") + + +def readable_file_size(size, decimal_places=2): + """Converts the given size in bytes to a readable format. + Args: + size: Size in bytes + decimal_places (int): Number of decimal places to display + """ + for unit in ['B', 'KB', 'MB', 'GB', 'TB']: + if size < 1024.0: + break + size /= 1024.0 + return f"{size:.{decimal_places}f} {unit}" From 0fa807394711010a17fe37a3afbce81978e233e2 Mon Sep 17 00:00:00 2001 From: Ben Song Date: Mon, 17 Apr 2023 11:53:05 +0800 Subject: [PATCH 68/92] add docker requirements - jsonschema --- requirements-docker.txt | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/requirements-docker.txt b/requirements-docker.txt index 3a8a344cad27..a6018f8f9f4e 100644 --- a/requirements-docker.txt +++ b/requirements-docker.txt @@ -24,4 +24,5 @@ pre-commit black isort gitpython==3.1.31 -tweepy \ No newline at end of file +tweepy +jsonschema \ No newline at end of file From 64383776a24864f32f69e4f56214089940623664 Mon Sep 17 00:00:00 2001 From: "Gabriel R. 
Barbosa" <12158575+gabrielrbarbosa@users.noreply.github.com> Date: Mon, 17 Apr 2023 03:04:35 -0300 Subject: [PATCH 69/92] Update brian.py - Prevent TypeError exception TypeError: BrianSpeech._speech() takes 2 positional arguments but 3 were given. Use the same arguments as used in _speech method from gtts.py --- autogpt/speech/brian.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/speech/brian.py b/autogpt/speech/brian.py index e581bbcc8d50..b9298f55aa7f 100644 --- a/autogpt/speech/brian.py +++ b/autogpt/speech/brian.py @@ -13,7 +13,7 @@ def _setup(self) -> None: """Setup the voices, API key, etc.""" pass - def _speech(self, text: str) -> bool: + def _speech(self, text: str, _: int = 0) -> bool: """Speak text using Brian with the streamelements API Args: From 60b779a9059dbd274b336a27f9a6b6db0bde53fd Mon Sep 17 00:00:00 2001 From: Alastair D'Silva Date: Mon, 17 Apr 2023 17:09:13 +1000 Subject: [PATCH 70/92] Remove requirements-docker.txt This file needs to be maintained parallel to requirements.txt, but isn't, causes problems when new dependencies are introduced. Instead, derive the Docker dependencies from the stock ones. Signed-off-by: Alastair D'Silva --- Dockerfile | 5 +++-- requirements-docker.txt | 28 ---------------------------- requirements.txt | 2 ++ 3 files changed, 5 insertions(+), 30 deletions(-) delete mode 100644 requirements-docker.txt diff --git a/Dockerfile b/Dockerfile index 9886d74266f2..5219e7d11495 100644 --- a/Dockerfile +++ b/Dockerfile @@ -17,8 +17,9 @@ RUN chown appuser:appuser /home/appuser USER appuser # Copy the requirements.txt file and install the requirements -COPY --chown=appuser:appuser requirements-docker.txt . -RUN pip install --no-cache-dir --user -r requirements-docker.txt +COPY --chown=appuser:appuser requirements.txt . 
+RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \ + pip install --no-cache-dir --user -r requirements.txt # Copy the application files COPY --chown=appuser:appuser autogpt/ ./autogpt diff --git a/requirements-docker.txt b/requirements-docker.txt deleted file mode 100644 index a6018f8f9f4e..000000000000 --- a/requirements-docker.txt +++ /dev/null @@ -1,28 +0,0 @@ -beautifulsoup4 -colorama==0.4.6 -openai==0.27.2 -playsound==1.2.2 -python-dotenv==1.0.0 -pyyaml==6.0 -readability-lxml==0.8.1 -requests -tiktoken==0.3.3 -gTTS==2.3.1 -docker -duckduckgo-search -google-api-python-client #(https://developers.google.com/custom-search/v1/overview) -pinecone-client==2.2.1 -redis -orjson -Pillow -selenium -webdriver-manager -coverage -flake8 -numpy -pre-commit -black -isort -gitpython==3.1.31 -tweepy -jsonschema \ No newline at end of file diff --git a/requirements.txt b/requirements.txt index 843b66bfe454..3f1eee5b7da3 100644 --- a/requirements.txt +++ b/requirements.txt @@ -30,6 +30,8 @@ sourcery isort gitpython==3.1.31 +# Items below this point will not be included in the Docker Image + # Testing dependencies pytest asynctest From 2b87245e2231e5d13022df1c9f5cc07584e254d6 Mon Sep 17 00:00:00 2001 From: XFFXFF <1247714429@qq.com> Date: Mon, 17 Apr 2023 16:21:52 +0800 Subject: [PATCH 71/92] fix a missing import --- autogpt/memory/local.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/autogpt/memory/local.py b/autogpt/memory/local.py index 9b911eeff9ff..803b6dc6ebb4 100644 --- a/autogpt/memory/local.py +++ b/autogpt/memory/local.py @@ -2,13 +2,13 @@ import dataclasses import os -from typing import Any +from typing import Any, List import numpy as np import orjson -from autogpt.memory.base import MemoryProviderSingleton from autogpt.llm_utils import create_embedding_with_ada +from autogpt.memory.base import MemoryProviderSingleton EMBED_DIM = 1536 SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS From bd25822b35ab924290f28b104e519b49b8930591 Mon Sep 17 00:00:00 2001 From: Mad Misaghi Date: Mon, 17 Apr 2023 12:24:27 +0330 Subject: [PATCH 72/92] Update .env.template addedMilvus --- .env.template | 1 + 1 file changed, 1 insertion(+) diff --git a/.env.template b/.env.template index eeff2907cb24..9593276f2f7c 100644 --- a/.env.template +++ b/.env.template @@ -54,6 +54,7 @@ SMART_TOKEN_LIMIT=8000 # local - Default # pinecone - Pinecone (if configured) # redis - Redis (if configured) +# milvus - Milvus (if configured) MEMORY_BACKEND=local ### PINECONE From 74a8b5d83256c5b9116a375a4520d2727e52bece Mon Sep 17 00:00:00 2001 From: suzuken Date: Mon, 17 Apr 2023 18:15:49 +0900 Subject: [PATCH 73/92] config.py: update OpenAI link --- autogpt/config/config.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt/config/config.py b/autogpt/config/config.py index fe6f4f325852..a950453e8a0f 100644 --- a/autogpt/config/config.py +++ b/autogpt/config/config.py @@ -237,5 +237,5 @@ def check_openai_api_key() -> None: Fore.RED + "Please set your OpenAI API key in .env or as an environment variable." 
) - print("You can get your key from https://beta.openai.com/account/api-keys") + print("You can get your key from https://platform.openai.com/account/api-keys") exit(1) From 125f0ba61ad57188e6f4f109f2463f31530044dd Mon Sep 17 00:00:00 2001 From: Bob van Luijt Date: Mon, 17 Apr 2023 12:46:27 +0200 Subject: [PATCH 74/92] Update README.md with Weaviate installation and reference --- README.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/README.md b/README.md index 71957748f067..b919f51d313c 100644 --- a/README.md +++ b/README.md @@ -65,6 +65,7 @@ Development of this free, open-source project is made possible by all the =3.15.4"`. +#### Install the Weaviate client + +Install the Weaviate client before usage. + +``` +$ pip install weaviate-client +``` + #### Setting up environment variables In your `.env` file set the following: From 10cd0f3362ad6c86eefe7fc2a1f276ca49af98fe Mon Sep 17 00:00:00 2001 From: Eesa Hamza Date: Mon, 17 Apr 2023 07:32:40 +0300 Subject: [PATCH 75/92] Add the OpenAI API Keys Configuration to the top of the readme --- README.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/README.md b/README.md index 71957748f067..dbb44f52a103 100644 --- a/README.md +++ b/README.md @@ -67,6 +67,18 @@ Development of this free, open-source project is made possible by all the Billing](./docs/imgs/openai-api-key-billing-paid-account.png) + +#### **PLEASE ENSURE YOU HAVE DONE THIS STEP BEFORE PROCEEDING, OTHERWISE NOTHING WILL WORK!** + ## πŸ’Ύ Installation To install Auto-GPT, follow these steps: @@ -207,18 +219,6 @@ python -m autogpt --speak - Adam : pNInz6obpgDQGcFmaJgB - Sam : yoZ06aMxZJJ28mfd3POQ - -## OpenAI API Keys Configuration - -Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys. - -To use OpenAI API key for Auto-GPT, you NEED to have billing set up (AKA paid account). - -You can set up paid account at https://platform.openai.com/account/billing/overview. - -![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./docs/imgs/openai-api-key-billing-paid-account.png) - - ## πŸ” Google API Keys Configuration This section is optional, use the official google api if you are having issues with error 429 when running a google search. 
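For reference, the official API route this README section configures boils down to a call like the one below; this is a minimal sketch using the `google-api-python-client` package and the `GOOGLE_API_KEY` / `CUSTOM_SEARCH_ENGINE_ID` environment variables the section describes. Function and variable names here are illustrative; the actual implementation lives in `autogpt/commands/google_search.py`.

```python
import os

from googleapiclient.discovery import build


def official_google_search(query: str, num_results: int = 8) -> list[str]:
    """Query the Custom Search JSON API and return the result links."""
    service = build("customsearch", "v1", developerKey=os.environ["GOOGLE_API_KEY"])
    response = (
        service.cse()
        .list(q=query, cx=os.environ["CUSTOM_SEARCH_ENGINE_ID"], num=num_results)
        .execute()
    )
    return [item["link"] for item in response.get("items", [])]
```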
From 8dadf79614969a58a29b44cd9af4127795a153d6 Mon Sep 17 00:00:00 2001 From: H-jj-R Date: Mon, 17 Apr 2023 13:25:49 +0100 Subject: [PATCH 76/92] Spelling fixes --- .github/PULL_REQUEST_TEMPLATE.md | 2 +- autogpt/app.py | 4 ++-- autogpt/commands/git_operations.py | 2 +- autogpt/commands/google_search.py | 4 ++-- autogpt/llm_utils.py | 2 +- autogpt/memory/milvus.py | 2 +- autogpt/setup.py | 2 +- autogpt/speech/eleven_labs.py | 2 +- outputs/logs/message-log-1.txt | 2 +- 9 files changed, 11 insertions(+), 11 deletions(-) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index c355965ab4d5..cf7ffbf320f9 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -30,4 +30,4 @@ By following these guidelines, your PRs are more likely to be merged quickly aft - + diff --git a/autogpt/app.py b/autogpt/app.py index 19c075f0b09a..ad9f18d1d477 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -212,7 +212,7 @@ def execute_command(command_name: str, arguments): def get_text_summary(url: str, question: str) -> str: - """Return the results of a google search + """Return the results of a Google search Args: url (str): The url to scrape @@ -227,7 +227,7 @@ def get_text_summary(url: str, question: str) -> str: def get_hyperlinks(url: str) -> Union[str, List[str]]: - """Return the results of a google search + """Return the results of a Google search Args: url (str): The url to scrape diff --git a/autogpt/commands/git_operations.py b/autogpt/commands/git_operations.py index 675eb2283ba4..05ce2a212919 100644 --- a/autogpt/commands/git_operations.py +++ b/autogpt/commands/git_operations.py @@ -7,7 +7,7 @@ def clone_repository(repo_url: str, clone_path: str) -> str: - """Clone a github repository locally + """Clone a GitHub repository locally Args: repo_url (str): The URL of the repository to clone diff --git a/autogpt/commands/google_search.py b/autogpt/commands/google_search.py index 148ba1d0e1cf..7d38ce7568d2 100644 --- a/autogpt/commands/google_search.py +++ b/autogpt/commands/google_search.py @@ -11,7 +11,7 @@ def google_search(query: str, num_results: int = 8) -> str: - """Return the results of a google search + """Return the results of a Google search Args: query (str): The search query. @@ -35,7 +35,7 @@ def google_search(query: str, num_results: int = 8) -> str: def google_official_search(query: str, num_results: int = 8) -> str | list[str]: - """Return the results of a google search using the official Google API + """Return the results of a Google search using the official Google API Args: query (str): The search query. diff --git a/autogpt/llm_utils.py b/autogpt/llm_utils.py index 2075f93446eb..1d739e4a2b22 100644 --- a/autogpt/llm_utils.py +++ b/autogpt/llm_utils.py @@ -121,7 +121,7 @@ def create_chat_completion( def create_embedding_with_ada(text) -> list: - """Create a embedding with text-ada-002 using the OpenAI SDK""" + """Create an embedding with text-ada-002 using the OpenAI SDK""" num_retries = 10 for attempt in range(num_retries): backoff = 2 ** (attempt + 2) diff --git a/autogpt/memory/milvus.py b/autogpt/memory/milvus.py index c6e7d5a372eb..7a2571d0a3fd 100644 --- a/autogpt/memory/milvus.py +++ b/autogpt/memory/milvus.py @@ -46,7 +46,7 @@ def __init__(self, cfg) -> None: self.collection.load() def add(self, data) -> str: - """Add a embedding of data into memory. + """Add an embedding of data into memory. Args: data (str): The raw text to construct embedding index. 
diff --git a/autogpt/setup.py b/autogpt/setup.py index 5315c01db0f3..79661905f4e1 100644 --- a/autogpt/setup.py +++ b/autogpt/setup.py @@ -1,4 +1,4 @@ -"""Setup the AI and its goals""" +"""Set up the AI and its goals""" from colorama import Fore, Style from autogpt import utils from autogpt.config.ai_config import AIConfig diff --git a/autogpt/speech/eleven_labs.py b/autogpt/speech/eleven_labs.py index 0af48cae153a..186ec6fc0211 100644 --- a/autogpt/speech/eleven_labs.py +++ b/autogpt/speech/eleven_labs.py @@ -14,7 +14,7 @@ class ElevenLabsSpeech(VoiceBase): """ElevenLabs speech class""" def _setup(self) -> None: - """Setup the voices, API key, etc. + """Set up the voices, API key, etc. Returns: None: None diff --git a/outputs/logs/message-log-1.txt b/outputs/logs/message-log-1.txt index 8a719016ce23..6b146b983373 100644 --- a/outputs/logs/message-log-1.txt +++ b/outputs/logs/message-log-1.txt @@ -483,7 +483,7 @@ How to Become a Freelance Artificial Intelligence Engineer Springboard https://www.springboard.com β€Ί Blog β€Ί Data Science -29/10/2021 β€” There are numerous freelancing platforms where you can kick start your career as a freelance artificial intelligence engineer. +29/10/2021 β€” There are numerous freelancing platforms where you can kick-start your career as a freelance artificial intelligence engineer. More to ask Is AI good for freelancing? What business can I start with AI? From 10b2458f58ca91f38c2c6418564819e749d128ba Mon Sep 17 00:00:00 2001 From: NEBULITE Berlin <40317630+Funkelfetisch@users.noreply.github.com> Date: Mon, 17 Apr 2023 14:50:28 +0200 Subject: [PATCH 77/92] Update .env.template "redis" as hostname for redis to correctly use the docker compose internal networking feature --- .env.template | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.env.template b/.env.template index eeff2907cb24..d820b15f7dd3 100644 --- a/.env.template +++ b/.env.template @@ -63,7 +63,7 @@ PINECONE_API_KEY=your-pinecone-api-key PINECONE_ENV=your-pinecone-region ### REDIS -# REDIS_HOST - Redis host (Default: localhost) +# REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose) # REDIS_PORT - Redis port (Default: 6379) # REDIS_PASSWORD - Redis password (Default: "") # WIPE_REDIS_ON_START - Wipes data / index on start (Default: False) From d47466ddf949d72787d3a04db3959b5a579a702d Mon Sep 17 00:00:00 2001 From: superherointj <5861043+superherointj@users.noreply.github.com> Date: Wed, 12 Apr 2023 15:48:46 -0300 Subject: [PATCH 78/92] Add Nix flakes support through direnv * Nix (https://nixos.org) is a reproducible build system. * Enables Nix users to use/develop Auto-GPT, without installing PIP or any other future Auto-GPT dependency. --- .envrc | 4 ++++ .gitignore | 1 + 2 files changed, 5 insertions(+) create mode 100644 .envrc diff --git a/.envrc b/.envrc new file mode 100644 index 000000000000..a7ad726377a8 --- /dev/null +++ b/.envrc @@ -0,0 +1,4 @@ +# Upon entering directory, direnv requests user permission once to automatically load project dependencies onwards. +# Eliminating the need of running "nix develop github:superherointj/nix-auto-gpt" for Nix users to develop/use Auto-GPT. 
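+# (The `IN_NIX_SHELL` check below makes the hook a no-op when the user is already inside a Nix shell.)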
+ +[[ -z $IN_NIX_SHELL ]] && use flake github:superherointj/nix-auto-gpt diff --git a/.gitignore b/.gitignore index 2220ef6e3a9d..26d7e5a3f7b3 100644 --- a/.gitignore +++ b/.gitignore @@ -127,6 +127,7 @@ celerybeat.pid *.sage.py # Environments +.direnv/ .env .venv env/ From d4860fe9f09dba4bc8d9311b9a575098e3809ddc Mon Sep 17 00:00:00 2001 From: lfricken <6675120+lfricken@users.noreply.github.com> Date: Mon, 17 Apr 2023 10:27:53 -0500 Subject: [PATCH 79/92] Don't incapacitate yourself! (#1240) * subprocesses * fix lint * fix more lint * fix merge * fix merge again --- autogpt/app.py | 15 ++++++++++++++- autogpt/commands/execute_code.py | 30 ++++++++++++++++++++++++++++++ autogpt/prompt.py | 11 +++++++++++ 3 files changed, 55 insertions(+), 1 deletion(-) diff --git a/autogpt/app.py b/autogpt/app.py index 19c075f0b09a..6f51fd98fba7 100644 --- a/autogpt/app.py +++ b/autogpt/app.py @@ -10,7 +10,11 @@ from autogpt.commands.image_gen import generate_image from autogpt.commands.audio_text import read_audio_from_file from autogpt.commands.web_requests import scrape_links, scrape_text -from autogpt.commands.execute_code import execute_python_file, execute_shell +from autogpt.commands.execute_code import ( + execute_python_file, + execute_shell, + execute_shell_popen, +) from autogpt.commands.file_operations import ( append_to_file, delete_file, @@ -191,6 +195,15 @@ def execute_command(command_name: str, arguments): " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' " "in your config. Do not attempt to bypass the restriction." ) + elif command_name == "execute_shell_popen": + if CFG.execute_local_commands: + return execute_shell_popen(arguments["command_line"]) + else: + return ( + "You are not allowed to run local shell commands. To execute" + " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' " + "in your config. Do not attempt to bypass the restriction." 
+ ) elif command_name == "read_audio_from_file": return read_audio_from_file(arguments["file"]) elif command_name == "generate_image": diff --git a/autogpt/commands/execute_code.py b/autogpt/commands/execute_code.py index 2cc797cbbafb..e2a8d994aaf0 100644 --- a/autogpt/commands/execute_code.py +++ b/autogpt/commands/execute_code.py @@ -114,6 +114,36 @@ def execute_shell(command_line: str) -> str: return output +def execute_shell_popen(command_line): + """Execute a shell command with Popen and returns an english description + of the event and the process id + + Args: + command_line (str): The command line to execute + + Returns: + str: Description of the fact that the process started and its id + """ + current_dir = os.getcwd() + + if WORKING_DIRECTORY not in current_dir: # Change dir into workspace if necessary + work_dir = os.path.join(os.getcwd(), WORKING_DIRECTORY) + os.chdir(work_dir) + + print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'") + + do_not_show_output = subprocess.DEVNULL + process = subprocess.Popen( + command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output + ) + + # Change back to whatever the prior working dir was + + os.chdir(current_dir) + + return f"Subprocess started with PID:'{str(process.pid)}'" + + def we_are_running_in_a_docker_container() -> bool: """Check if we are running in a Docker container diff --git a/autogpt/prompt.py b/autogpt/prompt.py index a2b20b1fefb0..33098af035a4 100644 --- a/autogpt/prompt.py +++ b/autogpt/prompt.py @@ -38,6 +38,9 @@ def get_prompt() -> str: prompt_generator.add_constraint( 'Exclusively use the commands listed in double quotes e.g. "command name"' ) + prompt_generator.add_constraint( + "Use subprocesses for commands that will not terminate within a few minutes" + ) # Define the command list commands = [ @@ -81,6 +84,7 @@ def get_prompt() -> str: {"code": "", "focus": ""}, ), ("Execute Python File", "execute_python_file", {"file": ""}), + ("Task Complete (Shutdown)", "task_complete", {"reason": ""}), ("Generate Image", "generate_image", {"prompt": ""}), ("Send Tweet", "send_tweet", {"text": ""}), ] @@ -104,6 +108,13 @@ def get_prompt() -> str: {"command_line": ""}, ), ) + commands.append( + ( + "Execute Shell Command Popen, non-interactive commands only", + "execute_shell_popen", + {"command_line": ""} + ), + ) # Only add the download file command if the AI is allowed to execute it if cfg.allow_downloads: From 35106ef662fda42b299de5e525ef31ae4bac39e7 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Mon, 17 Apr 2023 17:33:50 +0200 Subject: [PATCH 80/92] feat(pr-labels): auto-label conflicting PRs --- .github/workflows/pr-label.yml | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) create mode 100644 .github/workflows/pr-label.yml diff --git a/.github/workflows/pr-label.yml b/.github/workflows/pr-label.yml new file mode 100644 index 000000000000..9f5127e497cd --- /dev/null +++ b/.github/workflows/pr-label.yml @@ -0,0 +1,22 @@ +name: "Pull Request auto-label" +on: + # So that PRs touching the same files as the push are updated + push: + # So that the `dirtyLabel` is removed if conflicts are resolve + # We recommend `pull_request_target` so that github secrets are available. 
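+  # (pull_request_target runs the workflow definition from the base branch, so fork code is not executed with those secrets.)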
+ # In `pull_request` we wouldn't be able to change labels of fork PRs + pull_request_target: + types: [opened, synchronize] + +jobs: + conflicts: + runs-on: ubuntu-latest + steps: + - name: Update PRs with conflict labels + uses: eps1lon/actions-label-merge-conflict@releases/2.x + with: + dirtyLabel: "conflicts" + #removeOnDirtyLabel: "PR: ready to ship" + repoToken: "${{ secrets.GITHUB_TOKEN }}" + commentOnDirty: "This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request." + commentOnClean: "Conflicts have been resolved! πŸŽ‰ A maintainer will review the pull request shortly." From baf31e69e53e51ae0d93976f17c74c4f2a6ed895 Mon Sep 17 00:00:00 2001 From: rickythefox Date: Mon, 17 Apr 2023 17:45:23 +0200 Subject: [PATCH 81/92] Use python:3-alpine image for code execution (#1192) --- autogpt/commands/execute_code.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/autogpt/commands/execute_code.py b/autogpt/commands/execute_code.py index e2a8d994aaf0..70b33a975cd2 100644 --- a/autogpt/commands/execute_code.py +++ b/autogpt/commands/execute_code.py @@ -40,10 +40,10 @@ def execute_python_file(file: str): try: client = docker.from_env() - # You can replace 'python:3.8' with the desired Python image/version + # You can replace this with the desired Python image/version # You can find available Python images on Docker Hub: # https://hub.docker.com/_/python - image_name = "python:3.10" + image_name = "python:3-alpine" try: client.images.get(image_name) print(f"Image '{image_name}' found locally") From e7c3ff9b9edd07c18ecf3cff572694105de722b3 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Mon, 17 Apr 2023 17:47:58 +0200 Subject: [PATCH 82/92] fix(pr-label): set job permissions explicitly --- .github/workflows/pr-label.yml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.github/workflows/pr-label.yml b/.github/workflows/pr-label.yml index 9f5127e497cd..63696e42d0ff 100644 --- a/.github/workflows/pr-label.yml +++ b/.github/workflows/pr-label.yml @@ -11,6 +11,9 @@ on: jobs: conflicts: runs-on: ubuntu-latest + permissions: + contents: read + pull-requests: write steps: - name: Update PRs with conflict labels uses: eps1lon/actions-label-merge-conflict@releases/2.x From a2a6f84f139b683fd135df89ff370ad5f6a7b974 Mon Sep 17 00:00:00 2001 From: REal0day Date: Sun, 16 Apr 2023 15:14:54 -0500 Subject: [PATCH 83/92] internal resource request bug --- autogpt/commands/web_requests.py | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/autogpt/commands/web_requests.py b/autogpt/commands/web_requests.py index 50d8d383cb1a..70ada90741d2 100644 --- a/autogpt/commands/web_requests.py +++ b/autogpt/commands/web_requests.py @@ -58,9 +58,28 @@ def check_local_file_access(url: str) -> bool: """ local_prefixes = [ "file:///", + "file://localhost/", "file://localhost", "http://localhost", + "http://localhost/", "https://localhost", + "https://localhost/", + "http://2130706433", + "http://2130706433/", + "https://2130706433", + "https://2130706433/", + "http://127.0.0.1/", + "http://127.0.0.1", + "https://127.0.0.1/", + "https://127.0.0.1", + "https://0.0.0.0/", + "https://0.0.0.0", + "http://0.0.0.0/", + "http://0.0.0.0", + "http://0000", + "http://0000/", + "https://0000", + "https://0000/" ] return any(url.startswith(prefix) for prefix in local_prefixes) From 23e703132653cc33a11dceee557c4f880059347e Mon Sep 17 00:00:00 2001 From: jimmycliff obonyo Date: Sun, 16 Apr 2023 00:37:50 +0300 Subject: [PATCH 84/92] install 
From 23e703132653cc33a11dceee557c4f880059347e Mon Sep 17 00:00:00 2001
From: jimmycliff obonyo
Date: Sun, 16 Apr 2023 00:37:50 +0300
Subject: [PATCH 84/92] install chrome/firefox for headless browsing when
 running in docker container

---
 Dockerfile | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/Dockerfile b/Dockerfile
index 9886d74266f2..039ccf26a936 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -5,6 +5,16 @@ FROM python:3.11-slim
 RUN apt-get -y update
 RUN apt-get -y install git chromium-driver
 
+# Install Xvfb and other dependencies for headless browser testing
+RUN apt-get update \
+  && apt-get install -y wget gnupg2 libgtk-3-0 libdbus-glib-1-2 dbus-x11 xvfb ca-certificates
+
+# Install Firefox / Chromium
+RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
+  && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
+  && apt-get update \
+  && apt-get install -y chromium firefox-esr
+
 # Set environment variables
 ENV PIP_NO_CACHE_DIR=yes \
     PYTHONUNBUFFERED=1 \
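A quick way to verify the resulting image is to drive the newly installed browser headlessly from inside the container. This is a rough smoke-test sketch, assuming `selenium` is available in the image; the Chromium binary path is the Debian package default and may need adjusting:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.binary_location = "/usr/bin/chromium"    # Debian package path (assumption)
options.add_argument("--headless")
options.add_argument("--no-sandbox")             # Chromium refuses the sandbox when run as root in a container
options.add_argument("--disable-dev-shm-usage")  # Docker's small /dev/shm otherwise crashes tabs

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    print(driver.title)  # expected: "Example Domain"
finally:
    driver.quit()
```

Xvfb from the same commit covers browsers or tests that insist on a real display; for plain headless Selenium the `--headless` flag is usually enough.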
From 6b64158356a02d9bfd410913b157ccd31ce5ea03 Mon Sep 17 00:00:00 2001
From: Tom Kaitchuck
Date: Sun, 16 Apr 2023 01:53:24 -0700
Subject: [PATCH 85/92] Unbound summary size

Signed-off-by: Tom Kaitchuck
---
 .env.template              | 2 --
 autogpt/config/config.py   | 5 -----
 autogpt/processing/text.py | 2 --
 3 files changed, 9 deletions(-)

diff --git a/.env.template b/.env.template
index eeff2907cb24..209a29b963bc 100644
--- a/.env.template
+++ b/.env.template
@@ -5,8 +5,6 @@ EXECUTE_LOCAL_COMMANDS=False
 # BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunk stored in memory
 BROWSE_CHUNK_MAX_LENGTH=8192
-# BROWSE_SUMMARY_MAX_TOKEN - Define the maximum length of the summary generated by GPT agent when browsing website
-BROWSE_SUMMARY_MAX_TOKEN=300
 # USER_AGENT - Define the user-agent used by the requests library to browse website (string)
 # USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
 # AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)

diff --git a/autogpt/config/config.py b/autogpt/config/config.py
index fe6f4f325852..a8b48b4929d3 100644
--- a/autogpt/config/config.py
+++ b/autogpt/config/config.py
@@ -33,7 +33,6 @@ def __init__(self) -> None:
         self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
         self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))
         self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
-        self.browse_summary_max_token = int(os.getenv("BROWSE_SUMMARY_MAX_TOKEN", 300))
 
         self.openai_api_key = os.getenv("OPENAI_API_KEY")
         self.temperature = float(os.getenv("TEMPERATURE", "1"))
@@ -188,10 +187,6 @@ def set_browse_chunk_max_length(self, value: int) -> None:
         """Set the browse_website command chunk max length value."""
         self.browse_chunk_max_length = value
 
-    def set_browse_summary_max_token(self, value: int) -> None:
-        """Set the browse_website command summary max token value."""
-        self.browse_summary_max_token = value
-
     def set_openai_api_key(self, value: str) -> None:
         """Set the OpenAI API key value."""
         self.openai_api_key = value

diff --git a/autogpt/processing/text.py b/autogpt/processing/text.py
index d30036d8789f..657b0b0eb434 100644
--- a/autogpt/processing/text.py
+++ b/autogpt/processing/text.py
@@ -78,7 +78,6 @@ def summarize_text(
         summary = create_chat_completion(
             model=CFG.fast_llm_model,
             messages=messages,
-            max_tokens=CFG.browse_summary_max_token,
         )
         summaries.append(summary)
         print(f"Added chunk {i + 1} summary to memory")
@@ -95,7 +94,6 @@ def summarize_text(
     return create_chat_completion(
         model=CFG.fast_llm_model,
         messages=messages,
-        max_tokens=CFG.browse_summary_max_token,
     )

From def96ffe2f5b42ed41fc7fc1844965a0344cf9fc Mon Sep 17 00:00:00 2001
From: Steve Byerly
Date: Mon, 17 Apr 2023 02:06:46 +0000
Subject: [PATCH 86/92] fix split file

---
 autogpt/commands/file_operations.py | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index d273c1a34ddd..00ae466dbcf9 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -49,14 +49,12 @@ def log_operation(operation: str, filename: str) -> None:
     append_to_file(LOG_FILE, log_entry, shouldLog = False)
 
-
 def split_file(
     content: str, max_length: int = 4000, overlap: int = 0
 ) -> Generator[str, None, None]:
     """
     Split text into chunks of a specified maximum
     length with a specified overlap between chunks.
-
     :param content: The input text to be split into chunks
 
     :param max_length: The maximum length of each chunk,
     default is 4000 (about 1k token)
@@ -70,9 +68,14 @@ def split_file(
     while start < content_length:
         end = start + max_length
         if end + overlap < content_length:
-            chunk = content[start : end + overlap]
+            chunk = content[start : end + overlap - 1]
         else:
             chunk = content[start:content_length]
+
+            # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed
+            if len(chunk) <= overlap:
+                break
+
         yield chunk
         start += max_length - overlap
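The off-by-one change and the new break guard above are easiest to see with small numbers. Here is a standalone copy of the patched generator run on a toy input, for illustration only:

```python
from typing import Generator


def split_file(
    content: str, max_length: int = 4000, overlap: int = 0
) -> Generator[str, None, None]:
    # standalone copy of the patched logic, for experimentation
    start = 0
    content_length = len(content)
    while start < content_length:
        end = start + max_length
        if end + overlap < content_length:
            chunk = content[start : end + overlap - 1]
        else:
            chunk = content[start:content_length]
            # the short tail was already covered by the previous chunk's overlap
            if len(chunk) <= overlap:
                break
        yield chunk
        start += max_length - overlap


text = "abcdefghij" * 3  # 30 characters
for i, chunk in enumerate(split_file(text, max_length=10, overlap=3)):
    print(i, len(chunk), repr(chunk))
# The window advances by max_length - overlap = 7 characters, so consecutive
# chunks share context; the 2-character tail at offset 28 is skipped because
# the previous chunk (content[21:30]) already included it.
```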
From bd670b4db379776f034c5d956379fa8f1a698425 Mon Sep 17 00:00:00 2001
From: Steve Byerly
Date: Mon, 17 Apr 2023 02:24:14 +0000
Subject: [PATCH 87/92] whitespace

---
 autogpt/commands/file_operations.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index 00ae466dbcf9..073b13b0ee9b 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -49,6 +49,7 @@ def log_operation(operation: str, filename: str) -> None:
     append_to_file(LOG_FILE, log_entry, shouldLog = False)
 
+
 def split_file(
     content: str, max_length: int = 4000, overlap: int = 0
 ) -> Generator[str, None, None]:

From 6ac9ce614acda4a0103962ef89b0d23c0a3d26aa Mon Sep 17 00:00:00 2001
From: Steve Byerly
Date: Mon, 17 Apr 2023 02:29:51 +0000
Subject: [PATCH 88/92] whitespace

---
 autogpt/commands/file_operations.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index 073b13b0ee9b..3420bd842bb6 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -56,6 +56,7 @@ def split_file(
     """
     Split text into chunks of a specified maximum
     length with a specified overlap between chunks.
+
     :param content: The input text to be split into chunks

From 8637b8b61ba18f74e88bee822222b166f17e7773 Mon Sep 17 00:00:00 2001
From: Steve Byerly
Date: Mon, 17 Apr 2023 02:30:24 +0000
Subject: [PATCH 89/92] whitespace

---
 autogpt/commands/file_operations.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogpt/commands/file_operations.py b/autogpt/commands/file_operations.py
index 3420bd842bb6..9dcf819480c2 100644
--- a/autogpt/commands/file_operations.py
+++ b/autogpt/commands/file_operations.py
@@ -56,7 +56,7 @@ def split_file(
     """
     Split text into chunks of a specified maximum
     length with a specified overlap between chunks.
-
+
     :param content: The input text to be split into chunks
 
     :param max_length: The maximum length of each chunk,

From f2baa0872beb13cf5dfb13f0ab05a64640510d3f Mon Sep 17 00:00:00 2001
From: jingxing
Date: Mon, 17 Apr 2023 14:24:10 +0800
Subject: [PATCH 90/92] config.py format

---
 autogpt/config/config.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogpt/config/config.py b/autogpt/config/config.py
index a8b48b4929d3..e3ccc6a19067 100644
--- a/autogpt/config/config.py
+++ b/autogpt/config/config.py
@@ -66,7 +66,7 @@ def __init__(self) -> None:
 
         self.pinecone_api_key = os.getenv("PINECONE_API_KEY")
         self.pinecone_region = os.getenv("PINECONE_ENV")
-        self.weaviate_host = os.getenv("WEAVIATE_HOST") 
+        self.weaviate_host = os.getenv("WEAVIATE_HOST")
         self.weaviate_port = os.getenv("WEAVIATE_PORT")
         self.weaviate_protocol = os.getenv("WEAVIATE_PROTOCOL", "http")
         self.weaviate_username = os.getenv("WEAVIATE_USERNAME", None)

From ef7b417105da16a8a2fc89eea0309a42fdd8d7b2 Mon Sep 17 00:00:00 2001
From: Reinier van der Leer
Date: Mon, 17 Apr 2023 18:11:34 +0200
Subject: [PATCH 91/92] fix(pr-label): mitigate excessive concurrent runs

---
 .github/workflows/pr-label.yml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/.github/workflows/pr-label.yml b/.github/workflows/pr-label.yml
index 63696e42d0ff..a91141315a47 100644
--- a/.github/workflows/pr-label.yml
+++ b/.github/workflows/pr-label.yml
@@ -7,6 +7,9 @@ on:
   # In `pull_request` we wouldn't be able to change labels of fork PRs
   pull_request_target:
     types: [opened, synchronize]
+concurrency:
+  group: ${{ github.event_name == 'pull_request_target' && format('pr-label-{0}', github.event.pull_request.number) || '' }}
+  cancel-in-progress: ${{ github.event_name == 'pull_request_target' || '' }}
 
 jobs:
   conflicts:

From 3b37c89d881e5f5a290158f4528261876f589026 Mon Sep 17 00:00:00 2001
From: Reinier van der Leer
Date: Mon, 17 Apr 2023 19:15:20 +0200
Subject: [PATCH 92/92] fix(pr-label): concurrency group cannot be empty

---
 .github/workflows/pr-label.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/pr-label.yml b/.github/workflows/pr-label.yml
index a91141315a47..92c5a66b7285 100644
--- a/.github/workflows/pr-label.yml
+++ b/.github/workflows/pr-label.yml
@@ -8,8 +8,8 @@ on:
   pull_request_target:
     types: [opened, synchronize]
 concurrency:
-  group: ${{ github.event_name == 'pull_request_target' && format('pr-label-{0}', github.event.pull_request.number) || '' }}
-  cancel-in-progress: ${{ github.event_name == 'pull_request_target' || '' }}
+  group: ${{ format('pr-label-{0}', github.event.pull_request.number || github.sha) }}
+  cancel-in-progress: true
 
 jobs:
   conflicts: