This is a command-line interface (CLI) for making requests to OpenAI's chat models (such as gpt-3.5-turbo). The CLI is implemented in Ruby and uses the net/http library from Ruby's standard library to interface with the OpenAI API.
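For orientation, here is a minimal sketch (not the project's actual code) of how such a request can be made with net/http, assuming OPENAI_API_KEY is set in the environment:

# Minimal sketch of a chat completion request with net/http.
require "net/http"
require "json"
require "uri"

uri = URI("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri)
request["Content-Type"] = "application/json"
request["Authorization"] = "Bearer #{ENV["OPENAI_API_KEY"]}"
request.body = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful agent." },
    { role: "user", content: "What is 1+1?" }
  ]
}.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
  http.request(request)
end

# Print the text of the first completion.
puts JSON.parse(response.body).dig("choices", 0, "message", "content")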
% ./openai.rb --help
Usage: openai [options]
ChatGPT request parameters:
-m, --model MODEL Set the OpenAI model name (default: OPENAI_MODEL from env or gpt-3.5-turbo)
-s, --system-prompt PROMPT Set the system prompt
-S, --system-prompt-file FILE Set the system prompt from the contents of FILE
-u, --user-prompt PROMPT Set the user prompt
-U, --user-prompt-file FILE Set the user prompt from the contents of FILE
-t, --max-tokens TOKENS Set the maximum number of tokens to generate (default: 1000)
-n, --n N Set the number of completions to generate (default: 1)
--stop STOP Set the stop sequence
-p, --temperature TEMPERATURE Set the sampling temperature (default: 0.5)
Specifying all parameters from a file
-r, --read-options FILE Read GPT parameters from a JSON file
-w, --write-options FILE Save the GPT parameters to a JSON file
General options:
-k, --api-key KEY Set the OpenAI API key (default: OPENAI_API_KEY from env or nil)
-j, --json Return the full raw response as JSON
-l, --stream, --live Stream the response in real time
-h, --help Prints this help
To use the CLI, you must have an OpenAI API key. You can set the key in the OPENAI_API_KEY environment variable, or provide it on the command line with the -k or --api-key flag.
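For example, the key can be passed directly on the command line (shown here with a placeholder value):
% ruby openai.rb -k <your-api-key> -s "You are a helpful agent." -u "What is 1+1?"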
You must also provide a system prompt and a user prompt. The system prompt is the initial text that guides the model's behavior, and the user prompt is the text that the user would send to the agent. You can provide the prompts directly as strings with the -s or --system-prompt and -u or --user-prompt flags, respectively, or read them from files with the -S or --system-prompt-file and -U or --user-prompt-file flags.
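For example, with the prompts stored in files (the file names here are illustrative):
% echo "You are a helpful agent that answers exclusively in Japanese." > system.txt
% echo "What is 1+1?" > question.txt
% ruby openai.rb -S system.txt -U question.txt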
You can specify additional parameters to control the behavior of the model, such as the maximum number of tokens to generate (-t or --max-tokens), the number of completions to generate (-n or --n), the stop sequence (--stop), and the temperature for sampling (-p or --temperature).
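For example, a request with a tighter token limit, two completions, and a higher sampling temperature might look like this (the values are illustrative):
% ruby openai.rb -s "You are a helpful agent." -u "Suggest a name for a CLI tool." -t 100 -n 2 -p 0.9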
By default, the CLI returns the generated text of the first completion. You can use the -j or --json flag to return the full raw response as JSON, or the -l or --stream flag to stream the response in real time.
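The help output above also lists -w/--write-options and -r/--read-options, which save a set of request parameters to a JSON file and read them back later. For example (the file name is illustrative, and the exact JSON keys written are determined by the script):
% ruby openai.rb -s "You are a helpful agent." -u "What is 1+1?" -w params.json
% ruby openai.rb -r params.json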
% export OPENAI_API_KEY=<your-api-key>
% ruby openai.rb -s "You are a helpful agent that answers exclusively in Japanese." -u "What is 1+1?"
1+1は2です。
% ruby openai.rb -s "You are a helpful agent that writes Ruby code." -u "Provide me a function that calculates the sum of two integers." --json
{
"id": "chatcmpl-70HBHGtVMbI8MEi6a9fTQoSKC0b4b",
"object": "chat.completion",
"created": 1680300399,
"model": "gpt-3.5-turbo-0301",
"usage": {
"prompt_tokens": 35,
"completion_tokens": 97,
"total_tokens": 132
},
"choices": [
{
"message": {
"role": "assistant",
"content": "Sure, here's a simple function that takes two integers as arguments and returns their sum:\n\n```ruby\ndef sum(a, b)\n return a + b\nend\n```\n\nYou can call this function by passing in two integers like this:\n\n```ruby\nputs sum(2, 3) # Output: 5\nputs sum(-5, 10) # Output: 5\n```\n\nThis will output the sum of the two integers that you pass in."
},
"finish_reason": "stop",
"index": 0
}
]
}
# In the following example, the response is streamed in real time as it is generated by the OpenAI API.
# You may also combine this with --json, which streams the raw JSON messages in real time.
% ruby openai.rb --stream -s "You are a helpful agent that answers questions as concisely as possible, in language written for a 5th grader." -u "why does the president live in the white house"
The President lives in the White House because it is the official residence of the President of the United States. The White House is also where the President works and meets with important people from around the world. It is a very important and historic building that represents the power and leadership of the United States.
This project and its contributors are not affiliated with or endorsed by OpenAI in any way. The use of OpenAI's name or any reference to OpenAI in the context of this project is solely for the purpose of providing information and does not imply any endorsement or affiliation. The opinions, recommendations, or advice expressed by this project or its contributors are solely their own and do not represent the views or opinions of OpenAI. This project and its contributors acknowledge that "OpenAI" is a registered trademark of OpenAI Inc. and use of the name or logo is subject to OpenAI's trademark guidelines.
This CLI is released under the MIT License. See the LICENSE file for more details.