This package is no longer supported or maintained.
The Tectalic OpenAI REST API Client is a package that provides a convenient and straightforward way to interact with the OpenAI API from your PHP application.
Supports ChatGPT, GPT-4, GPT-3.5, GPT-3, Codex, DALL·E, Whisper, Fine-Tuning, Embeddings and Moderation models, with fully typed Data Transfer Objects (DTOs) for all requests and responses and IDE autocomplete support.
More information is available from https://tectalic.com/apis/openai.
This is an unofficial package and has no affiliations with OpenAI.
Integrating OpenAI into your application is now as simple as a few lines of code.
$openaiClient = \Tectalic\OpenAi\Manager::build(
    new \GuzzleHttp\Client(),
    new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY'))
);

/** @var \Tectalic\OpenAi\Models\ChatCompletions\CreateResponse $response */
$response = $openaiClient->chatCompletions()->create(
    new \Tectalic\OpenAi\Models\ChatCompletions\CreateRequest([
        'model' => 'gpt-4',
        'messages' => [
            [
                'role' => 'user',
                'content' => 'Will using a well designed and supported third party package save time?'
            ],
        ],
    ])
)->toModel();

echo $response->choices[0]->message->content;
// Yes, using a well-designed and supported third-party package can save time during software development.
// It allows you to focus on the core functionality of your application without having to reinvent the wheel or spend resources developing the same functionality from scratch.
// A good third-party package can provide reliability, efficiency, and continued support with updates and bug fixes, which in turn facilitates faster development and a more stable final product.
// Additionally, using widely adopted packages can also increase the chances of compatibility with other software components and make it easier for other developers to understand and work with your code.
Learn more about chat completion.
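The messages array in the example above holds the entire conversation. The sketch below (plain PHP, no API call) shows how a multi-turn conversation is represented: each turn is appended to the same array that is passed to CreateRequest. The assistant reply shown is a hypothetical canned value, not real model output.

```php
<?php
// A plain-PHP sketch (no API call) of a multi-turn conversation payload.
// Role names follow the Chat Completions message format.
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Will a third party package save time?'],
];

// After each response, append the assistant's reply, then the next user turn.
$messages[] = ['role' => 'assistant', 'content' => 'Yes, it usually will.'];
$messages[] = ['role' => 'user', 'content' => 'Which one do you recommend?'];

echo count($messages);     // 4
echo "\n";
echo $messages[2]['role']; // assistant
```

Sending the accumulated array on each request is what gives the model its conversational context; the API itself is stateless.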
This handler supports both the GPT-3.5 and GPT-4 models:
- Supported GPT-3.5 models include gpt-3.5-turbo and more.
- Supported GPT-4 models include gpt-4 and more.
Note: GPT-4 is currently in a limited beta and is only accessible to those who have been granted access. Please see here for details and instructions on how to join the waitlist.
If you receive a 404 error when attempting to use GPT-4, then your OpenAI account has not been granted access.
The following example uses the gpt-3.5-turbo-0613 model to demonstrate function calling. It converts natural language into a function call, which can then be executed within your application.
$openaiClient = \Tectalic\OpenAi\Manager::build(
    new \GuzzleHttp\Client(),
    new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY'))
);

/** @var \Tectalic\OpenAi\Models\ChatCompletions\CreateResponse $response */
$response = $openaiClient->chatCompletions()->create(
    new \Tectalic\OpenAi\Models\ChatCompletions\CreateRequest([
        'model' => 'gpt-3.5-turbo-0613',
        'messages' => [
            ['role' => 'user', 'content' => 'What\'s the weather like in Boston?']
        ],
        'functions' => [
            [
                'name' => 'get_current_weather',
                'description' => 'Get the current weather in a given location',
                'parameters' => new \Tectalic\OpenAi\Models\ChatCompletions\CreateRequestFunctionsItemParameters(
                    [
                        'type' => 'object',
                        'properties' => [
                            'location' => [
                                'type' => 'string',
                                'description' => 'The worldwide city and state, e.g. San Francisco, CA',
                            ],
                            'format' => [
                                'type' => 'string',
                                'description' => 'The temperature unit to use. Infer this from the user\'s location.',
                                'enum' => ['celsius', 'fahrenheit'],
                            ],
                            'num_days' => [
                                'type' => 'integer',
                                'description' => 'The number of days to forecast',
                            ],
                        ],
                        'required' => ['location', 'format', 'num_days'],
                    ]
                )
            ]
        ],
        'function_call' => 'auto',
    ])
)->toModel();

$params = json_decode($response->choices[0]->message->function_call->arguments, true);
var_dump($params);
// array(3) {
//   'location' =>
//   string(6) "Boston"
//   'format' =>
//   string(7) "celsius"
//   'num_days' =>
//   int(1)
// }
Learn more about function calling.
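Once the arguments are decoded, your application has to invoke the matching local function itself. The following plain-PHP sketch (no API call) shows one way to do that; $functionCall mimics the shape of $response->choices[0]->message->function_call, and get_current_weather is a hypothetical local implementation.

```php
<?php
// A plain-PHP sketch of dispatching the model's function call locally.
// $functionCall mimics the object returned in the chat completion response.
$functionCall = (object) [
    'name'      => 'get_current_weather',
    'arguments' => '{"location":"Boston, MA","format":"celsius","num_days":1}',
];

function get_current_weather(string $location, string $format, int $num_days): string
{
    // Call your real weather service here; a canned answer keeps this runnable.
    return sprintf('%d-day forecast for %s in %s', $num_days, $location, $format);
}

$args = json_decode($functionCall->arguments, true);

// Only dispatch to functions you declared in the request; never call
// arbitrary names supplied by the model.
if ($functionCall->name === 'get_current_weather') {
    echo get_current_weather($args['location'], $args['format'], $args['num_days']);
}
// 1-day forecast for Boston, MA in celsius
```

The explicit name check matters because the model's output is untrusted input: an allow-list of known functions avoids ever executing an unexpected call.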
$openaiClient = \Tectalic\OpenAi\Manager::build(new \GuzzleHttp\Client(), new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY')));
/** @var \Tectalic\OpenAi\Models\Completions\CreateResponse $response */
$response = $openaiClient->completions()->create(
    new \Tectalic\OpenAi\Models\Completions\CreateRequest([
        'model' => 'text-davinci-003',
        'prompt' => 'Will using a third party package save time?',
    ])
)->toModel();
echo $response->choices[0]->text;
// Using a third party package can save time because you don't have to write the code yourself.
This handler supports all GPT-3 models, including text-davinci-003, text-davinci-002 and more.
Learn more about text completion.
$openaiClient = \Tectalic\OpenAi\Manager::build(new \GuzzleHttp\Client(), new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY')));
/** @var \Tectalic\OpenAi\Models\Completions\CreateResponse $response */
$response = $openaiClient->completions()->create(
    new \Tectalic\OpenAi\Models\Completions\CreateRequest([
        'model' => 'code-davinci-002',
        'prompt' => "// PHP 8\n// A variable that saves the current date and time",
        'max_tokens' => 256,
        'stop' => ";",
    ])
)->toModel();
echo $response->choices[0]->text;
// $now = date("Y-m-d G:i:s")
Supported Codex models include code-davinci-002 and code-cushman-001.
Learn more about code completion.
$openaiClient = \Tectalic\OpenAi\Manager::build(new \GuzzleHttp\Client(), new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY')));
/** @var \Tectalic\OpenAi\Models\ImagesGenerations\CreateResponse $response */
$response = $openaiClient->imagesGenerations()->create(
    new \Tectalic\OpenAi\Models\ImagesGenerations\CreateRequest([
        'prompt' => 'A cute baby sea otter wearing a hat',
        'size' => '256x256',
        'n' => 5,
    ])
)->toModel();

foreach ($response->data as $item) {
    var_dump($item->url);
}
Learn more about image generation.
$openaiClient = \Tectalic\OpenAi\Manager::build(new \GuzzleHttp\Client(), new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY')));
/** @var \Tectalic\OpenAi\Models\AudioTranscriptions\CreateResponse $response */
$response = $openaiClient->audioTranscriptions()->create(
    new \Tectalic\OpenAi\Models\AudioTranscriptions\CreateRequest([
        'file' => '/full/path/to/audio/file.mp3',
        'model' => 'whisper-1',
    ])
)->toModel();
echo $response->text;
// Your audio transcript in your source language...
Supported Whisper models include whisper-1.
Learn more about speech to text, including the 50+ supported languages.
$openaiClient = \Tectalic\OpenAi\Manager::build(new \GuzzleHttp\Client(), new \Tectalic\OpenAi\Authentication(getenv('OPENAI_API_KEY')));
/** @var \Tectalic\OpenAi\Models\AudioTranslations\CreateResponse $response */
$response = $openaiClient->audioTranslations()->create(
    new \Tectalic\OpenAi\Models\AudioTranslations\CreateRequest([
        'file' => '/full/path/to/audio/file.mp3',
        'model' => 'whisper-1',
    ])
)->toModel();
echo $response->text;
// Your audio transcript in English...
Supported Whisper models include whisper-1.
Learn more about speech to text, including the 50+ supported languages.
Need help getting started? See our guide: how to build an app using the OpenAI API.
- PHP version 7.2.5 or newer (including PHP 8.0 and 8.1)
- PHP JSON extension installed if using PHP 7.x. As of PHP 8.0, this became a core extension and is therefore always enabled.
- A PSR-18 compatible HTTP client such as 'Guzzle' or the 'Symfony HTTP Client'.
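The requirements above can be verified programmatically. The snippet below is a sketch; adapt it to your deployment process. The constant 70205 is PHP 7.2.5 expressed in PHP_VERSION_ID form.

```php
<?php
// Quick environment check against the requirements listed above.
if (PHP_VERSION_ID < 70205) {
    echo "PHP 7.2.5 or newer is required; you have " . PHP_VERSION . "\n";
} elseif (!extension_loaded('json')) {
    echo "The JSON extension is required on PHP 7.x\n";
} else {
    echo "Environment OK\n";
}
```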
Install the package into your project:
composer require tectalic/openai
After installing the Tectalic OpenAI REST API Client package into your project, ensure you also have a compatible PSR-18 HTTP client such as 'Guzzle' or the Symfony 'HTTP Client'.
You can use the following code sample and customize it to suit your application.
// Load your project's composer autoloader (if you aren't already doing so).
require_once(__DIR__ . '/vendor/autoload.php');
use Symfony\Component\HttpClient\Psr18Client;
use Tectalic\OpenAi\Authentication;
use Tectalic\OpenAi\Client;
use Tectalic\OpenAi\Manager;
// Build a Tectalic OpenAI REST API Client globally.
$auth = new Authentication(getenv('OPENAI_API_KEY'));
$httpClient = new Psr18Client();
Manager::build($httpClient, $auth);
// or
// Build a Tectalic OpenAI REST API Client manually.
$auth = new Authentication(getenv('OPENAI_API_KEY'));
$httpClient = new Psr18Client();
$client = new Client($httpClient, $auth, Manager::BASE_URI);
To authenticate your API requests, you will need to provide an Authentication ($auth) object when calling Manager::build().
Authentication to the OpenAI API is by HTTP Bearer authentication.
Please see the OpenAI API documentation for more details on obtaining your authentication credentials.
In the Usage code above, customize the Authentication constructor to your needs. For example, you will likely need to add an OPENAI_API_KEY environment variable to your system.
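One way to supply the key (a sketch) is to export it in the shell that runs your PHP process. The value below is a placeholder, not a real credential.

```shell
# Export the key so that getenv('OPENAI_API_KEY') in the PHP examples
# above can read it. Replace the placeholder with your real key.
export OPENAI_API_KEY="sk-your-key-here"
```

For long-running applications, prefer your platform's secrets manager or a .env loader over an ad-hoc export.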
The primary class you will interact with is the Client class (Tectalic\OpenAi\Client).
This Client class also contains the helper methods that let you quickly access the 19 API Handlers.
Please see below for a complete list of supported handlers and methods.
This package supports 28 API Methods, which are grouped into 19 API Handlers.
See the table below for a full list of API Handlers and Methods.
| API Handler Class and Method Name | Description | API Verb and URL |
|---|---|---|
| AudioTranscriptions::create() | Transcribes audio into the input language. | POST /audio/transcriptions |
| AudioTranslations::create() | Translates audio into English. | POST /audio/translations |
| ChatCompletions::create() | Creates a model response for the given chat conversation. | POST /chat/completions |
| Completions::create() | Creates a completion for the provided prompt and parameters. | POST /completions |
| Edits::create() | | POST /edits |
| Embeddings::create() | Creates an embedding vector representing the input text. | POST /embeddings |
| Files::list() | Returns a list of files that belong to the user's organization. | GET /files |
| Files::create() | Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit. | POST /files |
| Files::retrieve() | Returns information about a specific file. | GET /files/{file_id} |
| Files::delete() | Delete a file. | DELETE /files/{file_id} |
| FilesContent::download() | Returns the contents of the specified file. | GET /files/{file_id}/content |
| FineTunes::list() | | GET /fine-tunes |
| FineTunes::create() | Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. Learn more about fine-tuning. | POST /fine-tunes |
| FineTunes::retrieve() | Learn more about fine-tuning. | GET /fine-tunes/{fine_tune_id} |
| FineTunesCancel::cancelFineTune() | | POST /fine-tunes/{fine_tune_id}/cancel |
| FineTunesEvents::listFineTune() | | GET /fine-tunes/{fine_tune_id}/events |
| FineTuningJobs::listPaginated() | List your organization's fine-tuning jobs. | GET /fine_tuning/jobs |
| FineTuningJobs::create() | Creates a job that fine-tunes a specified model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. Learn more about fine-tuning. | POST /fine_tuning/jobs |
| FineTuningJobs::retrieve() | Get info about a fine-tuning job. Learn more about fine-tuning. | GET /fine_tuning/jobs/{fine_tuning_job_id} |
| FineTuningJobsCancel::fineTuning() | Immediately cancel a fine-tune job. | POST /fine_tuning/jobs/{fine_tuning_job_id}/cancel |
| FineTuningJobsEvents::listFineTuning() | Get status updates for a fine-tuning job. | GET /fine_tuning/jobs/{fine_tuning_job_id}/events |
| ImagesEdits::createImage() | Creates an edited or extended image given an original image and a prompt. | POST /images/edits |
| ImagesGenerations::create() | Creates an image given a prompt. | POST /images/generations |
| ImagesVariations::createImage() | Creates a variation of a given image. | POST /images/variations |
| Models::list() | Lists the currently available models, and provides basic information about each one such as the owner and availability. | GET /models |
| Models::retrieve() | Retrieves a model instance, providing basic information about the model such as the owner and permissioning. | GET /models/{model} |
| Models::delete() | Delete a fine-tuned model. You must have the Owner role in your organization to delete a model. | DELETE /models/{model} |
| Moderations::create() | Classifies if text violates OpenAI's Content Policy. | POST /moderations |
Deprecated method(s) are listed with strike-through formatting. Please do not use these methods, as they will be removed in a future release.
There are two ways to make a request to the nominated API Handler and API Method:
If you built the client to be accessible globally, you can use the relevant API Handler Class directly:
use Tectalic\OpenAi\Handlers\AudioTranscriptions;
(new AudioTranscriptions())->create();
Alternatively, you can access all API Handlers through the corresponding helper methods on the Client class:
$client->audioTranscriptions()->create();
Once you have made a request using one of the two methods outlined above, the next step is to access the response.
You can access the response in different ways. Please choose your preferred one.
Model responses are Data Transfer Object (DTO) style PHP classes, with public properties for each API property.
They offer a structured way of retrieving the response from an API request.
All Response Models are an instance of Tectalic\OpenAi\Models\AbstractModel or Tectalic\OpenAi\Models\AbstractModelCollection.
After performing the request, chain the fluent ->toModel() method onto the API Method call:
use Tectalic\OpenAi\Handlers\AudioTranscriptions;
$model = (new AudioTranscriptions())->create()->toModel();
Each API Method's toModel() call will return the appropriate Model class type for the API Method you have just called.
After performing the request, chain the fluent ->toArray() method onto the API Method call:
use Tectalic\OpenAi\Handlers\AudioTranscriptions;
$array = (new AudioTranscriptions())->create()->toArray();
In the resulting associative array, the array keys will match the names of the public properties in the relevant Model class.
If you need to access the raw response or inspect the HTTP headers, chain the fluent ->getResponse() method onto the API Method call. It will return a Psr\Http\Message\ResponseInterface:
use Tectalic\OpenAi\Handlers\AudioTranscriptions;
$response = (new AudioTranscriptions())->create()->getResponse();
When performing requests with the Tectalic OpenAI REST API Client, specific scenarios will cause a Tectalic\OpenAi\ClientException to be thrown. Please see below for details.
A \LogicException will be thrown if the Manager::build() function is called multiple times, or if Manager::access() is called before calling Manager::build().
The Tectalic OpenAI REST API Client depends on a PSR-18 compatible HTTP client, and that HTTP client should not throw an exception for unsuccessful HTTP response codes.
An unsuccessful response code is classified as one that is not in the range 200-299 (inclusive). Examples of unsuccessful response codes include:
- Informational responses (100-199)
- Redirection responses (300-399)
- Client error responses (400-499)
- Server error responses (500-599)
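The 200-299 rule above can be written as a one-line predicate. This helper is purely illustrative (its name is not part of the package's API); the package applies the same classification internally when deciding whether toModel() should throw.

```php
<?php
// A tiny helper mirroring the rule above: only 200-299 counts as success.
function isSuccessful(int $statusCode): bool
{
    return $statusCode >= 200 && $statusCode <= 299;
}

var_dump(isSuccessful(200)); // bool(true)
var_dump(isSuccessful(301)); // bool(false)
var_dump(isSuccessful(404)); // bool(false)
```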
If an unsuccessful response code does occur:
- your HTTP Client will not throw an Exception.
- the API Handler's toModel() method will throw a ClientException.
- the API Handler's toArray() method will return the response body and not throw a ClientException.
- the API Handler's getResponse() method will return the raw response and not throw a ClientException.
Below is an example of how you may wish to use a try/catch block when performing a request so that you can detect and handle unexpected errors.
use Tectalic\OpenAi\Authentication;
use Tectalic\OpenAi\ClientException;
use Tectalic\OpenAi\Handlers\AudioTranscriptions;
use Tectalic\OpenAi\Manager;

// Build a Tectalic OpenAI REST API Client globally.
$auth = new Authentication('token');
$httpClient = new \GuzzleHttp\Client(); // Any PSR-18 compatible client will do.
Manager::build($httpClient, $auth);
$handler = new AudioTranscriptions();

// Perform a request
try {
    $model = $handler->create()->toModel();
    // Do something with the response model...
} catch (ClientException $e) {
    // Error response received. Retrieve the HTTP response code and response body.
    $responseBody = $handler->toArray();
    $responseCode = $handler->getResponse()->getStatusCode();
    // Handle the error...
}
If your HTTP client of choice throws an exception other than ClientException, the Tectalic OpenAI REST API Client and its API Handler classes will let these exceptions bubble up.
Consult your HTTP client's documentation for more details on exception handling.
The Tectalic OpenAI REST API Client package includes several types of automated PHPUnit tests to verify the correct operation:
- Unit Tests
- Integration Tests
To run these tests, you will need to have installed the Tectalic OpenAI REST API Client package with its dev dependencies (i.e. not using the --no-dev flag when running composer).
These PHPUnit tests are designed to:
- confirm that each API Method assembles a valid request that matches the OpenAI API OpenAPI specification.
- verify the behaviour of other parts of the package, such as the Client and Manager classes.
The unit tests can be run using the following command, executed from this package's root directory.
composer test:unit
Unit tests do not perform any real requests against the OpenAI API.
Unit tests are located in the tests/Unit directory.
Integration tests are located in the tests/Integration directory.
These PHPUnit tests are designed to confirm that each API Method parses a valid response, according to the OpenAI API OpenAPI specification. Out of the box, the integration tests are designed to work with the Prism Mock Server.
Make sure Prism is installed. Please see the Prism documentation for details on how to install Prism.
Once Prism is installed, you can run Prism and the integration tests side by side in separate terminal windows, or use the following commands, which need to be run from this package's root directory.
echo "> Starting Prism server"
prism mock tests/openapi.yaml >/dev/null 2>&1 &
PRISM_PID=$!
sleep 2
echo " => Started"
composer test:integration
kill $PRISM_PID
Those commands will start the Prism mock server, then run the integration tests, and then stop the Prism mock server when the tests are completed.
In this case the integration tests do not perform any real requests against the OpenAI API.
By setting the OPENAI_CLIENT_TEST_BASE_URI environment variable, you can set a different API endpoint target for the integration tests.
For example, instead of using Prism, you can use a different mocking/staging/test server of your choice, or you can use the OpenAI API's live endpoints.
Do not forget to set the appropriate credentials in the OPENAI_CLIENT_TEST_AUTH_USERNAME and OPENAI_CLIENT_TEST_AUTH_PASSWORD environment variables.
After your setup is complete, simply run the following command.
composer test:integration
We do not recommend running integration tests against the live OpenAI API endpoints. This is because the tests will send example data to all endpoints, which can result in new data being created, or existing data being deleted.
If you are writing your own tests, you will likely need to mock the responses from the OpenAI API.
One way of doing this is to install the php-http/mock-client package into your project, and then use the \Http\Mock\Client class (instead of a real PSR-18 client) when instantiating the Tectalic OpenAI REST API Client.
This allows you to mock the responses from the OpenAI API, rather than performing real requests.
Please see the Mock Client documentation for details.
If you have any questions or feedback, please use the discussion board.
This software is copyright (c) 2022-present Tectalic.
For copyright and license information, please view the LICENSE file.