release: 0.1.0-alpha.9 #47

Merged · 6 commits · Nov 11, 2024
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "0.1.0-alpha.8"
+  ".": "0.1.0-alpha.9"
 }
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1 +1 @@
-configured_endpoints: 13
+configured_endpoints: 14
1 change: 1 addition & 0 deletions Brewfile
@@ -0,0 +1 @@
+brew "go"
16 changes: 16 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,21 @@
 # Changelog

+## 0.1.0-alpha.9 (2024-11-11)
+
+Full Changelog: [v0.1.0-alpha.8...v0.1.0-alpha.9](https://github.com/openlayer-ai/openlayer-go/compare/v0.1.0-alpha.8...v0.1.0-alpha.9)
+
+### Features
+
+* **api:** manual updates ([#49](https://github.com/openlayer-ai/openlayer-go/issues/49)) ([0c75483](https://github.com/openlayer-ai/openlayer-go/commit/0c754834bb8e3fbd493a4a656d785547c64bb532))
+* **api:** update via SDK Studio ([#46](https://github.com/openlayer-ai/openlayer-go/issues/46)) ([0c269b0](https://github.com/openlayer-ai/openlayer-go/commit/0c269b042bba8934c5973214d2d665dc1cf362fc))
+* **api:** update via SDK Studio ([#48](https://github.com/openlayer-ai/openlayer-go/issues/48)) ([27a5a99](https://github.com/openlayer-ai/openlayer-go/commit/27a5a9977c7741bbfcfe7a62d8e32a1a512cabac))
+
+
+### Chores
+
+* custom code changes ([#50](https://github.com/openlayer-ai/openlayer-go/issues/50)) ([3b63bd4](https://github.com/openlayer-ai/openlayer-go/commit/3b63bd474c0ac3d8404d8ed6196007f6eea1bfd0))
+* rebuild project due to codegen change ([#51](https://github.com/openlayer-ai/openlayer-go/issues/51)) ([43b0dae](https://github.com/openlayer-ai/openlayer-go/commit/43b0dae82426763900cdbd37eccf2433c0d093b6))
+
 ## 0.1.0-alpha.8 (2024-09-24)

 Full Changelog: [v0.1.0-alpha.7...v0.1.0-alpha.8](https://github.com/openlayer-ai/openlayer-go/compare/v0.1.0-alpha.7...v0.1.0-alpha.8)
41 changes: 24 additions & 17 deletions CONTRIBUTING.md
@@ -1,20 +1,27 @@
 ## Setting up the environment

-### Install Go 1.18+
+To set up the repository, run:

-Install go by following relevant directions [here](https://go.dev/doc/install).
+```sh
+$ ./scripts/bootstrap
+$ ./scripts/build
+```
+
+This will install all the required dependencies and build the SDK.
+
+You can also [install go 1.18+ manually](https://go.dev/doc/install).

 ## Modifying/Adding code

-Most of the SDK is generated code, and any modified code will be overridden on the next generation. The
-`examples/` directory is an exception and will never be overridden.
+Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
+result in merge conflicts between manual patches and changes from the generator. The generator will never
+modify the contents of the `lib/` and `examples/` directories.

 ## Adding and running examples

-All files in the `examples/` directory are not modified by the Stainless generator and can be freely edited or
-added to.
+All files in the `examples/` directory are not modified by the generator and can be freely edited or added to.

-```bash
+```go
 # add an example to examples/<your-example>/main.go

 package main
@@ -24,36 +31,36 @@ func main() {
 }
 ```

-```bash
-go run ./examples/<your-example>
+```sh
+$ go run ./examples/<your-example>
 ```

 ## Using the repository from source

 To use a local version of this library from source in another project, edit the `go.mod` with a replace
 directive. This can be done through the CLI with the following:

-```bash
-go mod edit -replace github.com/openlayer-ai/openlayer-go=/path/to/openlayer-go
+```sh
+$ go mod edit -replace github.com/openlayer-ai/openlayer-go=/path/to/openlayer-go
 ```

 ## Running tests

 Most tests require you to [set up a mock server](https://github.com/stoplightio/prism) against the OpenAPI spec to run the tests.

-```bash
+```sh
 # you will need npm installed
-npx prism mock path/to/your/openapi.yml
+$ npx prism mock path/to/your/openapi.yml
 ```

-```bash
-go test ./...
+```sh
+$ ./scripts/test
 ```

 ## Formatting

 This library uses the standard gofmt code formatter:

-```bash
-gofmt -s -w .
+```sh
+$ ./scripts/format
 ```
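To illustrate the replace directive above: after running the `go mod edit -replace` command, the consuming project's `go.mod` would contain a `replace` line like the one below. The module path `example.com/myapp` and the local checkout path are hypothetical placeholders.

```
module example.com/myapp

go 1.18

require github.com/openlayer-ai/openlayer-go v0.1.0-alpha.9

// Point the dependency at a local checkout instead of the published module.
replace github.com/openlayer-ai/openlayer-go => /path/to/openlayer-go
```

Remember to remove the `replace` line before publishing, or builds outside your machine will fail.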
10 changes: 7 additions & 3 deletions README.md
@@ -24,7 +24,7 @@ Or to pin the version:
 <!-- x-release-please-start-version -->

 ```sh
-go get -u 'github.com/openlayer-ai/openlayer-go@v0.1.0-alpha.8'
+go get -u 'github.com/openlayer-ai/openlayer-go@v0.1.0-alpha.9'
 ```

 <!-- x-release-please-end -->
@@ -52,7 +52,7 @@ func main() {
 	client := openlayer.NewClient(
 		option.WithAPIKey("My API Key"), // defaults to os.LookupEnv("OPENLAYER_API_KEY")
 	)
-	inferencePipelineDataStreamResponse, err := client.InferencePipelines.Data.Stream(
+	response, err := client.InferencePipelines.Data.Stream(
 		context.TODO(),
 		"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
 		openlayer.InferencePipelineDataStreamParams{
@@ -75,7 +75,7 @@ func main() {
 	if err != nil {
 		panic(err.Error())
 	}
-	fmt.Printf("%+v\n", inferencePipelineDataStreamResponse.Success)
+	fmt.Printf("%+v\n", response.Success)
 }

 ```
@@ -408,3 +408,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con
 We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

 We are keen for your feedback; please open an [issue](https://www.github.com/openlayer-ai/openlayer-go/issues) with questions, bugs, or suggestions.
+
+## Contributing
+
+See [the contributing documentation](./CONTRIBUTING.md).
2 changes: 2 additions & 0 deletions api.md
@@ -14,10 +14,12 @@ Methods:

 Response Types:

+- <a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitNewResponse">ProjectCommitNewResponse</a>
 - <a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitListResponse">ProjectCommitListResponse</a>

 Methods:

+- <code title="post /projects/{projectId}/versions">client.Projects.Commits.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitService.New">New</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, projectID <a href="https://pkg.go.dev/builtin#string">string</a>, body <a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitNewParams">ProjectCommitNewParams</a>) (<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitNewResponse">ProjectCommitNewResponse</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
 - <code title="get /projects/{projectId}/versions">client.Projects.Commits.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, projectID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitListParams">ProjectCommitListParams</a>) (<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go">openlayer</a>.<a href="https://pkg.go.dev/github.com/openlayer-ai/openlayer-go#ProjectCommitListResponse">ProjectCommitListResponse</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>

 ## InferencePipelines
112 changes: 108 additions & 4 deletions client_test.go
@@ -6,6 +6,7 @@ import (
"context"
"fmt"
"net/http"
"reflect"
"testing"
"time"

@@ -62,12 +63,12 @@ func TestUserAgentHeader(t *testing.T) {
 }

 func TestRetryAfter(t *testing.T) {
-	attempts := 0
+	retryCountHeaders := make([]string, 0)
 	client := openlayer.NewClient(
 		option.WithHTTPClient(&http.Client{
 			Transport: &closureTransport{
 				fn: func(req *http.Request) (*http.Response, error) {
-					attempts++
+					retryCountHeaders = append(retryCountHeaders, req.Header.Get("X-Stainless-Retry-Count"))
 					return &http.Response{
 						StatusCode: http.StatusTooManyRequests,
 						Header: http.Header{
@@ -101,8 +102,111 @@ func TestRetryAfter(t *testing.T) {
 	if err == nil || res != nil {
 		t.Error("Expected there to be a cancel error and for the response to be nil")
 	}
-	if want := 3; attempts != want {
-		t.Errorf("Expected %d attempts, got %d", want, attempts)
+
+	attempts := len(retryCountHeaders)
+	if attempts != 3 {
+		t.Errorf("Expected %d attempts, got %d", 3, attempts)
 	}
+
+	expectedRetryCountHeaders := []string{"0", "1", "2"}
+	if !reflect.DeepEqual(retryCountHeaders, expectedRetryCountHeaders) {
+		t.Errorf("Expected %v retry count headers, got %v", expectedRetryCountHeaders, retryCountHeaders)
+	}
 }

func TestDeleteRetryCountHeader(t *testing.T) {
retryCountHeaders := make([]string, 0)
client := openlayer.NewClient(
option.WithHTTPClient(&http.Client{
Transport: &closureTransport{
fn: func(req *http.Request) (*http.Response, error) {
retryCountHeaders = append(retryCountHeaders, req.Header.Get("X-Stainless-Retry-Count"))
return &http.Response{
StatusCode: http.StatusTooManyRequests,
Header: http.Header{
http.CanonicalHeaderKey("Retry-After"): []string{"0.1"},
},
}, nil
},
},
}),
option.WithHeaderDel("X-Stainless-Retry-Count"),
)
res, err := client.InferencePipelines.Data.Stream(
context.Background(),
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
openlayer.InferencePipelineDataStreamParams{
Config: openlayer.F[openlayer.InferencePipelineDataStreamParamsConfigUnion](openlayer.InferencePipelineDataStreamParamsConfigLlmData{
InputVariableNames: openlayer.F([]string{"user_query"}),
OutputColumnName: openlayer.F("output"),
NumOfTokenColumnName: openlayer.F("tokens"),
CostColumnName: openlayer.F("cost"),
TimestampColumnName: openlayer.F("timestamp"),
}),
Rows: openlayer.F([]map[string]interface{}{{
"user_query": "what is the meaning of life?",
"output": "42",
"tokens": map[string]interface{}{},
"cost": map[string]interface{}{},
"timestamp": map[string]interface{}{},
}}),
},
)
if err == nil || res != nil {
t.Error("Expected there to be a cancel error and for the response to be nil")
}

expectedRetryCountHeaders := []string{"", "", ""}
if !reflect.DeepEqual(retryCountHeaders, expectedRetryCountHeaders) {
t.Errorf("Expected %v retry count headers, got %v", expectedRetryCountHeaders, retryCountHeaders)
}
}

func TestOverwriteRetryCountHeader(t *testing.T) {
retryCountHeaders := make([]string, 0)
client := openlayer.NewClient(
option.WithHTTPClient(&http.Client{
Transport: &closureTransport{
fn: func(req *http.Request) (*http.Response, error) {
retryCountHeaders = append(retryCountHeaders, req.Header.Get("X-Stainless-Retry-Count"))
return &http.Response{
StatusCode: http.StatusTooManyRequests,
Header: http.Header{
http.CanonicalHeaderKey("Retry-After"): []string{"0.1"},
},
}, nil
},
},
}),
option.WithHeader("X-Stainless-Retry-Count", "42"),
)
res, err := client.InferencePipelines.Data.Stream(
context.Background(),
"182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
openlayer.InferencePipelineDataStreamParams{
Config: openlayer.F[openlayer.InferencePipelineDataStreamParamsConfigUnion](openlayer.InferencePipelineDataStreamParamsConfigLlmData{
InputVariableNames: openlayer.F([]string{"user_query"}),
OutputColumnName: openlayer.F("output"),
NumOfTokenColumnName: openlayer.F("tokens"),
CostColumnName: openlayer.F("cost"),
TimestampColumnName: openlayer.F("timestamp"),
}),
Rows: openlayer.F([]map[string]interface{}{{
"user_query": "what is the meaning of life?",
"output": "42",
"tokens": map[string]interface{}{},
"cost": map[string]interface{}{},
"timestamp": map[string]interface{}{},
}}),
},
)
if err == nil || res != nil {
t.Error("Expected there to be a cancel error and for the response to be nil")
}

expectedRetryCountHeaders := []string{"42", "42", "42"}
if !reflect.DeepEqual(retryCountHeaders, expectedRetryCountHeaders) {
t.Errorf("Expected %v retry count headers, got %v", expectedRetryCountHeaders, retryCountHeaders)
}
}

38 changes: 19 additions & 19 deletions inferencepipelinedata.go
@@ -95,8 +95,12 @@ func (r InferencePipelineDataStreamParams) MarshalJSON() (data []byte, err error
 // Configuration for the data stream. Depends on your **Openlayer project task
 // type**.
 type InferencePipelineDataStreamParamsConfig struct {
-	// Name of the column with the total number of tokens.
-	NumOfTokenColumnName param.Field[string] `json:"numOfTokenColumnName"`
+	CategoricalFeatureNames param.Field[interface{}] `json:"categoricalFeatureNames,required"`
+	ClassNames param.Field[interface{}] `json:"classNames,required"`
+	FeatureNames param.Field[interface{}] `json:"featureNames,required"`
+	InputVariableNames param.Field[interface{}] `json:"inputVariableNames,required"`
+	Metadata param.Field[interface{}] `json:"metadata,required"`
+	Prompt param.Field[interface{}] `json:"prompt,required"`
 	// Name of the column with the context retrieved. Applies to RAG use cases.
 	// Providing the context enables RAG-specific metrics.
 	ContextColumnName param.Field[string] `json:"contextColumnName"`
@@ -107,35 +111,31 @@ type InferencePipelineDataStreamParamsConfig struct {
 	// Name of the column with the inference ids. This is useful if you want to update
 	// rows at a later point in time. If not provided, a unique id is generated by
 	// Openlayer.
-	InferenceIDColumnName param.Field[string] `json:"inferenceIdColumnName"`
-	InputVariableNames param.Field[interface{}] `json:"inputVariableNames,required"`
-	// Name of the column with the latencies.
-	LatencyColumnName param.Field[string] `json:"latencyColumnName"`
-	Metadata param.Field[interface{}] `json:"metadata,required"`
-	// Name of the column with the model outputs.
-	OutputColumnName param.Field[string] `json:"outputColumnName"`
-	Prompt param.Field[interface{}] `json:"prompt,required"`
-	// Name of the column with the questions. Applies to RAG use cases. Providing the
-	// question enables RAG-specific metrics.
-	QuestionColumnName param.Field[string] `json:"questionColumnName"`
-	// Name of the column with the timestamps. Timestamps must be in UNIX sec format.
-	// If not provided, the upload timestamp is used.
-	TimestampColumnName param.Field[string] `json:"timestampColumnName"`
-	CategoricalFeatureNames param.Field[interface{}] `json:"categoricalFeatureNames,required"`
-	ClassNames param.Field[interface{}] `json:"classNames,required"`
-	FeatureNames param.Field[interface{}] `json:"featureNames,required"`
+	InferenceIDColumnName param.Field[string] `json:"inferenceIdColumnName"`
 	// Name of the column with the labels. The data in this column must be
 	// **zero-indexed integers**, matching the list provided in `classNames`.
 	LabelColumnName param.Field[string] `json:"labelColumnName"`
+	// Name of the column with the latencies.
+	LatencyColumnName param.Field[string] `json:"latencyColumnName"`
+	// Name of the column with the total number of tokens.
+	NumOfTokenColumnName param.Field[string] `json:"numOfTokenColumnName"`
+	// Name of the column with the model outputs.
+	OutputColumnName param.Field[string] `json:"outputColumnName"`
 	// Name of the column with the model's predictions as **zero-indexed integers**.
 	PredictionsColumnName param.Field[string] `json:"predictionsColumnName"`
 	// Name of the column with the model's predictions as **lists of class
 	// probabilities**.
 	PredictionScoresColumnName param.Field[string] `json:"predictionScoresColumnName"`
+	// Name of the column with the questions. Applies to RAG use cases. Providing the
+	// question enables RAG-specific metrics.
+	QuestionColumnName param.Field[string] `json:"questionColumnName"`
 	// Name of the column with the targets (ground truth values).
 	TargetColumnName param.Field[string] `json:"targetColumnName"`
 	// Name of the column with the text data.
 	TextColumnName param.Field[string] `json:"textColumnName"`
+	// Name of the column with the timestamps. Timestamps must be in UNIX sec format.
+	// If not provided, the upload timestamp is used.
+	TimestampColumnName param.Field[string] `json:"timestampColumnName"`
 }

 func (r InferencePipelineDataStreamParamsConfig) MarshalJSON() (data []byte, err error) {