
Commit

updated instructions for qvikchat starter template
pranav-kural committed Jul 9, 2024
1 parent e890587 commit 5d1e80a
Showing 2 changed files with 17 additions and 4 deletions.
4 changes: 4 additions & 0 deletions README.md
````diff
@@ -33,3 +33,7 @@ This project is licensed under the MIT License.
 ## Credits

 This documentation website was created using the [Nextra](https://github.com/shuding/nextra) site generation framework.
+
+## Oconva
+
+QvikChat is a project by [Oconva](https://github.com/oconva). Oconva is an initiative to make conversational AI more open and accessible to all. Oconva's vision is a future where any developer, regardless of their available resources, can empower their apps with conversational AI. Whether it's adding an AI assistant to an app or integrating a translation service, Oconva aims to provide developers with open-source tools and frameworks to support them on this journey.
````
17 changes: 13 additions & 4 deletions pages/getting-started.mdx
````diff
@@ -88,12 +88,21 @@ Simply clone the [QvikChat starter template](https://github.com/oconva/qvikchat
 git clone https://github.com/oconva/qvikchat-starter-template.git
 ```

-Once you have cloned the starter template, you can run the following commands to get started:
+Once you have cloned the starter template, add the API keys required to access LLM models. By default, QvikChat uses Google's Gemini API for text generation and embedding models. If you don't yet have a Google Gen AI API key, you can get one from [Gemini API - Get an API Key](https://ai.google.dev/gemini-api/docs/api-key).

-```bash copy
-cd qvikchat-starter-template
+`.env` should have:
+
+```bash
+GOOGLE_GENAI_API_KEY=
+```
+
+You can also use the OpenAI API instead of the Gemini API. You'll have to provide your OpenAI API key as the `OPENAI_API_KEY` environment variable and configure your chat endpoints to use a custom chat agent configured with the OpenAI model you want to use. To learn more about configuring chat agents with a custom LLM model, check [here](https://qvikchat.pkural.ca/chat-agent#llm-model).
+
+You can run the following commands to install the dependencies and start the server:
+
+```bash
 npm install # or pnpm install
-npm run start # or pnpm start
+npm run dev # or pnpm dev
 ```

 To build the project, run:
````
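For reference, a minimal way to confirm that the key in `.env` is visible to your code is a quick check like the sketch below. It assumes the common `dotenv` package is available; the starter template itself may load environment variables differently.

```typescript
// Minimal sanity check that the API key from `.env` is loaded.
// Assumes the `dotenv` package (npm install dotenv); the starter template
// may handle environment variables on its own.
import "dotenv/config";

const key = process.env.GOOGLE_GENAI_API_KEY;
console.log(key ? "GOOGLE_GENAI_API_KEY is set" : "GOOGLE_GENAI_API_KEY is missing");
```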

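The updated instructions also note that switching to the OpenAI API means setting `OPENAI_API_KEY` and configuring chat endpoints with a custom chat agent that uses an OpenAI model. The sketch below only illustrates that idea; the `defineChatEndpoint` import and the option names are assumptions, so refer to the linked chat agent documentation (https://qvikchat.pkural.ca/chat-agent#llm-model) for the actual API.

```typescript
// Illustrative sketch only: the import path and option names below are
// assumptions, not the confirmed QvikChat API. See
// https://qvikchat.pkural.ca/chat-agent#llm-model for the real configuration.
import { defineChatEndpoint } from "@oconva/qvikchat"; // assumed entry point

// Expects OPENAI_API_KEY to be set in `.env` (instead of, or alongside,
// GOOGLE_GENAI_API_KEY).
defineChatEndpoint({
  endpoint: "chat",
  modelConfig: {
    name: "gpt-4o", // any OpenAI chat model you have access to
    temperature: 0.5,
  },
});
```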