
Merge pull request #109 from firstbatchxyz/start-go
startgo & os/arch releases
erhant authored Sep 3, 2024
2 parents 8962e83 + 9b50867 commit 5754955
Showing 5 changed files with 805 additions and 18 deletions.
113 changes: 113 additions & 0 deletions .github/workflows/build_startgo.yml
@@ -0,0 +1,113 @@
name: start.go Build and Launch Releases

on:
push:
branches:
- start-go
- master
paths:
- 'start/start.go'
- 'start/compose.yml'
- 'start/.env.example'
- '.github/workflows/build_startgo.yml'

jobs:
build:
runs-on: ${{ matrix.runner }}

strategy:
matrix:
include:
- { runner: macos-latest, goos: darwin, osname: macOS, arch: amd64, tags: netcgo }
- { runner: macos-latest, goos: darwin, osname: macOS, arch: arm64, tags: netcgo }
- { runner: ubuntu-latest, goos: linux, osname: linux, arch: amd64, env: CGO_ENABLED=0 }
- { runner: ubuntu-latest, goos: linux, osname: linux, arch: arm64, env: CGO_ENABLED=0 }
- { runner: ubuntu-latest, goos: windows, osname: windows, arch: amd64, env: CGO_ENABLED=0, extension: ".exe" }
- { runner: ubuntu-latest, goos: windows, osname: windows, arch: arm64, env: CGO_ENABLED=0, extension: ".exe" }

steps:
- name: Checkout code
uses: actions/checkout@v3

- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21.5'

- name: Build Go app
working-directory: ./start
env:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.arch }}
run: |
${{ matrix.env }} go build -tags="${{ matrix.tags }}" -o start${{ matrix.extension }} .
- name: Prepare Launch Release Files
env:
ZIP_NAME: dkn-${{ matrix.osname }}-${{ matrix.arch }}
run: |
mkdir $ZIP_NAME
cp ./start/start${{ matrix.extension }} $ZIP_NAME/start${{ matrix.extension }}
cp ./compose.yml $ZIP_NAME/
cp ./.env.example $ZIP_NAME/
zip -r $ZIP_NAME.zip $ZIP_NAME
- name: Upload Launch Artifacts
uses: actions/upload-artifact@v4
with:
name: dkn-${{ matrix.osname }}-${{ matrix.arch }}
path: dkn-${{ matrix.osname }}-${{ matrix.arch }}.zip

release:
needs: build
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/start-go'

steps:
- name: Checkout code
uses: actions/checkout@v3

- name: Download Launch Artifacts
uses: actions/download-artifact@v4
with:
merge-multiple: true
path: ./artifacts

- name: Get latest code release tag
id: get_latest_tag
run: |
# Fetch all tags
git fetch --tags
# find the latest tag that does not have a -launch suffix
latest_tag=""
for tag in $(git tag --list --sort=-v:refname); do
if [[ "$tag" != *-launch ]]; then
latest_tag=$tag
break
fi
done
if [ -z "$latest_tag" ]; then
echo "Error: No valid code release tag found."
exit 1
fi
echo "Latest code release tag: $latest_tag"
echo "LATEST_TAG=$latest_tag" >> $GITHUB_ENV
- name: Create launch release tag
id: create_launch_tag
run: |
new_tag="${LATEST_TAG}-launch"
echo "New launch release tag: $new_tag"
echo "NEW_TAG=$new_tag" >> $GITHUB_ENV
- name: Create release with artifacts
uses: ncipollo/release-action@v1
with:
name: ${{ env.NEW_TAG }}
tag: ${{ env.NEW_TAG }}
artifacts: "artifacts/*"
artifactContentType: application/zip
draft: true
131 changes: 113 additions & 18 deletions docs/NODE_GUIDE.md
@@ -11,6 +11,12 @@ You need the following applications to run compute node:
- **Git**: We will use `git` to clone the repository from GitHub, and pull latest changes for updates later.
- **Docker**: Our services will make use of Docker so that the node can run on any machine.

> [!CAUTION]
>
> On **Windows** machines, Docker Desktop is required to be running with **WSL2**.
>
> You can follow the Docker Desktop Windows installation guide [here](https://docs.docker.com/desktop/install/windows-install/).

> [!TIP]
>
> You can check if you have these via:
@@ -34,17 +40,89 @@ In general, if you are using Ollama you will need the memory to run large models
To be able to run a node, we need to make a few simple preparations. Follow the steps below one by one.
### 1. Download and Extract the Node Files
Download the appropriate ZIP file for your system using the commands below or from your [browser](https://github.com/firstbatchxyz/dkn-compute-node/releases/tag/untagged-ad4b77bd5544f4412179). Make sure to replace the URL with the correct version for your operating system and architecture.
**macOS:**
1. Check your architecture:
```sh
uname -m
```
- If the output is `arm64`, download the `arm64` version.
- If it's `x86_64`, download the `amd64` version.
2. Download the ZIP file:
```sh
curl -L -o dkn-compute-node.zip https://github.com/firstbatchxyz/dkn-compute-node/releases/download/untagged-54b9278cc672499d9c84/dkn-compute-node-macos-arm64.zip
```
3. Unzip the downloaded file:
```sh
unzip dkn-compute-node-macos-arm64.zip
cd dkn-compute-node-darwin-arm64
```
> [!TIP]
>
> On some devices, you may need to bypass macOS's security warning. If you see "macOS cannot verify that this app is free from malware," use the following command:
>
> ```sh
> xattr -d com.apple.quarantine start
> ```
**Linux:**
1. Check your architecture:
```sh
uname -m
```
- If the output is `x86_64`, download the `amd64` version.
- If it's `aarch64`, download the `arm64` version.
2. Download the ZIP file:
```sh
curl -L -o dkn-compute-node.zip https://github.com/firstbatchxyz/dkn-compute-node/releases/download/untagged-54b9278cc672499d9c84/dkn-compute-node-linux-amd64.zip
```
3. Unzip the downloaded file:
```sh
unzip dkn-compute-node-linux-amd64.zip
cd dkn-compute-node-linux-amd64
```
**Windows:**
1. Check your architecture:
- Open System Information:
- Press `Win + R` to open the Run dialog.
- Type `msinfo32` and press Enter.
- Look for the line labeled "Processor" or "CPU":
- If it includes "x64" or refers to Intel or AMD, it is likely x86-64 (amd64).
- If it mentions ARM, then it's an ARM processor.
2. Download the ZIP file using a web browser or in PowerShell:
```powershell
curl.exe -L -o dkn-compute-node.zip https://github.com/firstbatchxyz/dkn-compute-node/releases/download/untagged-54b9278cc672499d9c84/dkn-compute-node-windows-amd64.zip
```
3. Unzip the downloaded file using File Explorer or in PowerShell:
```powershell
Expand-Archive -Path "C:\path\to\your\dkn-compute-node-windows-amd64.zip" -DestinationPath "C:\path\to\destination\dkn-compute-node-windows-amd64"
cd dkn-compute-node-windows-amd64
```
### 2. Prepare Environment Variables
> [!TIP]
>
> Speed-running the node execution:
>
> Optionally, you can handle the environment variables on the fly by running the `start` CLI app directly, since it will ask you to enter the required environment variables.
>
> If you prefer this, you can move on to the [Usage](#usage) section.
Dria Compute Node makes use of several environment variables. Create a `.env` file, and copy the environment variables as given in [.env.example](./.env.example). We will fill out the missing parts in a moment.
@@ -165,29 +243,46 @@ sudo systemctl start docker
### 3. Run Node
It's time to run our compute node. We have a starter CLI app that makes this much easier. You can either run it by double-clicking the `start` app (`start.exe` on Windows) from your file explorer, or use it from the terminal (cmd/PowerShell on Windows).

See the available commands with:
```sh
# macos or linux
./start --help
# windows
.\start.exe --help
```
Then simply run the CLI app; it will ask you to enter the required inputs:
```sh
# macos or linux
./start
# windows
.\start.exe
```
Or you can directly pass the models to run using `-m` flags:
```sh
# macos or linux
./start -m=llama3.1:latest -m=gpt-3.5-turbo
# windows
.\start.exe -m=llama3.1:latest -m=gpt-3.5-turbo
```
The start app will run the containers in the background. You can check their logs either via the terminal or from [Docker Desktop](https://www.docker.com/products/docker-desktop/).
#### Running in Debug Mode
To print DEBUG-level logs for the compute node, you can add the `--dev` argument to the start app. For example:
```sh
./start -m=gpt-4o-mini --dev
```
Running in debug mode will also allow you to see behind the scenes of Ollama Workflows, i.e. you can see the reasoning of the LLM as it executes the task.
@@ -217,7 +312,7 @@ docker compose logs --since=30m compute
### 5. Stopping the Node
When you start your node with the `start` CLI app, it will wait in the same terminal for you to press CTRL+C before stopping. Once you do that, the containers will be stopped and removed. You can also kill the containers manually; doing CTRL+C afterwards will do nothing in that case.
> [!NOTE]
>
@@ -229,17 +324,17 @@ When you start your node with `./start.sh`, it will wait for you in the same ter
If you have Ollama installed already (e.g. via `brew install ollama`), then you must indicate that you will be using that Ollama instead of a Docker container. To do this, we provide the argument `--local-ollama=true`, which is `true` by default. With this, the compute node will use the Ollama server on your machine instead of a Docker container.
If the Ollama server is not running, the start app will initiate it with `ollama serve` and terminate it when the node is stopped.
- If `--local-ollama=false` or the local Ollama server is not reachable, the compute node will use a Docker Compose service for it.
> [!TIP]
>
> There are three Docker Compose Ollama options: `ollama-cpu`, `ollama-cuda`, and `ollama-rocm`. The start app will decide which option to use based on the host machine's GPU specifications.
```sh
# Run with local ollama
./start -m=phi3 --local-ollama=true
```
### Additional Static Nodes
5 changes: 5 additions & 0 deletions start/go.mod
@@ -0,0 +1,5 @@
module github.com/firstbatchxyz/dkn-compute-node-start

go 1.21.5

require github.com/joho/godotenv v1.5.1
2 changes: 2 additions & 0 deletions start/go.sum
@@ -0,0 +1,2 @@
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
