Merge pull request #288 from Altinity/master
# v1.2.0

INCOMPATIBLE CHANGES
- REST API `/backup/status` now returns only the latest executed command, with its status and error message

IMPROVEMENTS
- Added REST API `/backup/list/local` and `/backup/list/remote` to allow listing local and remote backups separately (see the example after this list)
- Decreased background backup creation time via REST API `/backup/create` by avoiding listing remote backups when updating metrics values
- Decreased backup creation time by avoiding a scan of the whole `system.tables` when the `table` query string parameter or the `--tables` CLI parameter is set
- Added `last` and `filter` query string parameters to REST API `/backup/actions` to avoid passing long JSON documents to the client
- Improved `FTP` remote storage parallel upload / download
- Added `FTP_CONCURRENCY` setting to control parallel FTP transfers; defaults to MAX_CPU / 2
- Added `FTP_DEBUG` setting to enable debug logging of FTP commands
- Added `FTP` to CI/CD on any commit
- Added race condition check to CI/CD
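
A quick sketch of how the new endpoints and settings might be exercised (port and URL shapes follow the ReadMe examples; the concurrency value is illustrative, not a default):

```bash
# list local and remote backups separately via the new endpoints
curl -s localhost:7171/backup/list/local | jq .
curl -s localhost:7171/backup/list/remote | jq .

# tune FTP uploads/downloads with the new settings (values here are assumptions)
FTP_CONCURRENCY=4 FTP_DEBUG=true clickhouse-backup upload <BACKUP_NAME>
```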

BUG FIXES
- environment variable `LOG_LEVEL` now applies to `clickhouse-backup server` properly
- fix #280, incorrect Prometheus metrics measurement for `/backup/create`, `/backup/upload`, `/backup/download`
- fix #273, bring `S3_PART_SIZE` back, but calculate it smartly
- fix #252, now you can pass `last` and `filter` query string parameters
- fix #246, incorrect error messages when using `REMOTE_STORAGE=none`
- fix #283, properly handle error message from `FTP` server 
- fix #268, properly restore legacy backup for schema without database name
- fix #287
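
With the `LOG_LEVEL` fix above, server mode picks up the log level from the environment the same way one-shot commands do; a minimal sketch:

```bash
# assumed invocation: run the REST API server with debug logging
LOG_LEVEL=debug clickhouse-backup server
```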
Slach authored Oct 24, 2021
2 parents adb5b2d + 5f4328d commit 8567fc6
Showing 22 changed files with 650 additions and 173 deletions.
145 changes: 106 additions & 39 deletions .github/workflows/build.yaml
@@ -4,28 +4,21 @@ on:
pull_request:
branches:
- master
- dev

push:
branches:
- master
- dev

jobs:
build:
name: Build
runs-on: ubuntu-latest
strategy:
matrix:
clickhouse:
- '1.1.54390'
- '19.11.12.69'
- '19.15.3.6'
- '19.16.19.85'
- '20.3'
- '20.8'
- '21.3'
- '21.8'
golang-version:
- "1.17"
outputs:
GCS_TESTS: ${{ steps.secrets.outputs.GCS_TESTS }}
steps:
- name: Checkout project
uses: actions/checkout@v2
@@ -34,8 +34,7 @@ jobs:
id: setup-go
uses: actions/setup-go@v2
with:
go-version: '^1.17'

go-version: '^${{ matrix.golang-version }}'

- name: Cache golang
id: cache-golang
@@ -53,27 +45,18 @@
if: |
steps.cache-golang.outputs.cache-hit != 'true'
- name: Setup docker-compose
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends -y make python3-pip
sudo python3 -m pip install -U pip
sudo pip3 install --prefer-binary -U setuptools
sudo pip3 install --prefer-binary -U docker-compose
- name: Extract DOCKER_TAG version
id: docker_tag
- name: Build clickhouse-backup binary
id: make
env:
GOROOT: ${{ env.GOROOT_1_17_X64 }}
run: |
DOCKER_TAG=${GITHUB_REF##*/}
export DOCKER_TAG=${DOCKER_TAG##*\\}
echo "::set-output name=docker_tag::${DOCKER_TAG:-dev}"
make build-race
make config
make test
- run: make build
- run: make config
- run: make test

# be carefull with encrypt with old OpenSSL - https://habr.com/ru/post/535140/
# be careful with encrypt with old OpenSSL - https://habr.com/ru/post/535140/
- name: Decrypting credentials for Google Cloud Storage
id: secrets
env:
@@ -85,39 +68,123 @@ jobs:
fi
echo "::set-output name=GCS_TESTS::$(if [ -z "${{ secrets.VAULT_PASSWORD }}" ]; then echo "false"; else echo "true"; fi)"
- uses: actions/upload-artifact@v2
with:
name: build-gcp-credentials
path: |
test/integration/credentials.json
if-no-files-found: error
if: |
steps.secrets.outputs.GCS_TESTS == 'true'
- uses: actions/upload-artifact@v2
with:
name: build-artifacts
path: |
./clickhouse-backup/clickhouse-backup
./clickhouse-backup/clickhouse-backup-race
ChangeLog.md
if-no-files-found: error
test:
needs: build
name: Test
runs-on: ubuntu-latest
strategy:
matrix:
golang-version:
- "1.17"
clickhouse:
- '1.1.54390'
- '19.17'
- '20.3'
- '20.8'
- '21.3'
- '21.8'
steps:
- name: Checkout project
uses: actions/checkout@v2

- name: Setup golang
id: setup-go
uses: actions/setup-go@v2
with:
go-version: '^${{ matrix.golang-version }}'

- name: Cache golang
id: cache-golang
uses: actions/cache@v2
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-${{ matrix.golang-version }}-golang-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.golang-version }}-golang-
- uses: actions/download-artifact@v2
with:
name: build-artifacts

- uses: actions/download-artifact@v2
with:
name: build-gcp-credentials
if: |
needs.build.outputs.GCS_TESTS == 'true'
- name: Running integration tests
env:
CLICKHOUSE_VERSION: ${{ matrix.clickhouse }}
# LOG_LEVEL: debug
# RUN_TESTS: "TestIntegrationFTP"
# LOG_LEVEL: "debug"
# FTP_DEBUG: "true"
CGO_ENABLED: 0
GCS_TESTS: ${{ steps.secrets.outputs.GCS_TESTS }}
GCS_TESTS: ${{ needs.build.outputs.GCS_TESTS }}
run: |
set -x
echo "CLICKHOUSE_VERSION=${CLICKHOUSE_VERSION}"
echo "GCS_TESTS=${GCS_TESTS}"
chmod +x $(pwd)/clickhouse-backup/clickhouse-backup*
if [[ "${CLICKHOUSE_VERSION}" == 2* ]]; then
export COMPOSE_FILE=docker-compose_advanced.yml
else
export COMPOSE_FILE=docker-compose.yml
fi
export CLICKHOUSE_BACKUP_BIN="$(pwd)/clickhouse-backup/clickhouse-backup"
docker-compose -f test/integration/${COMPOSE_FILE} down
docker volume prune -f
docker-compose -f test/integration/${COMPOSE_FILE} up -d --force-recreate
export CLICKHOUSE_BACKUP_BIN="$(pwd)/clickhouse-backup/clickhouse-backup-race"
docker-compose -f test/integration/${COMPOSE_FILE} up -d clickhouse
docker-compose -f test/integration/${COMPOSE_FILE} ps -a
go test -failfast -tags=integration -v test/integration/integration_test.go
go test -timeout 30m -failfast -tags=integration -run "${RUN_TESTS:-.+}" -v test/integration/integration_test.go
docker:
needs: test
name: Docker
runs-on: ubuntu-latest
steps:
- name: Checkout project
uses: actions/checkout@v2

- uses: actions/download-artifact@v2
with:
name: build-artifacts

- name: Extract DOCKER_TAG version
id: docker_tag
run: |
DOCKER_TAG=${GITHUB_REF##*/}
export DOCKER_TAG=${DOCKER_TAG##*\\}
echo "::set-output name=docker_tag::${DOCKER_TAG:-dev}"
- name: Building docker image
env:
CLICKHOUSE_VERSION: ${{ matrix.clickhouse }}
DOCKER_REPO: ${{ secrets.DOCKER_REPO }}
DOCKER_IMAGE: ${{ secrets.DOCKER_IMAGE }}
DOCKER_TOKEN: ${{ secrets.DOCKER_TOKEN }}
DOCKER_USER: ${{ secrets.DOCKER_USER }}
DOCKER_REGISTRY: ${{ secrets.DOCKER_REGISTRY }}
DOCKER_TAG: ${{ steps.docker_tag.outputs.docker_tag }}
run: |
if [[ "${CLICKHOUSE_VERSION}" == "21.3" && "${DOCKER_TOKEN}" != "" ]]; then
if [[ "${DOCKER_TOKEN}" != "" ]]; then
export DOCKER_REGISTRY=${DOCKER_REGISTRY:-docker.io}
echo ${DOCKER_TOKEN} | docker login -u ${DOCKER_USER} --password-stdin ${DOCKER_REGISTRY}
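The reworked test job drives the integration suite against the race-instrumented binary inside docker-compose; a rough local equivalent, assuming the repository layout shown in this diff and a 2x-series ClickHouse (which selects the advanced compose file), would be:

```bash
make build-race
export CLICKHOUSE_BACKUP_BIN="$(pwd)/clickhouse-backup/clickhouse-backup-race"
export CLICKHOUSE_VERSION=21.8
export COMPOSE_FILE=docker-compose_advanced.yml
export CGO_ENABLED=0   # mirrors the CI test environment
docker-compose -f test/integration/${COMPOSE_FILE} up -d clickhouse
go test -timeout 30m -failfast -tags=integration -run "${RUN_TESTS:-.+}" -v test/integration/integration_test.go
```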
8 changes: 6 additions & 2 deletions .github/workflows/release.yaml
@@ -9,6 +9,11 @@ jobs:
release:
name: Release
runs-on: ubuntu-latest
strategy:
matrix:
golang-version:
- "1.17"

steps:
- name: Checkout project
uses: actions/checkout@v2
@@ -17,11 +22,10 @@
id: setup-go
uses: actions/setup-go@v2
with:
go-version: '^1.17'
go-version: '^${{ matrix.golang-version }}'

- name: Setup fpm and make
run: |
sudo apt-get update
sudo apt-get install -y --no-install-recommends ruby ruby-dev gcc g++ rpm
sudo apt-get install --no-install-recommends -y make
sudo gem install --no-document fpm
32 changes: 31 additions & 1 deletion ChangeLog.md
@@ -1,3 +1,34 @@
# v1.2.0

INCOMPATIBLE CHANGES
- REST API `/backup/status` now returns only the latest executed command, with its status and error message

IMPROVEMENTS
- Added REST API `/backup/list/local` and `/backup/list/remote` to allow listing local and remote backups separately
- Decreased background backup creation time via REST API `/backup/create` by avoiding listing remote backups when updating metrics values
- Decreased backup creation time by avoiding a scan of the whole `system.tables` when the `table` query string parameter or the `--tables` CLI parameter is set
- Added `last` and `filter` query string parameters to REST API `/backup/actions` to avoid passing long JSON documents to the client
- Improved `FTP` remote storage parallel upload / download
- Added `FTP_CONCURRENCY` setting to control parallel FTP transfers; defaults to MAX_CPU / 2
- Added `FTP_DEBUG` setting to enable debug logging of FTP commands
- Added `FTP` to CI/CD on any commit
- Added race condition check to CI/CD

BUG FIXES
- environment variable `LOG_LEVEL` now applies to `clickhouse-backup server` properly
- fix [#280](https://github.com/AlexAkulov/clickhouse-backup/issues/280), incorrect Prometheus metrics measurement for `/backup/create`, `/backup/upload`, `/backup/download`
- fix [#273](https://github.com/AlexAkulov/clickhouse-backup/issues/273), bring `S3_PART_SIZE` back, but calculate it smartly
- fix [#252](https://github.com/AlexAkulov/clickhouse-backup/issues/252), now you can pass `last` and `filter` query string parameters
- fix [#246](https://github.com/AlexAkulov/clickhouse-backup/issues/246), incorrect error messages when using `REMOTE_STORAGE=none`
- fix [#283](https://github.com/AlexAkulov/clickhouse-backup/issues/283), properly handle error message from `FTP` server
- fix [#268](https://github.com/AlexAkulov/clickhouse-backup/issues/268), properly restore legacy backup for schema without database name

# v1.1.1

BUG FIXES
- fix broken `system.backup_list` integration table after adding `required field` in https://github.com/AlexAkulov/clickhouse-backup/pull/263
- fix [#274](https://github.com/AlexAkulov/clickhouse-backup/issues/274) invalid `SFTP_PASSWORD` environment usage

# v1.1.0

IMPROVEMENTS
@@ -12,7 +43,6 @@ IMPROVEMENTS
- Added options for RBAC and CONFIGs backup, look to `clickhouse-backup help create` and `clickhouse-backup help restore` for details
- Add `S3_CONCURRENCY` option to speed up backup upload to `S3`
- Add `SFTP_CONCURRENCY` option to speed up backup upload to `SFTP`
- Add `--diff-from-remote` to `upload` command to avoid storing the local backup
- Add `AZBLOB_USE_MANAGED_IDENTITY` support for ManagedIdentity for azure remote storage, thanks https://github.com/roman-vynar
- Add clickhouse-operator kubernetes manifest which runs `clickhouse-backup` in `server` mode on each clickhouse pod in kubernetes cluster
- Add detailed description and restrictions for incremental backups.
6 changes: 6 additions & 0 deletions Makefile
@@ -39,6 +39,7 @@ test:

build: $(NAME)/$(NAME)


config: $(NAME)/config.yml

$(NAME)/config.yml: $(NAME)/$(NAME)
@@ -80,3 +81,8 @@ $(PKG_FILES): build/pkg
-v $(VERSION) \
-p build \
build/pkg/=/

build-race: $(NAME)/$(NAME)-race

$(NAME)/$(NAME)-race: $(GO_FILES)
CGO_ENABLED=1 $(GO_BUILD) -race -o $@ ./cmd/$(NAME)
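
The new `build-race` target builds a second, race-instrumented binary alongside the regular one (note `CGO_ENABLED=1`, which the `-race` flag requires); a likely local usage:

```bash
make build-race
./clickhouse-backup/clickhouse-backup-race help
```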
16 changes: 10 additions & 6 deletions ReadMe.md
@@ -220,8 +220,8 @@ Create new backup: `curl -s localhost:7171/backup/create -X POST | jq .`
* Optional query argument `table` works the same as the `--table value` CLI argument.
* Optional query argument `name` works the same as specifying a backup name with the CLI.
* Optional query argument `schema` works the same as the `--schema` CLI argument (backup schema only).
* Optional query argument `rbac` works the same as the `--rbac` CLI argument (backup RBAC only).
* Optional query argument `configs` works the same as the `--configs` CLI argument (backup configs only).
* Optional query argument `rbac` works the same as the `--rbac` CLI argument (backup RBAC).
* Optional query argument `configs` works the same as the `--configs` CLI argument (backup configs).
* Full example: `curl -s 'localhost:7171/backup/create?table=default.billing&name=billing_test' -X POST`

Note: this operation is async, so the API will return once the operation has been started.
@@ -233,9 +233,11 @@ Upload backup to remote storage: `curl -s localhost:7171/backup/upload/<BACKUP_N

Note: this operation is async, so the API will return once the operation has been started.

> **GET /backup/list**
> **GET /backup/list/{where}**

Print list of backups: `curl -s localhost:7171/backup/list | jq .`
Print list of local backups only: `curl -s localhost:7171/backup/list/local | jq .`
Print list of remote backups only: `curl -s localhost:7171/backup/list/remote | jq .`

Note: The `Size` field is not populated for local backups.

@@ -251,8 +253,8 @@ Create schema and restore data from backup: `curl -s localhost:7171/backup/resto
* Optional query argument `table` works the same as the `--table value` CLI argument.
* Optional query argument `schema` works the same as the `--schema` CLI argument (restore schema only).
* Optional query argument `data` works the same as the `--data` CLI argument (restore data only).
* Optional query argument `rbac` works the same as the `--rbac` CLI argument (restore RBAC only).
* Optional query argument `configs` works the same as the `--configs` CLI argument (restore configs only).
* Optional query argument `rbac` works the same as the `--rbac` CLI argument (restore RBAC).
* Optional query argument `configs` works the same as the `--configs` CLI argument (restore configs).

> **POST /backup/delete**

@@ -262,7 +264,7 @@ Delete specific local backup: `curl -s localhost:7171/backup/delete/local/<BACKU

> **GET /backup/status**

Display list of current async operations: `curl -s localhost:7171/backup/status | jq .`
Display the currently running async operation: `curl -s localhost:7171/backup/status | jq .`

> **POST /backup/actions**

@@ -271,6 +273,8 @@ Execute multiple backup actions: `curl -X POST -d '{"command":"create test_backu
> **GET /backup/actions**

Display list of current async operations: `curl -s localhost:7171/backup/actions | jq .`
* Optional query argument `filter` can filter actions on the server side.
* Optional query argument `last` can limit the output to only the last `XX` actions.
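For example, fetching only the last five `create` actions (the filter value and count are illustrative): `curl -s 'localhost:7171/backup/actions?filter=create&last=5' | jq .`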

## Storages

3 changes: 1 addition & 2 deletions cmd/clickhouse-backup/main.go
Expand Up @@ -58,7 +58,7 @@ func main() {
Usage: "Print list of tables",
UsageText: "clickhouse-backup tables",
Action: func(c *cli.Context) error {
return backup.PrintTables(*getConfig(c), c.Bool("a"))
return backup.PrintTables(getConfig(c), c.Bool("a"))
},
Flags: append(cliapp.Flags,
cli.BoolFlag{
@@ -329,7 +329,6 @@ func getConfig(ctx *cli.Context) *config.Config {
if err != nil {
log.Fatal(err.Error())
}
log.SetLevelFromString(cfg.General.LogLevel)
return cfg
}
