From 8b36b4fd0e8e34eae2cb9e97adfcf6c4151c0328 Mon Sep 17 00:00:00 2001 From: bartskysql <162469788+bartskysql@users.noreply.github.com> Date: Fri, 28 Jun 2024 13:42:22 +0200 Subject: [PATCH 1/5] Update Migrating your existing Production DB.md --- .../Migrating your existing Production DB.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) diff --git a/docs/Data loading, Migration/Migrating your existing Production DB.md b/docs/Data loading, Migration/Migrating your existing Production DB.md index c43e79a0..19c2331a 100644 --- a/docs/Data loading, Migration/Migrating your existing Production DB.md +++ b/docs/Data loading, Migration/Migrating your existing Production DB.md @@ -64,3 +64,105 @@ For assistance with a migration: - Existing customers can submit a [support case](https://mariadb.com/docs/skysql-previous-release/service-management/support/) to request assistance with a migration - New customers can [contact us](https://mariadb.com/docs/skysql-previous-release/contact/) to begin the migration planning process + +## Best Practices + + +### Live Replication for Minimal Downtime + +To minimize downtime during migration, set up live replication from your source database to the SkySQL database. Follow these steps: + +1. **Obtain Binlog File and Position**: On the source database (MySQL or MariaDB), obtain the Binlog file name and its current position to track all database changes from a specific point in time. + + ```sql + SHOW MASTER STATUS; + ``` + +2. **Dump the Source Database**: Take a dump of your source database using `mysqldump` or `mariadb-dump`. + + ```bash + mysqldump -u [username] -p --all-databases --master-data > dump.sql + ``` + +3. **Import the Dump into SkySQL**: Import the logical dump (SQL file) into your SkySQL database. + + ```bash + mariadb -u [username] -p [database_name] < dump.sql + ``` + +4. **Start Replication**: Turn on replication using the SkySQL `start_replication` procedure. + + ```sql + CALL mysql.rds_start_replication; + ``` + +### Performance Optimization During Migration + +- **Adjust Buffer Sizes**: Temporarily increase buffer sizes to optimize the import performance. + + ```sql + SET GLOBAL innodb_buffer_pool_size = 2G; + SET GLOBAL innodb_log_file_size = 512M; + ``` + +- **Disable Foreign Key Checks**: Temporarily disable foreign key checks during import to speed up the process. + + ```sql + SET foreign_key_checks = 0; + ``` + +- **Disable Binary Logging**: If binary logging is not required during the import process, disable it to improve performance. + + ```sql + SET sql_log_bin = 0; + ``` + +### Data Integrity and Validation + +- **Consistency Checks**: Perform consistency checks on the source database before migration. + + ```sql + CHECK TABLE [table_name] FOR UPGRADE; + ``` + +- **Post-Import Validation**: Validate the data integrity and consistency after the import. + + ```sql + CHECKSUM TABLE [table_name]; + ``` + +### Advanced Migration Techniques + +- **Parallel Dump and Import**: Use tools that support parallel processing for dumping and importing data. + + ```bash + mysqlpump -u [username] -p --default-parallelism=4 --add-drop-database --databases [database_name] > dump.sql + ``` + +- **Incremental Backups**: For large datasets, use incremental backups to minimize the amount of data to be transferred. 
+ + ```bash + mariadb-backup --backup --target-dir=/path/to/backup --incremental-basedir=/path/to/previous/backup + ``` + +### Monitoring and Logging + +- **Enable Detailed Logging**: Enable detailed logging during the migration process to monitor and troubleshoot effectively. + + ```sql + SET GLOBAL general_log = 'ON'; + ``` + +- **Resource Monitoring**: Use monitoring tools to track resource usage (CPU, memory, I/O) during the migration to ensure system stability. + + ```bash + top + iostat -x 5 + ``` + +### Additional Resources + +- [Parallel Processing with mysqlpump](https://dev.mysql.com/doc/mysqlpump/en/) +- [MariaDB Backup Documentation](https://mariadb.com/kb/en/mariadb-backup-overview/) +- [Advanced Backup Techniques](https://mariadb.com/kb/en/backup-and-restore-overview/) + From 028df16ffb01fd6a03cd387d43e0c2811a876d13 Mon Sep 17 00:00:00 2001 From: jzhang-skysql <164920395+jzhang-skysql@users.noreply.github.com> Date: Tue, 2 Jul 2024 11:58:30 -0700 Subject: [PATCH 2/5] Update Migrating your existing Production DB.md Made changes requested in inline comments for PR 22. --- .../Migrating your existing Production DB.md | 56 +++++++++---------- 1 file changed, 27 insertions(+), 29 deletions(-) diff --git a/docs/Data loading, Migration/Migrating your existing Production DB.md b/docs/Data loading, Migration/Migrating your existing Production DB.md index 19c2331a..44b1c521 100644 --- a/docs/Data loading, Migration/Migrating your existing Production DB.md +++ b/docs/Data loading, Migration/Migrating your existing Production DB.md @@ -78,48 +78,45 @@ To minimize downtime during migration, set up live replication from your source SHOW MASTER STATUS; ``` -2. **Dump the Source Database**: Take a dump of your source database using `mysqldump` or `mariadb-dump`. +2. **Dump the Source Database**: Take a dump of your source database using `mysqldump` or `mariadb-dump`, ensuring to skip the `mysql` table and handle the user dump separately to avoid issues with the default SkySQL user. Also, include triggers, procedures, views, and schedules in the dump. ```bash - mysqldump -u [username] -p --all-databases --master-data > dump.sql + mysqldump -u [username] -p --all-databases --ignore-table=mysql.user --routines --triggers --events --skip-lock-tables > dump.sql ``` -3. **Import the Dump into SkySQL**: Import the logical dump (SQL file) into your SkySQL database. +3. **Dump the User Table Separately**: Dump the `mysql.user` table separately to handle user data without affecting the default SkySQL user. ```bash - mariadb -u [username] -p [database_name] < dump.sql + mysqldump -u [username] -p mysql user > mysql_user_dump.sql ``` -4. **Start Replication**: Turn on replication using the SkySQL `start_replication` procedure. +4. **Import the Dumps into SkySQL**: Import the logical dumps (SQL files) into your SkySQL database, ensuring to load the user dump after the main dump. - ```sql - CALL mysql.rds_start_replication; + ```bash + mariadb -u [username] -p [database_name] < dump.sql + mariadb -u [username] -p mysql < mysql_user_dump.sql ``` -### Performance Optimization During Migration - -- **Adjust Buffer Sizes**: Temporarily increase buffer sizes to optimize the import performance. +5. **Start Replication**: Turn on replication using SkySQL stored procedures. There are procedures allowing you to set and start replication. See our [documentation](https://skysqlinc.github.io/skysql-docs/Reference%20Guide/Sky%20Stored%20Procedures/) for details. 
```sql - SET GLOBAL innodb_buffer_pool_size = 2G; - SET GLOBAL innodb_log_file_size = 512M; + CALL sky.replication_grants(); + CALL sky.start_replication(); ``` +### Performance Optimization During Migration + - **Disable Foreign Key Checks**: Temporarily disable foreign key checks during import to speed up the process. ```sql SET foreign_key_checks = 0; ``` -- **Disable Binary Logging**: If binary logging is not required during the import process, disable it to improve performance. - - ```sql - SET sql_log_bin = 0; - ``` +- **Disable Binary Logging**: If binary logging is not required during the import process, and you are using a standalone instance, it can potentially be disabled to improve performance. SkyDBA Services can assist with this as part of a detailed migration plan. ### Data Integrity and Validation -- **Consistency Checks**: Perform consistency checks on the source database before migration. +- **Consistency Checks**: Perform consistency checks on the source database before migration. Use a [supported SQL client](https://skysqlinc.github.io/skysql-docs/Connecting%20to%20Sky%20DBs/) to connect to your SkySQL instance and run the following. ```sql CHECK TABLE [table_name] FOR UPGRADE; @@ -133,32 +130,33 @@ To minimize downtime during migration, set up live replication from your source ### Advanced Migration Techniques -- **Parallel Dump and Import**: Use tools that support parallel processing for dumping and importing data. +- **Adjust Buffer Sizes**: Temporarily increase buffer sizes to optimize the import performance. This can be done via the Configuration Manager in the portal. - ```bash - mysqlpump -u [username] -p --default-parallelism=4 --add-drop-database --databases [database_name] > dump.sql + ```sql + innodb_buffer_pool_size = 2G + innodb_log_file_size = 512M ``` -- **Incremental Backups**: For large datasets, use incremental backups to minimize the amount of data to be transferred. +- **Parallel Dump and Import**: Use tools that support parallel processing for dumping and importing data. ```bash - mariadb-backup --backup --target-dir=/path/to/backup --incremental-basedir=/path/to/previous/backup + mysqlpump -u [username] -p --default-parallelism=4 --add-drop-database --databases [database_name] > dump.sql ``` +- **Incremental Backups**: For large datasets, incremental backups can be used to minimize the amount of data to be transferred. SkyDBA Services can assist you with setting these up as part of a custom migration plan. + ### Monitoring and Logging -- **Enable Detailed Logging**: Enable detailed logging during the migration process to monitor and troubleshoot effectively. +- **Enable Detailed Logging**: Enable detailed logging while testing the migration process to monitor and troubleshoot effectively. ```sql SET GLOBAL general_log = 'ON'; ``` -- **Resource Monitoring**: Use monitoring tools to track resource usage (CPU, memory, I/O) during the migration to ensure system stability. +!!! Note + 💡 **Enabling the general log can cause performance issues. It is recommended to enable it only during testing pre-migration, not on a production instance.** - ```bash - top - iostat -x 5 - ``` +- **Resource Monitoring**: Use monitoring tools to track resource usage (CPU, memory, I/O) during the migration to ensure system stability. See our [monitoring documentation](https://skysqlinc.github.io/skysql-docs/Portal%20features/Service%20Monitoring%20Panels/) for details. 
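In addition to system metrics, it is worth watching replication lag while the SkySQL replica catches up, before cutting applications over. A minimal check using standard MariaDB statements (exact field names vary slightly between server versions):

```sql
-- Run against the replicating SkySQL service; use SHOW SLAVE STATUS on older versions.
SHOW REPLICA STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both be 'Yes',
-- and Seconds_Behind_Master should be close to 0 before cut-over.
```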
### Additional Resources From 8b64e259b743ed7969c2a959ce50bda8af24d27a Mon Sep 17 00:00:00 2001 From: NedPK Date: Mon, 8 Jul 2024 22:26:25 +0300 Subject: [PATCH 3/5] Re-format, examples added as separate pages, APIs reviewed --- ...On-demand or scheduled Snapshot backups.md | 87 +++++++++++++++++++ .../Snapshot Backup and Restore Examples.md | 70 +++++++++++++++ 2 files changed, 157 insertions(+) create mode 100644 docs/Backup and Restore/On-demand or scheduled Snapshot backups.md create mode 100644 docs/Backup and Restore/Snapshot Backup and Restore Examples.md diff --git a/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md b/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md new file mode 100644 index 00000000..d3ad68cd --- /dev/null +++ b/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md @@ -0,0 +1,87 @@ +# On-demand or scheduled Snapshot backups + +## Snapshot Backup Overview +SkySQL database snapshots create a point-in-time copy of the database persistent volume. Compared to full backups, snapshots provide a faster method for restoring your database with the same data. + +Snapshots are incremental in nature. This means that after the initial full snapshot of a database persistent volumes, subsequent snapshots only capture and store the changes made since the last snapshot. This approach saves a lot storage space and reduce the time it takes to create a snapshot and the overall total cost. + +You have the flexibility to trigger a snapshot as per your requirements - either on-demand or according to a pre-established schedule. + +The snapshots use backup stages to create a consistent backup of the database without requiring a global read lock for the entire duration of the backup, while allowing the database to continue processing transactions. Instead, the server read lock is only needed briefly during the BACKUP STAGE FLUSH stage, which flushes the tables to ensure that all of them are in a consistent state at the exact same point in time, independent of storage engine. The database lock temporarily suspends write operations and replication, the duration of the lock is typically just a few seconds. In a Primary/Replica topology, backups are prioritized and performed on the replica node. This approach ensures that the primary server can continue to operate in read/write mode, as the backup process is carried out on the replica node. After the backup process on the replica is completed, replication resumes automatically. + + References: +- +- +- +- + + **Note** : Database snapshots are deleted immedatily upon serivce deletion. + + ## Snapshot Backup Scheduling + +1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key + +2. Export the value from the token field to an environment variable $API_KEY + + ```bash + export API_KEY='... key data ...' + ``` + + The `API_KEY` environment variable will be used in the subsequent steps. + +3. Use it on subsequent request, e.g: + ```bash + curl --request GET 'https://api.skysql.com/provisioning/v1/services' \\ + --header "X-API-Key: $API_KEY" + ``` + +!!! Note + You can use the Skysql Backup and Restore API documentation [here](https://api.skysql.com/public/services/dbs/docs/swagger/index.html) and directly try out the backup APIs in your browser. 
+ +### One-time Snapshot + +To set up an one-time snapshot backup: + +```json +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header "X-API-Key: $API_KEY" \ +--data "{ + \"backup_type\": \"snapshot\", + \"schedule\": \"once\", + \"service_id\": \"$SERVICE_ID\" + }" +``` + +Typical API server response should look like: + +```json +{ + "id": 253, + "service_id": "dbtgf28044362", + "schedule": "snapshot", + "type": "full", + "status": "Scheduled", + "message": "Backup is scheduled." +} +``` + +You can fetch the Status of the backup using 'https://api.skysql.com/skybackup/v1/backups/schedules'. See the 'Backup Status' section for an example. The 'status' field will report Success or failure. + +###Cron Snapshot + +To set up an cron snapshot backup: + +```json + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"snapshot\", + \"schedule\": \"0 3 * * *\", + \"service_id\": \"$SERVICE_ID\" +}" +``` + diff --git a/docs/Backup and Restore/Snapshot Backup and Restore Examples.md b/docs/Backup and Restore/Snapshot Backup and Restore Examples.md new file mode 100644 index 00000000..dafd6b7b --- /dev/null +++ b/docs/Backup and Restore/Snapshot Backup and Restore Examples.md @@ -0,0 +1,70 @@ + +## Authentication + +To authenticate with the API, do the following: + +1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key + +2. Export the value from the token field to an environment variable $API_KEY + + ```bash + export API_KEY='... key data ...' + ``` +3. Use it on subsequent request, e.g: + + ```bash + curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' \\ + --header "X-API-Key: ${API_KEY}" + ``` +## Snapshot Backup Scheduling + + + +1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key + +2. Export the value from the token field to an environment variable $API_KEY + + ```bash + export API_KEY='... key data ...' + ``` + + The `API_KEY` environment variable will be used in the subsequent steps. + +3. 
Use it on subsequent request, e.g: + ```bash + curl --request GET 'https://api.skysql.com/provisioning/v1/services' \\ + --header "X-API-Key: $API_KEY" + ``` + +#### One-time Snapshot Example + + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"snapshot\", + \"schedule\": \"once\", + \"service_id\": \"$SERVICE_ID\" + }" + + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx + +#### Cron Snapshot Example + + + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"snapshot\", + \"schedule\": \"0 3 * * *\", + \"service_id\": \"$SERVICE_ID\" + }" + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx \ No newline at end of file From a756110427a6c9ad11d6e162fb8ecf8e0619f27d Mon Sep 17 00:00:00 2001 From: NedPK Date: Tue, 9 Jul 2024 13:34:43 +0300 Subject: [PATCH 4/5] New subfolder strucure added --- .../Binarylog Backup Examples.md | 70 ++ .../Bring Your Own Bucket Examples.md | 98 +++ .../Database Restore Examples.md | 124 +++ .../Delete Restore Examples.md | 33 + .../Incremental Backup Examples.md | 70 ++ .../List Restore Examples.md | 74 ++ .../Logical Backup Examples.md | 64 ++ ...On-demand or scheduled Snapshot backups.md | 87 --- .../Other backup API examples.md | 144 ++++ .../Physical Backup Examples.md | 62 ++ docs/Backup and Restore/README.md | 734 +++++------------- ...xamples.md => Snapshot Backup Examples.md} | 63 +- mkdocs.yml | 10 + 13 files changed, 961 insertions(+), 672 deletions(-) create mode 100644 docs/Backup and Restore/Binarylog Backup Examples.md create mode 100644 docs/Backup and Restore/Bring Your Own Bucket Examples.md create mode 100644 docs/Backup and Restore/Database Restore Examples.md create mode 100644 docs/Backup and Restore/Delete Restore Examples.md create mode 100644 docs/Backup and Restore/Incremental Backup Examples.md create mode 100644 docs/Backup and Restore/List Restore Examples.md create mode 100644 docs/Backup and Restore/Logical Backup Examples.md delete mode 100644 docs/Backup and Restore/On-demand or scheduled Snapshot backups.md create mode 100644 docs/Backup and Restore/Other backup API examples.md create mode 100644 docs/Backup and Restore/Physical Backup Examples.md rename docs/Backup and Restore/{Snapshot Backup and Restore Examples.md => Snapshot Backup Examples.md} (55%) diff --git a/docs/Backup and Restore/Binarylog Backup Examples.md b/docs/Backup and Restore/Binarylog Backup Examples.md new file mode 100644 index 00000000..19c358e3 --- /dev/null +++ b/docs/Backup and Restore/Binarylog Backup Examples.md @@ -0,0 +1,70 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Binarylog Backup + + +### One-time Binarylog + +To set up an one-time *binarylog* backup: + +``` +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"full\", + \"schedule\": \"once\", + \"service_id\": \"$SERVICE_ID\" + }" + +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx. You can fetch the service ID from the Fully qualified domain name(FQDN) of your service. E.g: in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID.You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard) + +##### Schedule Binarylog backup + +To set up an cron *incremental* backup: + +``` + + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"binarylog\", + \"schedule\": \"0 3 * * *\", + \"service_id\": \"$SERVICE_ID\" + }" +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx + +##### Backup status can be fetch using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example. diff --git a/docs/Backup and Restore/Bring Your Own Bucket Examples.md b/docs/Backup and Restore/Bring Your Own Bucket Examples.md new file mode 100644 index 00000000..1a8af6d7 --- /dev/null +++ b/docs/Backup and Restore/Bring Your Own Bucket Examples.md @@ -0,0 +1,98 @@ + +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Scheduling Backups to your own bucket (external storage) + +To set up an external storage backup, you need to make the following API call: + +- For *GCP* you need to create an service account key. Please follow the steps from this [documentation](https://cloud.google.com/iam/docs/keys-create-delete). Once you have created the service account key you will need to base64 encode it. You can encode it directly from a command line itself. For example the execution of command ```echo -n 'service-account-key' | base64``` will produce something like ```c2VydmljZS1hY2NvdW50LWtleQ==``` + + ```bash + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header 'X-API-Key: ${API_KEY}' \ + --data '{ + "backup_type": "full", + "schedule": "0 2 * * *", + "service_id": "dbtgf28044362", + "external_storage": { + "bucket": { + "path": "s3://my_backup_bucket", + "credentials": "c2VydmljZS1hY2NvdW50LWtleQ==" + } + } + }' + ``` + + The service account key will be in the following format: + + ```json + { + "type": "service_account", + "project_id": "XXXXXXX", + "private_key_id": "XXXXXXX", + "private_key": "-----BEGIN PRIVATE KEY-----XXXXX-----END PRIVATE KEY-----", + "client_email": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com", + "client_id": "XXXXXXX", + "auth_uri": "", + "token_uri": "", + "auth_provider_x509_cert_url": "", + "client_x509_cert_url": "", + "universe_domain": "googleapis.com" + } + ``` + +- For AWS, you must provide your own credentials. These include the AWS access key associated with an IAM account and the bucket region. For more information about AWS credentials, please refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). The required credentials are *aws_access_key_id* , *aws_secret_access_key* and *region*. For example your credentials should look like: + + ```bash + [default] + aws_access_key_id = AKIAIOSFODNN7EXAMPLE + aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY + region = us-west-2 + ``` + + You should encode your credentials base64 before passing it to the API. You can encode it directly from a command line itself. For example the execution of command ```echo '[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2' | base64``` will produce the following ```W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK```. + Using encoded credentials you will be able to pass it to the API server. 
To initiate a new backup to your external storage you need to execute an API call to the backup service: + + ``````bash + curl --location '' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header 'X-API-Key: ${API_KEY}' \ + --data '{ + "backup_type": "full", + "schedule": "0 2 ** *", + "service_id": "dbtgf28044362", + "external_storage": { + "bucket": { + "path": "s3://my_backup_bucket", + "credentials": "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK" + } + } + }' + ``` \ No newline at end of file diff --git a/docs/Backup and Restore/Database Restore Examples.md b/docs/Backup and Restore/Database Restore Examples.md new file mode 100644 index 00000000..84688e6d --- /dev/null +++ b/docs/Backup and Restore/Database Restore Examples.md @@ -0,0 +1,124 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Restore From Managed Storage + +You can restore your database from the backup located in the default SkySQL managed backup storage. In this case, you need to provide the backup ID when making the restore API call. Here is an example: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/restores' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "key": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", + "service_id": "dbtgf28044362" +}' +``` + +Inside the service_id parameter of your restore API request, you need to provide the id of the service, where you want to restore your data. + +## Restore From your Bucket (External Storage) + +You can restore your data from external storage. Your external storage bucket data should be created via one of the following tools: ```mariabackup, mysqldump```. Credentials to external storage access could be fetched from: + +- For *GCP* you need to create an service account key. Please follow the steps from this [documentation](https://cloud.google.com/iam/docs/keys-create-delete). Once you have created the service account key you will need to base64 encode it. You can encode it directly from a command line itself. For example the execution of command ```echo -n 'service-account-key' | base64``` will produce the following ```c2VydmljZS1hY2NvdW50LWtleQ==``` + + ```bash + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header 'X-API-Key: ${API_KEY}' \ + --data '{ + "backup_type": "full", + "schedule": "0 2 * * *", + "service_id": "dbtgf28044362", + "external_storage": { + "bucket": { + "path": "s3://my_backup_bucket", + "credentials": "c2VydmljZS1hY2NvdW50LWtleQ==" + } + } + }' + ``` + + The service account key will be in the following format: + + ```json + { + "type": "service_account", + "project_id": "XXXXXXX", + "private_key_id": "XXXXXXX", + "private_key": "-----BEGIN PRIVATE KEY-----XXXXX-----END PRIVATE KEY-----", + "client_email": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com", + "client_id": "XXXXXXX", + "auth_uri": "", + "token_uri": "", + "auth_provider_x509_cert_url": "", + "client_x509_cert_url": "", + "universe_domain": "googleapis.com" + } + ``` + +- For AWS, you must provide your own credentials. These include the AWS access key associated with an IAM account and the bucket region. For more information about AWS credentials, please refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). The required credentials are *aws_access_key_id* , *aws_secret_access_key* and *region*. For example your credentials should look like: + + ```bash + [default] + aws_access_key_id = AKIAIOSFODNN7EXAMPLE + aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY + region = us-west-2 + ``` + + You should encode your credentials base64 before passing it to the API. You can encode it directly from a command line itself. For example the execution of command ```echo '[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2' | base64``` will produce the following ```W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK```. 
The following request demonstrates how to restore your data from external storage:

```json
{
    "service_id": "dbtgf28044362",
    "key": "/backup.tar.gz",
    "external_source": {
      "bucket": "gs://my_backup_bucket",
      "method": "mariabackup",
      "credentials": "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK"
    }
}
```

If your backup data is encrypted, you also need to pass the encryption key:

```json
{
    "service_id": "dbtgf28044362",
    "key": "/backup.tar.gz",
    "external_source": {
      "bucket": "gs://my_backup_bucket",
      "method": "mariabackup",
      "credentials": "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK",
      "encryption_key": "my_encryption_key"
    }
}
```
\ No newline at end of file
diff --git a/docs/Backup and Restore/Delete Restore Examples.md b/docs/Backup and Restore/Delete Restore Examples.md
new file mode 100644
index 00000000..b6f2c278
--- /dev/null
+++ b/docs/Backup and Restore/Delete Restore Examples.md
@@ -0,0 +1,33 @@
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +In order to delete an already scheduled Restore, users need to make the following API call: + + +```bash +curl --location --request DELETE 'https://api.skysql.com/skybackup/v1/restores/12' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` \ No newline at end of file diff --git a/docs/Backup and Restore/Incremental Backup Examples.md b/docs/Backup and Restore/Incremental Backup Examples.md new file mode 100644 index 00000000..6add0193 --- /dev/null +++ b/docs/Backup and Restore/Incremental Backup Examples.md @@ -0,0 +1,70 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Incremental Backup + +Incremental backups can be taken once you have full backup. Read [here](https://mariadb.com/kb/en/incremental-backup-and-restore-with-mariabackup/) for more details. + +### One-time Incremental + +To set up an one-time *incremental* backup, you need to make the following API call: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "backup_type": "incremental", + "schedule": "once", + "service_id": "dbtgf28044362" +}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx. You can fetch the service ID from the Fully qualified domain name(FQDN) of your service. E.g: in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID.You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard) + +### Cron Incremental + +To set up an cron *incremental* backup, you need to make the following API call: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "backup_type": "incremental", + "schedule": "0 3 * * *", + "service_id": "dbtgf28044362" +}' +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx + + +##### Backup status can be fetch using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example. \ No newline at end of file diff --git a/docs/Backup and Restore/List Restore Examples.md b/docs/Backup and Restore/List Restore Examples.md new file mode 100644 index 00000000..2cec338a --- /dev/null +++ b/docs/Backup and Restore/List Restore Examples.md @@ -0,0 +1,74 @@ +details> + +Authentication + +

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```
+ +In order to get all Restores scheduled in the past you need to make api call: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/restores' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` + +#### Get Restore by ID + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/restores/12' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` + +Typical response of those two apis should look like: + +In case restore is in progress: + +```json +[ + { + "id": 12, + "service_id": "dbtgf28216706", + "bucket": "gs://sky-syst0000-backup-us-84e9d84ecf265a/orgpxw1x", + "key": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", + "type": "physical", + "status": "Running", + "message": "server is not-ready" + } +] +``` + +In case restore completed: + +```json +[ + { + "id": 13, + "service_id": "dbtgf28216706", + "bucket": "gs://sky-syst0000-backup-us-84e9d84ecf265a/orgpxw1x", + "key": "dda9b72460c9c0d9d61a7f01b6a33e39:dbtgf28216706:tx-filip-mdb-ms-0", + "type": "physical", + "status": "Succeeded", + "message": "Restore has succeeded!" + } +] +``` \ No newline at end of file diff --git a/docs/Backup and Restore/Logical Backup Examples.md b/docs/Backup and Restore/Logical Backup Examples.md new file mode 100644 index 00000000..3656965c --- /dev/null +++ b/docs/Backup and Restore/Logical Backup Examples.md @@ -0,0 +1,64 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +#### Logical(dump) Backup + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "backup_type": "dump", + "schedule": "once", + "service_id": "dbtgf28044362" +}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx. You can fetch the service ID from the Fully qualified domain name(FQDN) of your service. E.g: in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID.You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard) + +### Logical(dump) Backup + +To set up an cron *Logical(dump)* backup, you need to make the following API call: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "backup_type": "dump", + "schedule": "0 3 * * *", + "service_id": "dbtgf28044362" +}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx + +##### Backup status can be fetch using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example. \ No newline at end of file diff --git a/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md b/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md deleted file mode 100644 index d3ad68cd..00000000 --- a/docs/Backup and Restore/On-demand or scheduled Snapshot backups.md +++ /dev/null @@ -1,87 +0,0 @@ -# On-demand or scheduled Snapshot backups - -## Snapshot Backup Overview -SkySQL database snapshots create a point-in-time copy of the database persistent volume. Compared to full backups, snapshots provide a faster method for restoring your database with the same data. - -Snapshots are incremental in nature. This means that after the initial full snapshot of a database persistent volumes, subsequent snapshots only capture and store the changes made since the last snapshot. This approach saves a lot storage space and reduce the time it takes to create a snapshot and the overall total cost. - -You have the flexibility to trigger a snapshot as per your requirements - either on-demand or according to a pre-established schedule. - -The snapshots use backup stages to create a consistent backup of the database without requiring a global read lock for the entire duration of the backup, while allowing the database to continue processing transactions. Instead, the server read lock is only needed briefly during the BACKUP STAGE FLUSH stage, which flushes the tables to ensure that all of them are in a consistent state at the exact same point in time, independent of storage engine. The database lock temporarily suspends write operations and replication, the duration of the lock is typically just a few seconds. In a Primary/Replica topology, backups are prioritized and performed on the replica node. This approach ensures that the primary server can continue to operate in read/write mode, as the backup process is carried out on the replica node. After the backup process on the replica is completed, replication resumes automatically. 
- - References: -- -- -- -- - - **Note** : Database snapshots are deleted immedatily upon serivce deletion. - - ## Snapshot Backup Scheduling - -1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key - -2. Export the value from the token field to an environment variable $API_KEY - - ```bash - export API_KEY='... key data ...' - ``` - - The `API_KEY` environment variable will be used in the subsequent steps. - -3. Use it on subsequent request, e.g: - ```bash - curl --request GET 'https://api.skysql.com/provisioning/v1/services' \\ - --header "X-API-Key: $API_KEY" - ``` - -!!! Note - You can use the Skysql Backup and Restore API documentation [here](https://api.skysql.com/public/services/dbs/docs/swagger/index.html) and directly try out the backup APIs in your browser. - -### One-time Snapshot - -To set up an one-time snapshot backup: - -```json -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header "X-API-Key: $API_KEY" \ ---data "{ - \"backup_type\": \"snapshot\", - \"schedule\": \"once\", - \"service_id\": \"$SERVICE_ID\" - }" -``` - -Typical API server response should look like: - -```json -{ - "id": 253, - "service_id": "dbtgf28044362", - "schedule": "snapshot", - "type": "full", - "status": "Scheduled", - "message": "Backup is scheduled." -} -``` - -You can fetch the Status of the backup using 'https://api.skysql.com/skybackup/v1/backups/schedules'. See the 'Backup Status' section for an example. The 'status' field will report Success or failure. - -###Cron Snapshot - -To set up an cron snapshot backup: - -```json - curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - --header "X-API-Key: $API_KEY" \ - --data "{ - \"backup_type\": \"snapshot\", - \"schedule\": \"0 3 * * *\", - \"service_id\": \"$SERVICE_ID\" -}" -``` - diff --git a/docs/Backup and Restore/Other backup API examples.md b/docs/Backup and Restore/Other backup API examples.md new file mode 100644 index 00000000..4c014b46 --- /dev/null +++ b/docs/Backup and Restore/Other backup API examples.md @@ -0,0 +1,144 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Working with Backup Schedules + +### Get backup schedules inside the Organization : + +```bash +curl --location '' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + +#### Get all Backup Schedules per service + +To get backup schedules for specific service : + +```bash +curl --location '' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + +#### Get Backup Schedule by ID + +To get specific backup schedule by id : + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules/200' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + + +#### Update Backup Schedule + +In the following example, we update the backup schedule to 9 AM UTC. Remember, you cannot change the schedules for one-time backups. +To update specific backup schedule you need to make the following API call: + +```bash +curl --location --request PATCH '' \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' \ +--data '{ + "schedule": "0 9 ** *" +}' +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) + +#### Delete Backup Schedule + +To delete a backup schedule you need to provide the backup schedule id. Example of the api call below: + +```bash +curl --location --request DELETE 'https://api.skysql.com/skybackup/v1/backups/schedules/215' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + + +## Backup Status + +The following API illustrates how to get the available backups and status of backup jobs . + +### List all backups inside the organization + +Here is an example to fetch all the available Backups in your org: + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + +### List all backups by service + +To list all backups available for your service : + +```bash +curl --location 'https://api.skysql.com/skybackup/v1/backups?service_id=dbtgf28216706' \ +--header 'Accept: application/json' \ +--header 'X-API-Key: ${API_KEY}' +``` +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) + + +The typical response of either of two calls should look like: + +```json +{ + "backups": [ + { + "id": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", + "service_id": "dbtgf28216706", + "type": "full", + "method": "skybucket", + "server_pod": "tx-filip-mdb-ms-0", + "backup_size": 5327326, + "reference_full_backup": "", + "point_in_time": "2024-03-26 17:18:21", + "start_time": "2024-03-26T17:18:57Z", + "end_time": "2024-03-26T17:19:01Z", + "status": "Succeeded" + } + ], + "backups_count": 1, + "pages_count": 1 +} +``` + +> The ** Backup id is the most important part of this data as you need to provide it in the restore api call** to schedule restore execution. 
diff --git a/docs/Backup and Restore/Physical Backup Examples.md b/docs/Backup and Restore/Physical Backup Examples.md new file mode 100644 index 00000000..fa3badfb --- /dev/null +++ b/docs/Backup and Restore/Physical Backup Examples.md @@ -0,0 +1,62 @@ +
<details>
<summary>Authentication</summary>

1. Go to the SkySQL API Key management page and generate an API key.

2. Export the value from the token field to an environment variable `$API_KEY`:

    ```
    export API_KEY='... key data ...'
    ```

3. Use it on subsequent requests, e.g.:

    ```bash
    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
    ```

</details>
+ +## Full(physical) Backup Scheduling + +#### One-time Full(physical) Backup Example + + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"full\", + \"schedule\": \"once\", + \"service_id\": \"$SERVICE_ID\" + }" + + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx. You can fetch the service ID from the Fully qualified domain name(FQDN) of your service. E.g: in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID.You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard) + +#### Cron Full(physical) Example + + + curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' + --header 'Content-Type: application/json' \ + --header 'Accept: application/json' \ + --header "X-API-Key: $API_KEY" \ + --data "{ + \"backup_type\": \"full\", + \"schedule\": \"0 3 * * *\", + \"service_id\": \"$SERVICE_ID\" + }" + +- API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/) +- SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron) +- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx + +##### Backup status can be fetch using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example. \ No newline at end of file diff --git a/docs/Backup and Restore/README.md b/docs/Backup and Restore/README.md index 9024f722..e9d3edc3 100644 --- a/docs/Backup and Restore/README.md +++ b/docs/Backup and Restore/README.md @@ -1,582 +1,214 @@ -# Backup and Restore +# **Backup and Restore** +The Backup and Restore service provides SkySQL customers with a comprehensive list of features through a secure API and a user-friendly portal. The service extends the automated nightly backups with a number of self-service features. Users can automatically create and store backups of their databases to ensure additional data safety or provide a robust disaster recovery solution. The backups are stored on reliable and secure cloud storage, ensuring they are readily available when needed. The backup process is seamless and does not affect the database performance. SkySQL also offers the flexibility to customize backup schedule according to your specific needs. Backups on large data sets can take time. + +You instruct the creation of a backup using a "schedule". You can either schedule a one-time backup (schedule now) or set up automatic backups using a cron schedule. A backup schedule results in a backup job which can be tracked using the status API. We support the following types of backups: snapshot, full (physical), incremental (physical), binary log, and dump (logical). + +## Backup +### **SkySQL Snapshot Backups** + +
<details>
<summary>Overview</summary>

- SkySQL database snapshots create a point-in-time copy of the database persistent volume. Compared to full backups, snapshots provide a faster method for restoring your database with the same data.

- Snapshots are incremental in nature. After the initial full snapshot of the database persistent volume, subsequent snapshots only capture and store the changes made since the last snapshot. This approach saves a lot of storage space and reduces both the time it takes to create a snapshot backup and the related cloud storage cost.

- Users have the flexibility to trigger a snapshot as per their scheduling requirements - either on-demand or according to a pre-defined schedule.

- SkySQL snapshots benefit from MariaDB's [backup stage flush](https://mariadb.com/kb/en/backup-stage/#:~:text=active%20DDL%20commands.-,BACKUP%20STAGE%20FLUSH,as%20closed%20for%20the%20backup.) to create a consistent backup of the database - the database lock temporarily suspends write operations and replication for just a few seconds. In a Primary/Replica topology, snapshot backups are prioritized and performed on the replica node, so the primary server can continue to operate in read/write mode while the backup is carried out on the replica. After the backup process on the replica is completed, replication resumes automatically.

</details>
    + +##### Snapshot Backup Examples + +SkySQL supports database snapshot backups either on-demand or according to a pre-established schedule. +Below are examples of how to schedule a snapshot backup using the SkySQL API. + +- [Examples](Snapshot Backup Examples.md) + +***Important:*** Database snapshots are deleted immediately upon service deletion. + +
### **Full (physical) Backups**
<details>
<summary>Overview</summary>

- Full backups create a complete backup of the database server into a new backup folder. They use [mariabackup](https://mariadb.com/kb/en/full-backup-and-restore-with-mariabackup/) under the hood. Physical backups are performed by copying the individual data files or directories.

- The physical backup uses backup stages to create a consistent backup of the database without requiring a global read lock for the entire duration of the backup, while allowing the database to continue processing transactions. Instead, the server read lock is only needed briefly during the [BACKUP STAGE FLUSH](https://mariadb.com/kb/en/backup-stage/#:~:text=active%20DDL%20commands.-,BACKUP%20STAGE%20FLUSH,as%20closed%20for%20the%20backup.) stage, which flushes the tables to ensure that all of them are in a consistent state at the exact same point in time, independent of storage engine. The database lock temporarily suspends write operations and replication; the duration of the lock is typically just a few seconds. In a Primary/Replica topology, backups are prioritized and performed on the replica node. This approach ensures that the primary server can continue to operate in read/write mode, as the backup process is carried out on the replica node. After the backup process on the replica is completed, replication resumes automatically. A sketch of the backup stages follows this overview.

</details>
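For illustration, these are the `BACKUP STAGE` statements that a stage-aware tool such as Mariabackup works through. They are shown here only to make the locking behaviour concrete; you would not normally run them by hand on a SkySQL service:

```sql
BACKUP STAGE START;         -- backup begins; data files start being copied
BACKUP STAGE FLUSH;         -- tables are flushed to a consistent state (the brief lock described above)
BACKUP STAGE BLOCK_DDL;     -- DDL is blocked so copied files keep a stable structure
BACKUP STAGE BLOCK_COMMIT;  -- commits pause briefly while the final changes are captured
BACKUP STAGE END;           -- locks are released and normal processing resumes
```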
    +#### Full (physical) Backup Examples + +SkySQL supports database physical backups either on-demand or according to a pre-established schedule. Below are examples of how to schedule a physical backup using the SkySQL API. + +- [Examples](Physical Backup Examples.md) + +
<details>
<summary>References</summary>

- [Full Backup and Restore with Mariabackup](https://mariadb.com/kb/en/full-backup-and-restore-with-mariabackup/)

</details>
### **Incremental Backups**
<details>
<summary>Overview</summary>

Incremental backups update a previous backup with any changes to the data that have occurred since the initial backup was taken.

InnoDB pages contain log sequence numbers, or LSNs. Whenever you modify a row in any InnoDB table in the database, the storage engine increments this number. When performing an incremental backup, Mariabackup checks the most recent LSN for the backup against the LSNs contained in the database. It then updates any of the backup files that have fallen behind. A command sketch follows this overview.

</details>
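For context, this is roughly what an LSN-based incremental backup looks like with the underlying tool. The paths are placeholders, and on SkySQL the managed backup service performs these steps for you through the API:

```bash
# Full backup first, then an incremental that only copies pages whose LSN has advanced.
# (Connection credentials omitted for brevity.)
mariadb-backup --backup --target-dir=/backups/full
mariadb-backup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full
```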
    + +##### Incremental Backup Examples +SkySQL supports database incremental backups either on-demand or according to a pre-established schedule. +Below are examples of how to schedule an incremental backup using the SkySQL API. + +- [Examples](Incremental Backup Examples.md) + +### **Logical (Mariadb-dump) Backups** -The SkySQL Backup service provides comprehensive Backup and Restore features through a secure API. We extend the automated nightly backups with a number of self service features. You can automatically create and store backups of your databases to ensure additional data safety or provide a robust disaster recovery solution. The backups are stored on reliable and secure cloud storage, ensuring they are readily available when needed. The backup process is seamless and does not affect the performance of your databases. SkySQL also offers the flexibility to customize your backup schedule according to your specific needs. +
<details>
<summary>Overview</summary>

Logical backups consist of the SQL statements necessary to restore the data, such as CREATE DATABASE, CREATE TABLE, and INSERT. This is done using [mariadb-dump](https://mariadb.com/kb/en/mariadb-dump/) and is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small. A command sketch follows this overview.

</details>
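As a point of reference, a logical dump with the underlying tool looks like the following. The database name and user are placeholders, and on SkySQL dump backups are normally scheduled through the backup API:

```bash
mariadb-dump -u admin -p --single-transaction --routines --triggers --events my_database > my_database.sql
```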
    -- **On-demand or scheduled Snapshot backups**: Snapshots allows you to create a point-in-time copy of a database persistent volume of your database. Compared to full backups, snapshots provide a faster method for restoring your database with the same data. Snapshots are incremental in nature. This means that after the initial full snapshot of a database persistent volumes, subsequent snapshots only capture and store the changes made since the last snapshot. This approach saves a lot storage space and reduce the time it takes to create a snapshot and the overall total cost. You have the flexibility to trigger a snapshot as per your requirements - either on-demand or according to a pre-established schedule. The snapshots use backup stages to create a consistent backup of the database without requiring a global read lock for the entire duration of the backup, while allowing the database to continue processing transactions. Instead, the server read lock is only needed briefly during the BACKUP STAGE FLUSH stage, which flushes the tables to ensure that all of them are in a consistent state at the exact same point in time, independent of storage engine. The database lock temporarily suspends write operations and replication, the duration of the lock is typically just a few seconds. In a Primary/Replica topology, backups are prioritized and performed on the replica node. This approach ensures that the primary server can continue to operate in read/write mode, as the backup process is carried out on the replica node. After the backup process on the replica is completed, replication resumes automatically. - References: -- -- -- -- +#### Logical Backup Examples - **Note** : Backups created by snapshots are deleted immedatily upon serivce deletion. - -- **Full (physical) backups** : Full backups create a complete backup of the database server into a new backup folder. It uses [mariabackup](https://mariadb.com/kb/en/full-backup-and-restore-with-mariabackup/) under the hood. Physical backups are performed by copying the individual data files or directories.The physican backup uses backup stages to create a consistent backup of the database without requiring a global read lock for the entire duration of the backup, while allowing the database to continue processing transactions. Instead, the server read lock is only needed briefly during the BACKUP STAGE FLUSH stage, which flushes the tables to ensure that all of them are in a consistent state at the exact same point in time, independent of storage engine. The database lock temporarily suspends write operations and replication, the duration of the lock is typically just a few seconds. In a Primary/Replica topology, backups are prioritized and performed on the replica node. This approach ensures that the primary server can continue to operate in read/write mode, as the backup process is carried out on the replica node. After the backup process on the replica is completed, replication resumes automatically. -- References: -- https://mariadb.com/kb/en/full-backup-and-restore-with-mariabackup/ +SkySQL supports database logical backups either on-demand or according to a pre-established schedule. Below are examples of how to schedule a logical backup using the SkySQL API. -- **Incremental backups** : Incremental backups update a previous backup with any changes to the data that have occurred since the inital backup was taken. -- InnoDB pages contain log sequence numbers, or LSN's. 
Whenever you modify a row on any InnoDB table on the database, the storage engine increments this number. When performing an incremental backup, Mariabackup checks the most recent LSN for the backup against the LSN's contained in the database. It then updates any of the backup files that have fallen behind. -- References: -- https://mariadb.com/kb/en/incremental-backup-and-restore-with-mariabackup/ +- [Examples](Logical Backup Examples.md) -- **Logical backups** : Logical backups consist of the SQL statements necessary to restore the data, such as CREATE DATABASE, CREATE TABLE and INSERT. This is done using mariadb-dump() and is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small. +
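As a convenience, the call below sketches how a one-time logical (dump) backup is scheduled through the SkySQL backup API, following the `backup_type: dump` pattern used in the API reference on this page; the service ID is a placeholder.

```bash
# Schedule a one-time logical (dump) backup for a service.
curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "X-API-Key: ${API_KEY}" \
  --data '{
    "backup_type": "dump",
    "schedule": "once",
    "service_id": "dbtgf28044362"
  }'
```

Replace `once` with a cron expression (for example `0 3 * * *`) to run the dump on a recurring schedule.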
+
+References
+

+[mariadb-dump](https://mariadb.com/kb/en/mariadb-dump/)
-- **Backup your binlogs** : Binlogs record database changes (data modifications, table structure changes) in a sequential, binary format. You can preserve binlogs for setting up replication or to recover to a certain point-in-time.
-
-- **Automatic nightly backups**: Automated nightly backups include a full backup of every database in the service to ensure that your SkySQL Database service is backed up regularly.
-
-- **On-demand or scheduled backups** : you can initiate backups whenever needed - on demand or based on you pre-defined schedule.
-
-- **Bring your own Bucket (BYOB)** : you can backup or restore data to/from your own bucket in either GCP or AWS.
+

+
-- **Point-in-time recovery** : you can restore from a full or a logical backup and then use a binlog backup to restore to a point-in-time.
-
-- **Secure backup/restores** : Control backup/restore privileges by granting roles to users in SkySQL.
+### **BinaryLog Backups**
-## Pricing
-while the daily automated backups are included the use of this API will incur nominal additional charges. Please contact info@skysql.com for details.
+
+
+Overview
+
-The following documentation describes the API for the SkySQL Backup Service. This can be used directly with any HTTP client.
+

+Binlogs record database changes (data modifications, table structure changes) in a sequential, binary format. You can preserve binlogs for setting up replication or to recover to a certain point-in-time.
-!!! Note
-    Please refer to the API docs (swagger) for the latest API.
-    The information below might be slightly outdated.
+

    +
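The following call sketches how a recurring binary log backup is scheduled with the same API, using the `binarylog` backup type shown in the reference material on this page; the service ID and cron schedule are placeholders.

```bash
# Schedule a binary log backup every day at 03:00 UTC.
curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "X-API-Key: ${API_KEY}" \
  --data '{
    "backup_type": "binarylog",
    "schedule": "0 3 * * *",
    "service_id": "dbtgf28044362"
  }'
```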
-## Authentication
+#### BinaryLog Backup Examples
-To authenticate with the API, do the following:
+- [Examples](BinaryLog Backup Examples.md)
-1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key
+### **Additional Backup Options (with Examples)**
+
+
+
+- **Replication as Backup** : In situations where the service cannot be locked or stopped, or is under heavy load, performing backups directly on a primary server may not be the preferred option. Using a replica database instance for backups allows the replica to be shut down or locked, enabling backup operations without impacting the primary server; a minimal sketch of this workflow follows this list.
+
+    The approach is commonly implemented in the following manner:
+    - The primary server replicates data to a replica.
+    - Backups are then initiated from the replica, ensuring no disruption to the primary server.
+
+    Details on how to set up replication with your SkySQL instance can be found [here](../Data%20loading%2C%20Migration/Replicating%20data%20from%20external%20DB/).
+
+- **Automatic Nightly Backups** : Automated nightly backups include a full backup of every database in the service to ensure that your SkySQL Database service is backed up regularly. Nightly backups run for every SkySQL database by default.
+
+- **Bring Your Own Bucket (BYOB)** : You can back up or restore data to/from your own bucket in either GCP or AWS. Sample GCP and AWS scripts can be found [here](../Backup%20and%20Restore/Bring%20Your%20Own%20Bucket%20Examples/).
+
+- **Point-in-time Recovery** : You can restore from a full or a logical backup and then use a binlog backup to restore to a point-in-time.
+
+- **Secure Backup/Restores** : Control backup/restore privileges by granting roles to users in SkySQL.
+
+- **Other Backup API Examples** : Various API scripts providing examples of listing backups, checking backup statuses, and working with backup schedules can be found [here](../Backup%20and%20Restore/Other%20backup%20API%20examples/).
+
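The sketch referenced in the Replication as Backup item above shows one way to take a backup from a replica you manage yourself (for example, an external replica set up as described in the linked replication guide). The user names, paths, and topology are assumptions for illustration; SkySQL-managed Primary/Replica services already run backups on the replica node automatically.

```bash
# Run these commands on the replica host itself:
# mariadb-backup needs local access to the data directory.

# 1. Pause replication so the backup is taken from a quiescent data set
#    (older MariaDB/MySQL versions use STOP SLAVE instead of STOP REPLICA).
mariadb -u admin_user -p -e "STOP REPLICA;"

# 2. Take a physical backup with mariadb-backup.
mariadb-backup --backup --user=backup_user --password \
  --target-dir=/backups/$(date +%F)

# 3. Resume replication once the backup completes.
mariadb -u admin_user -p -e "START REPLICA;"
```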
    -2. Export the value from the token field to an environment variable $API_KEY +## Restores -3. Use it on subsequent request, e.g: +**WARNING** +> Restoring from a Backup will likely wipe out all data in your target DB service. If you aren't sure, first take a backup of the DB service where a restore is being performed. The DB being restored will also be stopped during a Restore. You will need to restart it. - ```bash - curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' \\ - --header "X-API-Key: ${API_KEY}" - ``` +Users can instruct the restore of their SkySQL Database from their own SkySQL storage or from an external storage they own. The restore API provides options for listing, adding, and deleting a scheduled restore operation. -## Scheduling backups +### **List Restore Schedules** -Backups on large data sets can take time. You instruct the creation of a backup using a "schedule". You can either schedule a one-time backup (schedule now) or set up automatic backups using a cron schedule. A backup schedule results in a backup job which can be tracked using the status API. We support the following types of backups : full(physical), incremental(physical), binary log, and dump(logical). +SkySQL Users can fetch their already existing database restore schedules using the backup API. Check the provided API examples for details. -### Create a backup +#### List Restore Examples -To create a backup schedule, you need to have the "administrator" role. You can add members and configure roles using the [SkySQL portal](https://app.skysql.com/settings/user-management). +- [Examples](List Restore Examples.md) -#### Full Backup +### **Create a Restore** -##### One-time Full +SkySQL Users can restore their databases using their own SkySQL managed backup storage or using an external storage they own. Check the provided service API examples for details. -To create a *full* backup you need to make the following API call: +#### Database Restore Examples -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "full", - "schedule": "once", - "service_id": "dbtgf28044362" -}' -``` -**NOTE** -> note that each launched database is tracked with a service ID in SkySQL. You can fetch the service ID from the Fully qualified domain name(FQDN) of your service. For instance in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID. -> You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard) - -Typical API server response should look like: +- [Examples](Database Restore Examples.md) + +### **Delete Restore Schedule** -```json -{ - "id": 253, - "service_id": "dbtgf28044362", - "schedule": "once", - "type": "full", - "status": "Scheduled", - "message": "Backup is scheduled." -} -``` -> you can fetch the Status of the backup using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example. The 'status' field will report Success or failure. +SkySQL Users can delete their already defined database restore schedules with the provided service API. 
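To complement the restore sections above, the calls below sketch the restore workflow against the backup service API documented on this page; the backup key, service ID, and restore ID are placeholder values.

```bash
# Create a restore on a target service from an existing backup ID.
curl --location 'https://api.skysql.com/skybackup/v1/restores' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "X-API-Key: ${API_KEY}" \
  --data '{
    "key": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0",
    "service_id": "dbtgf28044362"
  }'

# List restore jobs to track progress and find the restore ID.
curl --location 'https://api.skysql.com/skybackup/v1/restores' \
  --header 'Accept: application/json' \
  --header "X-API-Key: ${API_KEY}"

# Delete a completed restore schedule by its ID.
curl --location --request DELETE 'https://api.skysql.com/skybackup/v1/restores/12' \
  --header 'Accept: application/json' \
  --header "X-API-Key: ${API_KEY}"
```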
-##### Schedule a backup job using Cron - -To set up an automatic periodic *full* backup at 3 am UTC, you need to make the following API call: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "full", - "schedule": "0 3 * * *", - "service_id": "dbtgf28044362" -}' -``` - -For more information about cron schedules take a look at this [document](https://en.wikipedia.org/wiki/Cron). - -#### Incremental Backup - -Incremental backups can be taken once you have full backup. Read [here](https://mariadb.com/kb/en/incremental-backup-and-restore-with-mariabackup/) for more details. - -##### One-time Incremental - -To set up an one-time *incremental* backup, you need to make the following API call: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "incremental", - "schedule": "once", - "service_id": "dbtgf28044362" -}' -``` - -##### Cron Incremental - -To set up an cron *incremental* backup, you need to make the following API call: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "incremental", - "schedule": "0 3 * * *", - "service_id": "dbtgf28044362" -}' -``` - -#### Binarylog Backup - -##### One-time Binarylog - -To set up an one-time *binarylog* backup: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "binarylog", - "schedule": "once", - "service_id": "dbtgf28044362" -}' -``` - -##### Schedule Binarylog backup - -To set up an cron *incremental* backup: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "binarylog", - "schedule": "0 3 * * *", - "service_id": "dbtgf28044362" -}' -``` - -#### Take a Logical backup (Mariadb-dump) - -##### One-time Dump - -To set up an one-time *dump* backup: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "dump", - "schedule": "once", - "service_id": "dbtgf28044362" -}' -``` - -##### Schedule a Logical backup (Dump) - -To set up an cron *dump* backup: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "backup_type": "dump", - "schedule": "0 3 * * *", - "service_id": "dbtgf28044362" -}' -``` - -#### Scheduling Backups to your own bucket (external storage) - -To set up an external storage backup, you need to make the following API call: - -- For *GCP* you need to create an service account key. Please follow the steps from this [documentation](https://cloud.google.com/iam/docs/keys-create-delete). 
Once you have created the service account key you will need to base64 encode it. You can encode it directly from a command line itself. For example the execution of command ```echo -n 'service-account-key' | base64``` will produce something like ```c2VydmljZS1hY2NvdW50LWtleQ==``` - - ```bash - curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - --header 'X-API-Key: ${API_KEY}' \ - --data '{ - "backup_type": "full", - "schedule": "0 2 * * *", - "service_id": "dbtgf28044362", - "external_storage": { - "bucket": { - "path": "s3://my_backup_bucket", - "credentials": "c2VydmljZS1hY2NvdW50LWtleQ==" - } - } - }' - ``` - - The service account key will be in the following format: - - ```json - { - "type": "service_account", - "project_id": "XXXXXXX", - "private_key_id": "XXXXXXX", - "private_key": "-----BEGIN PRIVATE KEY-----XXXXX-----END PRIVATE KEY-----", - "client_email": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com", - "client_id": "XXXXXXX", - "auth_uri": "", - "token_uri": "", - "auth_provider_x509_cert_url": "", - "client_x509_cert_url": "", - "universe_domain": "googleapis.com" - } - ``` - -- For AWS, you must provide your own credentials. These include the AWS access key associated with an IAM account and the bucket region. For more information about AWS credentials, please refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). The required credentials are *aws_access_key_id* , *aws_secret_access_key* and *region*. For example your credentials should look like: - - ```bash - [default] - aws_access_key_id = AKIAIOSFODNN7EXAMPLE - aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY - region = us-west-2 - ``` - - You should encode your credentials base64 before passing it to the API. You can encode it directly from a command line itself. For example the execution of command ```echo '[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2' | base64``` will produce the following ```W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK```. - Using encoded credentials you will be able to pass it to the API server. 
To initiate a new backup to your external storage you need to execute an API call to the backup service: - - ``````bash - curl --location '' \ - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - --header 'X-API-Key: ${API_KEY}' \ - --data '{ - "backup_type": "full", - "schedule": "0 2 ** *", - "service_id": "dbtgf28044362", - "external_storage": { - "bucket": { - "path": "s3://my_backup_bucket", - "credentials": "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK" - } - } - }' - ``` - -### Working with Backup Schedules - -To get backup schedules inside the Organization : - -```bash -curl --location '' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -#### Get all Backup Schedules per service - -To get backup schedules for specific service : - -```bash -curl --location '' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -#### Get Backup Schedule by ID - -To get specific backup schedule by id : - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules/200' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -#### Update Backup Schedule - -In the following example, we update the backup schedule to 9 AM UTC. Remember, you cannot change the schedules for one-time backups. -To update specific backup schedule you need to make the following API call: - -```bash -curl --location --request PATCH '' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "schedule": "0 9 ** *" -}' -``` - -#### Delete Backup Schedule - -To delete a backup schedule you need to provide the backup schedule id. Example of the api call below: - -```bash -curl --location --request DELETE 'https://api.skysql.com/skybackup/v1/backups/schedules/215' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -## Backup Status - -The following API illustrates how to get the available backups and status of backup jobs . - -### List all backups inside the organization - -Here is an example to fetch all the available Backups in your org: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -### List all backups by service - -To list all backups available for your service : - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/backups?service_id=dbtgf28216706' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -The typical response of either of two calls should look like: - -```json -{ - "backups": [ - { - "id": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", - "service_id": "dbtgf28216706", - "type": "full", - "method": "skybucket", - "server_pod": "tx-filip-mdb-ms-0", - "backup_size": 5327326, - "reference_full_backup": "", - "point_in_time": "2024-03-26 17:18:21", - "start_time": "2024-03-26T17:18:57Z", - "end_time": "2024-03-26T17:19:01Z", - "status": "Succeeded" - } - ], - "backups_count": 1, - "pages_count": 1 -} -``` +#### Delete Restore Examples -> The ** Backup id is the most important part of this data as you need to provide it in the restore api call** to schedule restore execution. 
+- [Examples](Delete Restore Examples.md) + +## Pricing -## Restores +While the daily automated backups are included, the use of this API will incur nominal additional charges. Please contact info@skysql.com for details. -**WARNING** -> restoring from a Backup will likely wipe out all data in your target DB service. If you aren't sure, first take a backup of the Db service where a restore is being performed. The DB being restored will also be stopped during a Restore. You will need restart it. - -### Creating Restore Job - -#### Restore From Managed Storage - -You can restore your database from the backup located in the default SkySQL managed backup storage. In this case, you need to provide the backup ID when making the restore API call. Here is an example: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/restores' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' \ ---data '{ - "key": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", - "service_id": "dbtgf28044362" -}' -``` - -Inside the service_id parameter of your restore API request, you need to provide the id of the service, where you want to restore your data. - -#### Restore From your Bucket (External Storage) - -You can restore your data from external storage. Your external storage bucket data should be created via one of the following tools: ```mariabackup, mysqldump```. Credentials to external storage access could be fetched from: - -- For *GCP* you need to create an service account key. Please follow the steps from this [documentation](https://cloud.google.com/iam/docs/keys-create-delete). Once you have created the service account key you will need to base64 encode it. You can encode it directly from a command line itself. For example the execution of command ```echo -n 'service-account-key' | base64``` will produce the following ```c2VydmljZS1hY2NvdW50LWtleQ==``` - - ```bash - curl --location 'https://api.skysql.com/skybackup/v1/backups/schedules' \ - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - --header 'X-API-Key: ${API_KEY}' \ - --data '{ - "backup_type": "full", - "schedule": "0 2 * * *", - "service_id": "dbtgf28044362", - "external_storage": { - "bucket": { - "path": "s3://my_backup_bucket", - "credentials": "c2VydmljZS1hY2NvdW50LWtleQ==" - } - } - }' - ``` - - The service account key will be in the following format: - - ```json - { - "type": "service_account", - "project_id": "XXXXXXX", - "private_key_id": "XXXXXXX", - "private_key": "-----BEGIN PRIVATE KEY-----XXXXX-----END PRIVATE KEY-----", - "client_email": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com", - "client_id": "XXXXXXX", - "auth_uri": "", - "token_uri": "", - "auth_provider_x509_cert_url": "", - "client_x509_cert_url": "", - "universe_domain": "googleapis.com" - } - ``` - -- For AWS, you must provide your own credentials. These include the AWS access key associated with an IAM account and the bucket region. For more information about AWS credentials, please refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). The required credentials are *aws_access_key_id* , *aws_secret_access_key* and *region*. 
For example your credentials should look like: - - ```bash - [default] - aws_access_key_id = AKIAIOSFODNN7EXAMPLE - aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY - region = us-west-2 - ``` - - You should encode your credentials base64 before passing it to the API. You can encode it directly from a command line itself. For example the execution of command ```echo '[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2' | base64``` will produce the following ```W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK```. - -The following request demonstrates how to restore your data from an external storage: - -```json -{ - "service_id": "dbtgf28044362", - "key": "/backup.tar.gz", - "external_source": { - "bucket": "gs://my_backup_bucket", - "method": "mariabackup", - "credentials" "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK" - } -} -``` - -In case your backup data is encrypted you need to pass encryption key as well: - -```json -{ - "service_id": "dbtgf28044362", - "key": "/backup.tar.gz", - "external_source": { - "bucket": "gs://my_backup_bucket", - "method": "mariabackup", - "credentials": "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gQUtJQUlPU0ZPRE5ON0VYQU1QTEUKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5ID0gd0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQpyZWdpb24gPSB1cy13ZXN0LTIK", - "encryption_key": "my_encryption_key" - } -} -``` - -### Fetching Restore Job information - -#### Get all Restores Schedules - -In order to get all Restores scheduled in the past you need to make api call: - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/restores' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -#### Get Restore by ID - -```bash -curl --location 'https://api.skysql.com/skybackup/v1/restores/12' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` - -Typical response of those two apis should look like: - -In case restore is in progress: - -```json -[ - { - "id": 12, - "service_id": "dbtgf28216706", - "bucket": "gs://sky-syst0000-backup-us-84e9d84ecf265a/orgpxw1x", - "key": "eda3b72460c8c0d9d61a7f01b6a22e32:dbtgf28216706:tx-filip-mdb-ms-0", - "type": "physical", - "status": "Running", - "message": "server is not-ready" - } -] -``` - -In case restore completed: - -```json -[ - { - "id": 13, - "service_id": "dbtgf28216706", - "bucket": "gs://sky-syst0000-backup-us-84e9d84ecf265a/orgpxw1x", - "key": "dda9b72460c9c0d9d61a7f01b6a33e39:dbtgf28216706:tx-filip-mdb-ms-0", - "type": "physical", - "status": "Succeeded", - "message": "Restore has succeeded!" - } -] -``` - -#### Delete Restore Schedules - -You can delete older completed restore schedules. To clean up your auditing history you need to execute the following api call: - - -```bash -curl --location --request DELETE 'https://api.skysql.com/skybackup/v1/restores/12' \ ---header 'Accept: application/json' \ ---header 'X-API-Key: ${API_KEY}' -``` +The following documentation describes the API for the SkySQL Backup Service. This can be used directly with any HTTP client. 
\ No newline at end of file
diff --git a/docs/Backup and Restore/Snapshot Backup and Restore Examples.md b/docs/Backup and Restore/Snapshot Backup Examples.md
similarity index 55%
rename from docs/Backup and Restore/Snapshot Backup and Restore Examples.md
rename to docs/Backup and Restore/Snapshot Backup Examples.md
index dafd6b7b..720785b6 100644
--- a/docs/Backup and Restore/Snapshot Backup and Restore Examples.md
+++ b/docs/Backup and Restore/Snapshot Backup Examples.md
@@ -1,40 +1,30 @@
+Authentication
+
+1. Go to the SkySQL API Key management page and generate an API key
+
+2. Export the value from the token field to an environment variable $API_KEY
-## Authentication
-
-To authenticate with the API, do the following:
-
-1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key
-
-2. Export the value from the token field to an environment variable $API_KEY
-
-    ```bash
-    export API_KEY='... key data ...'
-    ```
-
-3. Use it on subsequent request, e.g:
-
-    ```bash
-    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' \\
-    --header "X-API-Key: ${API_KEY}"
-    ```
-## Snapshot Backup Scheduling
-
-
-
-1. Go to SkySQL API Key management page: https://app.skysql.com/user-profile/api-keys and generate an API key
-
-2. Export the value from the token field to an environment variable $API_KEY
-
-    ```bash
-    export API_KEY='... key data ...'
-    ```
-
-    The `API_KEY` environment variable will be used in the subsequent steps.
+
+3. Use it on subsequent request, e.g:
-3. Use it on subsequent request, e.g:
-    ```bash
-    curl --request GET 'https://api.skysql.com/provisioning/v1/services' \\
-    --header "X-API-Key: $API_KEY"
-    ```
+
+    ```bash
+    curl --request GET 'https://api.skysql.com/skybackup/v1/backups/schedules' --header "X-API-Key: ${API_KEY}"
+    ```
+
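For completeness, step 2 above amounts to the following; the key value is a placeholder for the token generated in step 1.

```bash
# Export the API key so subsequent curl calls can reference it.
export API_KEY='... key data ...'
```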

+
+## Snapshot Backup Scheduling

 #### One-time Snapshot Example

@@ -50,7 +40,7 @@ To authenticate with the API, do the following:

 - API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/)

-- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx
+- SERVICE_ID : SkySQL service identifier, format dbtxxxxxx. You can fetch the service ID from the fully qualified domain name (FQDN) of your service. E.g., in dbpgf17106534.sysp0000.db2.skysql.com, 'dbpgf17106534' is the service ID. You will find the FQDN in the [Connect window](https://app.skysql.com/dashboard)

 #### Cron Snapshot Example

@@ -67,4 +57,9 @@ To authenticate with the API, do the following:

 - API_KEY : SKYSQL API KEY, see [SkySQL API Keys](https://app.skysql.com/user-profile/api-keys/)
 - SCHEDULE : Cron schedule, see [Cron](https://en.wikipedia.org/wiki/Cron)
-- SERVICE_ID : SkySQL serivce identifier, format dbtxxxxxx
\ No newline at end of file
+- SERVICE_ID : SkySQL service identifier, format dbtxxxxxx
+
+
+##### Backup status can be fetched using 'https://api.skysql.com/skybackup/v1/backups'. See the 'Backup Status' section for an example.
+
+
diff --git a/mkdocs.yml b/mkdocs.yml
index 24157d82..c38ec66c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -114,6 +114,16 @@ nav:
       - 'Configure your Database Server(s)' : 'Configure your Database Server(s).md'
       - 'Backup and Restore' :
         - 'Backup and Restore/README.md'
+        - 'Backup and Restore/Snapshot Backup Examples.md'
+        - 'Backup and Restore/Physical Backup Examples.md'
+        - 'Backup and Restore/Logical Backup Examples.md'
+        - 'Backup and Restore/Incremental Backup Examples.md'
+        - 'Backup and Restore/Binarylog Backup Examples.md'
+        - 'Backup and Restore/Bring Your Own Bucket Examples.md'
+        - 'Backup and Restore/Other backup API examples.md'
+        - 'Backup and Restore/List Restore Examples.md'
+        - 'Backup and Restore/Database Restore Examples.md'
+        - 'Backup and Restore/Delete Restore Examples.md'
         - 'Backup and Restore/MariaDB Enterprise Backup.md'
       - 'Using AWS/GCP Private VPC Connections' :
         - 'Using AWS GCP private VPC connections/README.md'

From e9c47970f83a02d3b9b20aacd1f2cd87d39ff608 Mon Sep 17 00:00:00 2001
From: NedPK
Date: Tue, 9 Jul 2024 13:39:56 +0300
Subject: [PATCH 5/5] commit test

---
 docs/Backup and Restore/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Backup and Restore/README.md b/docs/Backup and Restore/README.md
index e9d3edc3..cd63e3da 100644
--- a/docs/Backup and Restore/README.md
+++ b/docs/Backup and Restore/README.md
@@ -35,7 +35,7 @@ Below are examples of how to schedule a snapshot backup using the SkySQL API.

 - [Examples](Snapshot Backup Examples.md)

-***Important:*** Database snapshots are deleted immediately upon service deletion.
+***Important:*** Database snapshots are deleted immediately upon service deletion.