diff --git a/docs/api_docs/alarm-and-monitor/observability/alert.md b/docs/api_docs/alarm-and-monitor/observability/alert.md index 8fbd7fe75bc..88be8199534 100644 --- a/docs/api_docs/alarm-and-monitor/observability/alert.md +++ b/docs/api_docs/alarm-and-monitor/observability/alert.md @@ -1,14 +1,12 @@ --- title: Configure alert description: How to enable alert -keywords: [mysql, alert, alert message, email alert] +keywords: [alert, alert message, email alert] sidebar_position: 2 --- # Configure alert -## Configure alert - Alerts are mainly used for daily error response to improve system availability. KubeBlocks uses the open-source version of Prometheus to configure alert rules and multiple notification channels. The alert capability of KubeBlocks can meet the operation and maintenance requirements of production-level online clusters. :::note @@ -17,7 +15,7 @@ The alert function is the same for all. ::: -### Alert rules +## Alert rules KubeBlocks uses the open-source version of Prometheus to meet the needs of various data products. These alert rules provide the best practice for cluster operation and maintenance, which further improve alert accuracy and reduce the probability of false negatives and false positives by experience-based smoothing windows, alert thresholds, alert levels, and alert indicators. @@ -41,7 +39,7 @@ alert: PostgreSQLTooManyConnections Configure alert rules as needed. For more details, please refer to [Prometheus alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#defining-alerting-rules). -### Notifications +## Notifications The alert message notification of KubeBlocks mainly adopts the AlertManager native capability. After receiving the Prometheus alert, KubeBlocks performs steps including deduplication, grouping, silence, suppression, and routing, and finally sends it to the corresponding notification channel. diff --git a/docs/api_docs/alarm-and-monitor/observability/monitor-database.md b/docs/api_docs/alarm-and-monitor/observability/monitor-database.md index e900296858c..3e7d1d6df5b 100644 --- a/docs/api_docs/alarm-and-monitor/observability/monitor-database.md +++ b/docs/api_docs/alarm-and-monitor/observability/monitor-database.md @@ -56,9 +56,9 @@ Here is an example of enabling the `prometheus` addon. You can enable other moni ```bash helm list -A > - NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION + NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION ...... - prometheus kb-system 1 2024-05-31 12:01:52.872584 +0800 CST deployed prometheus-15.16.1 2.39.1 + prometheus kb-system 1 2024-05-31 12:01:52.872584 +0800 CST deployed prometheus-15.16.1 2.39.1 ``` ### Enable the monitoring function for a database @@ -92,7 +92,7 @@ componentSpecs: monitor: false # Change this value ``` -### View the dashboardß +### View the dashboard Use the `grafana` addon provided by KubeBlocks to view the dashboard. @@ -117,6 +117,14 @@ Use the `grafana` addon provided by KubeBlocks to view the dashboard. 3. Open the web browser and enter the address `127.0.0.1:3000` to visit the dashboard. 4. Enter the username and password obtained from step 1. +:::note + +If there is no data in the dashboard, you can check whether the job is `kubeblocks-service`. Enter `kubeblocks-service` in the job field and press the enter button. + +![monitoring](./../../img/api-monitoring.png) + +::: + ### (Optional) Enable remote write KubeBlocks supports the `victoria-metrics-agent` addon to enable you to remotely write the data to your VM. 
Compared with the native Prometheus, [vmgent](https://docs.victoriametrics.com/vmagent.html) is lighter. diff --git a/docs/api_docs/handle-an-exception/full-disk-lock.md b/docs/api_docs/handle-an-exception/full-disk-lock.md index 84932bef082..016a2c3412d 100644 --- a/docs/api_docs/handle-an-exception/full-disk-lock.md +++ b/docs/api_docs/handle-an-exception/full-disk-lock.md @@ -9,7 +9,7 @@ sidebar_label: Full disk lock The full disk lock function of KubeBlocks ensures the stability and availability of a database. This function triggers a disk lock when the disk usage reaches a set threshold, thereby pausing write operations and only allowing read operations. Such a mechanism prevents a database from being affected by disk space exhaustion. -## Lock/unlock mechanism +## Mechanism of lock/unlock When the space water level of any configured volume exceeds the defined threshold, the instance is locked (read-only). Meanwhile, the system sends a related warning event, including the specific threshold and space usage information of each volume. @@ -19,7 +19,7 @@ When the space water level of all configured volumes falls below the defined thr 1. The full disk lock function currently supports global (ClusterDefinition) enabling or disabling and does not support Cluster dimension control. Dynamically enabling or disabling this function may affect the existing Cluster instances that use this ClusterDefinition and cause them to restart. Please operate with caution. -2. The full disk locking function relies on the read permission (get & list) of the two system resource nodes and nodes/stats. If you create an instance via kbcli, make sure to grant the controller administrative rights to the ClusterRoleBinding. +2. The full disk lock function relies on the read permission (get & list) of the two system resource nodes and nodes/stats. If you create an instance via kbcli, make sure to grant the controller administrative rights to the ClusterRoleBinding. 3. Currently, full disk lock is available for ApeCloud MySQL, PostgreSQL and MongoDB. diff --git a/docs/api_docs/instance-template/how-to-use-instance-template.md b/docs/api_docs/instance-template/how-to-use-instance-template.md index b4d98c73225..48ddb64fe3c 100644 --- a/docs/api_docs/instance-template/how-to-use-instance-template.md +++ b/docs/api_docs/instance-template/how-to-use-instance-template.md @@ -1,14 +1,24 @@ +--- +title: Apply instance template +description: Apply instance template +keywords: [apply instance template, instance template] +sidebar_position: 2 +sidebar_label: Apply instance template +--- + # Apply instance template -Instance template can be applied to many scenarios. In this section, we take RisingWave cluster as an example. -KubeBlocks supports the management of RisingWave clusters. The RisingWave addon is contributed by the RisingWave official team. For RisingWave to function optimally, it relies on an external storage solution, such as AWS S3 or Alibaba Cloud OSS, to serve as its state backend. When creating a RisingWave cluster, it is necessary to configure credentials and other information for the external storage to ensure normal operation, and these information may vary for each cluster. +Instance templates can be applied to many scenarios. In this section, we take a RisingWave cluster as an example. + +KubeBlocks supports the management of RisingWave clusters. The RisingWave addon is contributed by the RisingWave official team. 
For RisingWave to function optimally, it relies on an external storage solution, such as AWS S3 or Alibaba Cloud OSS, to serve as its state backend. When creating a RisingWave cluster, it is necessary to configure credentials and other information for the external storage to ensure normal operation, and this information may vary for each cluster. -In the official image of RisingWave, these information can be injected via environment variables. Therefore, in KubeBlocks 0.9, we can configure corresponding environment variables in the instance template and set the values of these environment variables each time a cluster is created, so as to inject credential information into the container of RisingWave. +In the official image of RisingWave, this information can be injected via environment variables. Therefore, in KubeBlocks 0.9, we can configure corresponding environment variables in the instance template and set the values of these environment variables each time a cluster is created, so as to inject credential information into the container of RisingWave. ## An example -In the default template of RisingWave addon, the environment viriables are configured as follows: -``` +In the default template of RisingWave addon, the environment variables are configured as follows: + +```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: ClusterDefinition metadata: @@ -49,8 +59,10 @@ spec: value: 0.0.0.0:1222 # ... ``` + After adding an instance template to the cluster resources: -``` + +```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: @@ -94,59 +106,78 @@ spec: value: "{{ .Values.risingwave.metaStore.etcd.authentication.enabled}}" # ... ``` + In the example above, we added an instance template through the `instances` field, named `instance`. This template defines several environment variables such as `RW_STATE_STORE` and `AWS_REGION`. These environment variables will be appended by KubeBlocks to the list of environment variables defined in the default template. Consequently, the rendered instance will contain both the default template and all the environment variables defined in this instance template. Additionally, the `replicas` field in the instance template is identical to that in the `componentSpec` (both are `{{ .Values.risingwave.compute.replicas }}`), indicating that after overriding the default template, this instance template will be used to render all instances within this component. -## Detailed information of instance template +## Detailed information on instance template -- `Name` field: For each component, multiple instance templates can be defined. Template name is configured with `Name` field, and must remain unique within the same component. +- `Name` field: For each component, multiple instance templates can be defined. The template name is configured with the `Name` field and must remain unique within the same component. - `Replica` field: Each template can set the number of instances rendered based on that template via the `Replicas` field, of which the default value is 1. The sum of `Replicas` for all instance templates within the same component must be less than or equal to the `Replicas` value of the component. If the number of instances rendered based on the instance templates is less than the total number required by the component, the remaining instances will be rendered using the default template. -The pattern for the names of instances rendered based on instance templates is `$(cluster name)-$(component name)-$(instance template name)-ordinal`. 
For example, in the above RisingWave cluster, the cluster name is `risingwave`, the component name is `compute`, the instance template name is `instance`, and the number of `Replicas` is 3. Therefore, the rendered instance names are: risingwave-compute-instance-0, risingwave-compute-instance-1, risingwave-compute-instance-2. +The pattern for the names of instances rendered based on instance templates is `$(cluster name)-$(component name)-$(instance template name)-ordinal`. For example, in the above RisingWave cluster, the cluster name is `risingwave`, the component name is `compute`, the instance template name is `instance`, and the number of `Replicas` is 3. Therefore, the rendered instance names are risingwave-compute-instance-0, risingwave-compute-instance-1, and risingwave-compute-instance-2. -Instance templates can be used during cluster creation and can be updated during operations period. Specifically, this includes adding, deleting, or updating instance templates. Updating instance templates may update, delete, or reconstruct instances. You are recommended to carefully evaluate whether the final changes meet expectations before performing updates. +Instance templates can be used during cluster creation and can be updated during the operations period. Specifically, this includes adding, deleting, or updating instance templates. Updating instance templates may update, delete, or reconstruct instances. You are recommended to carefully evaluate whether the final changes meet expectations before performing updates. ### Annotations -The `Annotations` in the instance template are used to override the `Annotations` field in the default template. If a Key in the `Annotations` of the instance template already exists in the default template, the `value` corresponding to the Key will use the value in the instance template; if the Key does not exist in the default template, the Key and Value will be added to the final `Annotations`. + +The `Annotations` in the instance template are used to override the `Annotations` field in the default template. If a Key in the `Annotations` of the instance template already exists in the default template, the `value` corresponding to the Key will use the value in the instance template; if the Key does not exist in the default template, the Key and Value will be added to the final `Annotations`. + ***Example:*** + The `annotations` in the default template are: -``` + +```yaml annotations: "foo0": "bar0" "foo1": "bar" ``` + And `annotations` in the instance templates are: -``` + +```yaml annotations: "foo1": "bar1" "foo2": "bar2" ``` + Then, after rendering, the actual annotations are: -``` + +```yaml annotations: "foo0": "bar0" "foo1": "bar1" "foo2": "bar2" ``` -...note + +:::note + KubeBlocks adds system `Annotations`, and do not overwrite them. -... +::: ### Labels -You can also set `Labels` with instance template. + +You can also set `Labels` with the instance template. + Similar to `Annotations`, `Labels` in instance templates follow the same overriding logic applied to existing labels. -...note + +:::note + KubeBlocks adds system `Labels`, and do not overwrite them. -... + +::: ### Image + The `Image` field in the instance template is used to override the `Image` field of the first container in the default template. -...note: -`Image` field should be used with caution: for statefulset like databases, changing the `Image` often involves compatibility issues with data formats. 
When changing this field, please ensure that the image version in the instance template is fully compatible with that in the default template. -... +:::note + +`Image` field should be used with caution: for the StatefulSet like databases, changing the `Image` often involves compatibility issues with data formats. When changing this field, please ensure that the image version in the instance template is fully compatible with that in the default template. + +::: With KubeBlocks version 0.9 and above, detailed design for image versions is provided through `ComponentVersion`. It is recommended to manage versions using `ComponentVersion`. @@ -186,4 +217,4 @@ Used to override the `VolumeMounts` field of the first container in the default ### VolumeClaimTemplates -Used to override the `VolumeClaimTemplates` generated by `ClusterComponentVolumeClaimTemplate` within the Component. The overriding logic is similar to `Volumes`, meaning if the `PersistentVolumeClaim` `Name` is the same, the `PersistentVolumeClaimSpec` values from the instance template will be used; otherwise, it will be added as a new `PersistentVolumeClaim`. \ No newline at end of file +Used to override the `VolumeClaimTemplates` generated by `ClusterComponentVolumeClaimTemplate` within the Component. The overriding logic is similar to `Volumes`, meaning if the `PersistentVolumeClaim` `Name` is the same, the `PersistentVolumeClaimSpec` values from the instance template will be used; otherwise, it will be added as a new `PersistentVolumeClaim`. diff --git a/docs/api_docs/instance-template/introduction.md b/docs/api_docs/instance-template/introduction.md index 07c30337529..af93f1d9fda 100644 --- a/docs/api_docs/instance-template/introduction.md +++ b/docs/api_docs/instance-template/introduction.md @@ -1,21 +1,27 @@ +--- +title: Introduction of Instance Template +description: Introduction of Instance Template +keywords: [instance template] +sidebar_position: 1 +sidebar_label: Introduction of instance template +--- -# Introduction of Instance Template - -## What is instance template +# Introduction of instance template +## What is an instance template An *instance* serves as the fundamental unit in KubeBlocks, comprising a Pod along with several auxiliary objects. To simplify, you can initially think of it as a Pod, and henceforth, we'll consistently refer to it as an "Instance." Starting from version 0.9, we're able to establish multiple instance templates for a particular component within a cluster. These instance templates include several fields such as Name, Replicas, Annotations, Labels, Env, Tolerations, NodeSelector, etc. These fields will ultimately override the corresponding ones in the default template (originating from ClusterDefinition and ComponentDefinition) to generate the final template for rendering the instance. - -## Why we design instance template +## Why do we the instance template In KubeBlocks, a *Cluster* is composed of several *Components*, where each *Component* ultimately oversees multiple *Pods* and auxiliary objects. Prior to version 0.9, these pods were rendered from a shared PodTemplate, as defined in either ClusterDefinition or ComponentDefinition. However, this design can’t meet the following demands: + - For Clusters rendered from the same addon, setting separate scheduling configurations such as *NodeName*, *NodeSelector*, or *Tolerations*. - For Components rendered from the same addon, adding custom *Annotations*, *Labels*, or ENV to the Pods they manage. 
-- For Pods managed by the same Component, configuring different *CPU*, *Memory*, and other *Resource Requests* and *Limits*. + - For Pods managed by the same Component, configuring different *CPU*, *Memory*, and other *Resource Requests* and *Limits*. -With various similar requirements emerging, the Cluster API introduced the Instance Template feature from version 0.9 onwards to cater to these needs. \ No newline at end of file +With various similar requirements emerging, the Cluster API introduced the Instance Template feature from version 0.9 onwards to cater to these needs. diff --git a/docs/api_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-a-mysql-cluster.md b/docs/api_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-a-mysql-cluster.md index 18713ad470f..fef5395e940 100644 --- a/docs/api_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-a-mysql-cluster.md +++ b/docs/api_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-a-mysql-cluster.md @@ -18,7 +18,7 @@ This tutorial shows how to create and connect to an ApeCloud MySQL cluster. * Make sure the `apecloud-mysql` cluster definition is installed. If the cluster definition is not available, refer to [this doc](./../../overview/supported-addons.md#install-addons) to enable it first. ```bash - kubectl get clusterdefinition mysql + kubectl get clusterdefinition apecloud-mysql > NAME TOPOLOGIES SERVICEREFS STATUS AGE apecloud-mysql Available 27m @@ -49,7 +49,7 @@ cat <labels,

spec.tolerations,

spec.componentSpecs[*].serviceVersion,

spec.componentSpecs[*].tolerations,

spec.componentSpecs[*].resources,

spec.componentSpecs[*].volumeClaimTemplates,

spec.componentSpecs[*].instances[*].annotations,

spec.componentSpecs[*].instances[*].labels,

spec.componentSpecs[*].instances[*].image,

spec.componentSpecs[*].instances[*].tolerations,

spec.componentSpecs[*].instances[*].resources,

spec.componentSpecs[*].instances[*].volumeClaimTemplates,

spec.shardingSpecs[*].template.serviceVersion,

spec.shardingSpecs[*].template.tolerations,

spec.shardingSpecs[*].template.resources,

spec.shardingSpecs[*].template.volumeClaimTemplates

| Resources related fields means:

requests["cpu"],

requests["memory"],

limits["cpu"],

limits["memory"] | -| ComponentVersion | spec.releases[*].images | Whether in-place update is triggered depends on whether the corresponding image is changed. | -| KubeBlocks Built-in | annotations, labels | | \ No newline at end of file +|:-----|:-------|:-----------| +|Cluster| `annotations`,

`labels`,

`spec.tolerations`,

`spec.componentSpecs[*].serviceVersion`,

`spec.componentSpecs[*].tolerations`,

`spec.componentSpecs[*].resources`,

`spec.componentSpecs[*].volumeClaimTemplates`,

`spec.componentSpecs[*].instances[*].annotations`,

`spec.componentSpecs[*].instances[*].labels`,

`spec.componentSpecs[*].instances[*].image`,

`spec.componentSpecs[*].instances[*].tolerations`,

`spec.componentSpecs[*].instances[*].resources`,

`spec.componentSpecs[*].instances[*].volumeClaimTemplates`,

`spec.shardingSpecs[*].template.serviceVersion`,

`spec.shardingSpecs[*].template.tolerations`,

`spec.shardingSpecs[*].template.resources`,

`spec.shardingSpecs[*].template.volumeClaimTemplates`

| Resources related fields means:

`requests["cpu"]`,

`requests["memory"]`,

`limits["cpu"]`,

`limits["memory"]` | +| ComponentVersion | `spec.releases[*].images` | Whether in-place update is triggered depends on whether the corresponding image is changed. | +| KubeBlocks Built-in | `annotations`, `labels` | | \ No newline at end of file diff --git a/docs/api_docs/maintenance/scale/scale-for-specified-instance.md b/docs/api_docs/maintenance/scale/scale-for-specified-instance.md index 288f02a1a2b..3f5e1e2f3a9 100644 --- a/docs/api_docs/maintenance/scale/scale-for-specified-instance.md +++ b/docs/api_docs/maintenance/scale/scale-for-specified-instance.md @@ -19,8 +19,9 @@ To specify the instance to be offloaded, use `OfflineInstances`. ***Steps:*** -Use OpsRequest to specify the instance to scale. -``` +Use an OpsRequest to specify the instance to scale. + +```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: @@ -35,12 +36,14 @@ spec: ttlSecondsAfterSucceed: 0 type: HorizontalScaling ``` + The OpsRequest Controller directly overrides the values of `replicas` and `offlineInstances` in the request, mapping them to the corresponding fields in the Cluster object. Eventually, the Cluster Controller completes the task of offlining the instance named `foo-bar-1`. ***Example:*** + In the scenario of the above section, the PostgreSQL instance status is as follows: -``` +```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: @@ -51,6 +54,7 @@ spec: replicas: 3 # ... ``` + When we scale it down to 2 replica and offload the `foo-bar-1`, we can update as follows: ``` apiVersion: apps.kubeblocks.io/v1alpha1 diff --git a/docs/api_docs/maintenance/scale/vertical-and-horizontal-scale.md b/docs/api_docs/maintenance/scale/vertical-and-horizontal-scale.md index 2f9f0a70d22..0d5e9f57ce9 100644 --- a/docs/api_docs/maintenance/scale/vertical-and-horizontal-scale.md +++ b/docs/api_docs/maintenance/scale/vertical-and-horizontal-scale.md @@ -6,11 +6,14 @@ sidebar_position: 2 sidebar_label: Scale --- +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + # Scale for a MySQL cluster You can scale a MySQL cluster in two ways, vertical scaling and horizontal scaling. -From v0.9.0, for MySQL and PostgreSQL, after vertical scaling or horizontal scaling is performed, KubeBlocks automatically matches the appropriate configuration template based on the new specification. This is the KubeBlocks dynamic configuration feature. This feature simplifies the process of configuring parameters, saves time and effort and reduces performance issues caused by incorrect configuration. For detailed instructions, refer to [Configuration](./../configuration/configuration.md). +From v0.9.0, for MySQL and PostgreSQL, after vertical scaling or horizontal scaling is performed, KubeBlocks automatically matches the appropriate configuration template based on the new specification. This is the KubeBlocks dynamic configuration feature. This feature simplifies the process of configuring parameters, saves time and effort and reduces performance issues caused by incorrect configuration. For detailed instructions, refer to [Configuration](./../../kubeblocks-for-apecloud-mysql/configuration/configuration.md). ## Vertical scaling @@ -134,12 +137,6 @@ There are two ways to apply vertical scaling. Horizontal scaling changes the amount of pods. For example, you can apply horizontal scaling to scale pods up from three to five. The scaling process includes the backup and restore of data. 
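To make the three-to-five example above concrete, here is a minimal sketch of such a scale-out expressed as a `HorizontalScaling` OpsRequest, in the spirit of the OpsRequest shown in the scale-for-specified-instance.md hunk earlier in this patch. The object name, the cluster name `mycluster`, and the component name `mysql` are placeholders, the `horizontalScaling` entries follow the upstream OpsRequest API rather than anything spelled out in this diff, and some releases spell the cluster-reference field `clusterRef` instead of `clusterName`.

```yaml
# A sketch only, not part of this patch: scale an assumed `mysql` component
# of an assumed cluster `mycluster` out to five replicas.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mycluster-scale-out   # hypothetical object name
spec:
  clusterName: mycluster      # some releases use `clusterRef` for this field
  type: HorizontalScaling
  horizontalScaling:
  - componentName: mysql      # component whose pods are scaled
    replicas: 5               # target pod count after scaling out
```

Applying such an object with `kubectl apply -f` hands the rest of the workflow to the OpsRequest controller described in the scaling hunks above.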
-:::note - -From v0.9.0, after horizontal scaling is performed, KubeBlocks will automatically adjust part of the parameters of the database instance to more appropriate values. It is recommended to back up and record your custom parameter settings so that you can restore them if needed. - -::: - ### Before you start Check whether the cluster STATUS is `Running`. Otherwise, the following operations may fail. diff --git a/docs/img/api-monitoring.png b/docs/img/api-monitoring.png new file mode 100644 index 00000000000..f9e1e86127b Binary files /dev/null and b/docs/img/api-monitoring.png differ diff --git a/docs/user_docs/kubeblocks-for-kafka/configuration/configuration.md b/docs/user_docs/kubeblocks-for-kafka/configuration/configuration.md index 79b6c180574..036a1727ab5 100644 --- a/docs/user_docs/kubeblocks-for-kafka/configuration/configuration.md +++ b/docs/user_docs/kubeblocks-for-kafka/configuration/configuration.md @@ -162,7 +162,7 @@ For Linux and macOS, you can edit configuration files by vi. For Windows, you ca :::note - If there are multiple components in a cluster, use `--component` to specify a component. + If there are multiple components in a cluster, use `--components` to specify a component. ::: diff --git a/docs/user_docs/kubeblocks-for-mongodb/cluster-management/switchover.md b/docs/user_docs/kubeblocks-for-mongodb/cluster-management/switchover.md index ef3ccb19806..17647e3d6c3 100644 --- a/docs/user_docs/kubeblocks-for-mongodb/cluster-management/switchover.md +++ b/docs/user_docs/kubeblocks-for-mongodb/cluster-management/switchover.md @@ -42,10 +42,10 @@ You can switch over a secondary of a MongoDB ReplicaSet to the primary role, and kbcli cluster promote mycluster --instance='mycluster-mongodb-2' ``` -* If there are multiple components, you can use `--component` to specify a component. +* If there are multiple components, you can use `--components` to specify a component. ```bash - kbcli cluster promote mycluster --instance='mycluster-mongodb-2' --component='mongodb' + kbcli cluster promote mycluster --instance='mycluster-mongodb-2' --components='mongodb' ``` diff --git a/docs/user_docs/kubeblocks-for-mongodb/configuration/configuration.md b/docs/user_docs/kubeblocks-for-mongodb/configuration/configuration.md index 2538b74304c..db11a436a1a 100644 --- a/docs/user_docs/kubeblocks-for-mongodb/configuration/configuration.md +++ b/docs/user_docs/kubeblocks-for-mongodb/configuration/configuration.md @@ -47,7 +47,7 @@ The example below configures systemLog.verbosity to 1. 1. Adjust the values of `systemLog.verbosity` to 1. ```bash - kbcli cluster configure mongodb-cluster --component mongodb --config-spec mongodb-config --config-file mongodb.conf --set systemLog.verbosity=1 + kbcli cluster configure mongodb-cluster --components mongodb --config-spec mongodb-config --config-file mongodb.conf --set systemLog.verbosity=1 > Warning: The parameter change you modified needs to be restarted, which may cause the cluster to be unavailable for a period of time. Do you need to continue... Please type "yes" to confirm: yes @@ -95,7 +95,7 @@ For Linux and macOS, you can edit configuration files by vi. For Windows, you ca :::note - If there are multiple components in a cluster, use `--component` to specify a component. + If there are multiple components in a cluster, use `--components` to specify a component. 
::: diff --git a/docs/user_docs/kubeblocks-for-mysql/cluster-management/switchover.md b/docs/user_docs/kubeblocks-for-mysql/cluster-management/switchover.md index a7ca669d415..e1cbf64b12d 100644 --- a/docs/user_docs/kubeblocks-for-mysql/cluster-management/switchover.md +++ b/docs/user_docs/kubeblocks-for-mysql/cluster-management/switchover.md @@ -46,10 +46,10 @@ You can switch over a follower of an ApeCloud MySQL RaftGroup to the leader role kbcli cluster promote mycluster --instance='mycluster-mysql-2' ``` -* If there are multiple components, you can use `--component` to specify a component. +* If there are multiple components, you can use `--components` to specify a component. ```bash - kbcli cluster promote mycluster --instance='mycluster-mysql-2' --component='apecloud-mysql' + kbcli cluster promote mycluster --instance='mycluster-mysql-2' --components='apecloud-mysql' ``` diff --git a/docs/user_docs/kubeblocks-for-mysql/configuration/configuration.md b/docs/user_docs/kubeblocks-for-mysql/configuration/configuration.md index 7a4a8b6a1bc..1e26458d9be 100644 --- a/docs/user_docs/kubeblocks-for-mysql/configuration/configuration.md +++ b/docs/user_docs/kubeblocks-for-mysql/configuration/configuration.md @@ -208,7 +208,7 @@ The following steps take configuring MySQL Standalone as an example. :::note * Since ApeCloud MySQL currently supports multiple templates, it is required to use `--config-spec` to specify a configuration template. You can run `kbcli cluster describe-config mysql-cluster` to view all template names. - * If there are multiple components in a cluster, use `--component` to specify a component. + * If there are multiple components in a cluster, use `--components` to specify a component. ::: diff --git a/docs/user_docs/kubeblocks-for-mysql/proxy/apecloud-mysql-proxy.md b/docs/user_docs/kubeblocks-for-mysql/proxy/apecloud-mysql-proxy.md index 110a1e75dc6..a8139ada4bf 100644 --- a/docs/user_docs/kubeblocks-for-mysql/proxy/apecloud-mysql-proxy.md +++ b/docs/user_docs/kubeblocks-for-mysql/proxy/apecloud-mysql-proxy.md @@ -67,7 +67,7 @@ ApeCloud MySQL Proxy is routed through the `vtgate` component, and the way the M Run the command below to connect to the Proxy Cluster. ```bash -kbcli cluster connect myproxy --component vtgate +kbcli cluster connect myproxy --components vtgate ``` ### Connect Proxy Cluster by MySQL Server @@ -100,7 +100,7 @@ while true; do date; kubectl port-forward svc/vt-mysql 3306:3306; sleep 0.5; don ## Configure Proxy Cluster parameters -VTGate, VTConsensus, and VTTablet support parameter configuration. You can configure VTGate and VTConsensus by using `--component` to specify a component and configure VTTablet by using `--component=mysql --config-specs=vttablet-config` to specify both a component and a configuration file template since VTTablet is the sidecar of the MySQL component. +VTGate, VTConsensus, and VTTablet support parameter configuration. You can configure VTGate and VTConsensus by using `--components` to specify a component and configure VTTablet by using `--components=mysql --config-specs=vttablet-config` to specify both a component and a configuration file template since VTTablet is the sidecar of the MySQL component. ### View parameter details @@ -108,29 +108,29 @@ VTGate, VTConsensus, and VTTablet support parameter configuration. 
You can confi ```bash # vtgate - kbcli cluster describe-config myproxy --component vtgate --show-detai + kbcli cluster describe-config myproxy --components vtgate --show-detai # vtcontroller - kbcli cluster describe-config myproxy --component vtcontroller --show-detail + kbcli cluster describe-config myproxy --components vtcontroller --show-detail # vttablet - kbcli cluster describe-config myproxy --component mysql --show-detail --config-specs vttablet-config + kbcli cluster describe-config myproxy --components mysql --show-detail --config-specs vttablet-config ``` * View the parameter descriptions. ```bash # vtgate - kbcli cluster explain-config myproxy --component vtgate + kbcli cluster explain-config myproxy --components vtgate # vttablet - kbcli cluster explain-config myproxy --component mysql --config-specs=vttablet-config + kbcli cluster explain-config myproxy --components mysql --config-specs=vttablet-config ``` * View the definition of a specified parameter. ```bash - kbcli cluster explain-config myproxy --component vtgate --param=healthcheck_timeout + kbcli cluster explain-config myproxy --components vtgate --param=healthcheck_timeout ``` ### Reconfigure parameters @@ -138,7 +138,7 @@ VTGate, VTConsensus, and VTTablet support parameter configuration. You can confi 1. View the current values in the MySQL Server. ```bash - kbcli cluster connect myproxy --component=vtgate + kbcli cluster connect myproxy --components=vtgate ``` ```bash @@ -157,16 +157,16 @@ VTGate, VTConsensus, and VTTablet support parameter configuration. You can confi ```bash # vtgate - kbcli cluster configure myproxy --component vtgate --set=healthcheck_timeout=2s + kbcli cluster configure myproxy --components vtgate --set=healthcheck_timeout=2s # vttablet - kbcli cluster configure myproxy --set=health_check_interval=4s --component=mysql --config-spec=vttablet-config + kbcli cluster configure myproxy --set=health_check_interval=4s --components=mysql --config-spec=vttablet-config ``` * By editing the parameter configuration file ```bash - kbcli cluster edit-config myproxy --component vtgate + kbcli cluster edit-config myproxy --components vtgate ``` :::note @@ -199,9 +199,9 @@ View the log of different components. ```bash kbcli cluster list-logs myproxy -kbcli cluster list-logs myproxy --component vtgate -kbcli cluster list-logs myproxy --component vtcontroller -kbcli cluster list-logs myproxy --component mysql +kbcli cluster list-logs myproxy --components vtgate +kbcli cluster list-logs myproxy --components vtcontroller +kbcli cluster list-logs myproxy --components mysql ``` View the log of a Pod. @@ -256,13 +256,13 @@ In the production environment, all monitoring addons are disabled by default whe You can enable the read-write splitting function. ```bash -kbcli cluster configure myproxy --component vtgate --set=read_write_splitting_policy=random +kbcli cluster configure myproxy --components vtgate --set=read_write_splitting_policy=random ``` You can also set the ratio for read-write splitting and here is an example of directing 70% flow to the read-only node. ```bash -kbcli cluster configure myproxy --component vtgate --set=read_write_splitting_ratio=70 +kbcli cluster configure myproxy --components vtgate --set=read_write_splitting_ratio=70 ``` Moreover, you can [use Grafana](#monitoring) or run `show workload` to view the flow distribution. @@ -276,5 +276,5 @@ show workload; Run the command below to implement transparent failover. 
```bash -kbcli cluster configure myproxy --component vtgate --set=enable_buffer=true +kbcli cluster configure myproxy --components vtgate --set=enable_buffer=true ``` diff --git a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/switchover.md b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/switchover.md index 020dde453b7..052ba63b17c 100644 --- a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/switchover.md +++ b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/switchover.md @@ -42,10 +42,10 @@ You can switch over a secondary of a PostgreSQL PrimaeySecondary database to the kbcli cluster promote mycluster --instance='mycluster-postgresql-2' ``` -* If there are multiple components, you can use `--component` to specify a component. +* If there are multiple components, you can use `--components` to specify a component. ```bash - kbcli cluster promote mycluster --instance='mycluster-postgresql-2' --component='postgresql' + kbcli cluster promote mycluster --instance='mycluster-postgresql-2' --components='postgresql' ``` diff --git a/docs/user_docs/kubeblocks-for-postgresql/configuration/configuration.md b/docs/user_docs/kubeblocks-for-postgresql/configuration/configuration.md index 9adb63d3798..e6f84145d7d 100644 --- a/docs/user_docs/kubeblocks-for-postgresql/configuration/configuration.md +++ b/docs/user_docs/kubeblocks-for-postgresql/configuration/configuration.md @@ -176,7 +176,7 @@ For Linux and macOS, you can edit configuration files by vi. For Windows, you ca :::note - If there are multiple components in a cluster, use `--component` to specify a component. + If there are multiple components in a cluster, use `--components` to specify a component. ::: diff --git a/docs/user_docs/kubeblocks-for-pulsar/configuration/configuration.md b/docs/user_docs/kubeblocks-for-pulsar/configuration/configuration.md index 7d0f7898b75..2fbf31a0de3 100644 --- a/docs/user_docs/kubeblocks-for-pulsar/configuration/configuration.md +++ b/docs/user_docs/kubeblocks-for-pulsar/configuration/configuration.md @@ -75,7 +75,7 @@ kbcli cluster describe-config pulsar We take `zookeeper` as an example. ```bash - kbcli cluster configure pulsar --component=zookeeper --set PULSAR_MEM="-XX:MinRAMPercentage=50 -XX:MaxRAMPercentage=70" + kbcli cluster configure pulsar --components=zookeeper --set PULSAR_MEM="-XX:MinRAMPercentage=50 -XX:MaxRAMPercentage=70" ``` 3. Verify the configuration. @@ -101,7 +101,7 @@ The following steps take the configuration of dynamic parameter `brokerShutdownT 1. Get configuration information. ```bash - kbcli cluster desc-config pulsar --component=broker + kbcli cluster desc-config pulsar --components=broker ConfigSpecs Meta: CONFIG-SPEC-NAME FILE ENABLED TEMPLATE CONSTRAINT RENDERED COMPONENT CLUSTER @@ -113,7 +113,7 @@ The following steps take the configuration of dynamic parameter `brokerShutdownT 2. Configure parameters. ```bash - kbcli cluster configure pulsar --component=broker --config-spec=broker-config --set brokerShutdownTimeoutMs=66600 + kbcli cluster configure pulsar --components=broker --config-spec=broker-config --set brokerShutdownTimeoutMs=66600 > Will updated configure file meta: ConfigSpec: broker-config ConfigFile: broker.conf ComponentName: broker ClusterName: pulsar @@ -157,7 +157,7 @@ For Linux and macOS, you can edit configuration files by vi. For Windows, you ca :::note - If there are multiple components in a cluster, use `--component` to specify a component. 
+ If there are multiple components in a cluster, use `--components` to specify a component. ::: @@ -189,7 +189,7 @@ Using kubectl to configure pulsar cluster requires modifying the configuration f 1. Get the configmap where the configuration file is located. Take `broker` component as an example. ```bash - kbcli cluster desc-config pulsar --component=broker + kbcli cluster desc-config pulsar --components=broker ConfigSpecs Meta: CONFIG-SPEC-NAME FILE ENABLED TEMPLATE CONSTRAINT RENDERED COMPONENT CLUSTER diff --git a/docs/user_docs/observability/monitor-database.md b/docs/user_docs/observability/monitor-database.md index 94f2746f2eb..4b4672c3bb2 100644 --- a/docs/user_docs/observability/monitor-database.md +++ b/docs/user_docs/observability/monitor-database.md @@ -131,7 +131,7 @@ The monitoring function is enabled by default when a database is created. The op * For the existing cluster with the monitoring function disabled, you can update it to enable the monitor function by the `update` command. ```bash - kbcli cluster update mycluster --monitoring-interval=15s + kbcli cluster update mycluster --monitoring-interval=15 ``` You can view the dashboard of the corresponding cluster via Grafana Web Console. For more detailed information, see the [Grafana dashboard documentation](https://grafana.com/docs/grafana/latest/dashboards/). diff --git a/i18n/zh-cn/developer-docs/integration/parameter-template.md b/i18n/zh-cn/developer-docs/integration/parameter-template.md index 668a70ebcbc..f3d7cb2b18f 100644 --- a/i18n/zh-cn/developer-docs/integration/parameter-template.md +++ b/i18n/zh-cn/developer-docs/integration/parameter-template.md @@ -142,7 +142,7 @@ KubeBlocks 具有强大的渲染能力,能让你快速定制一个 ***自适 kbcli 提供了 `describe-config` 子命令来查看集群的配置信息。 ```bash - kbcli cluster describe-config mycluster --component mysql-compdef + kbcli cluster describe-config mycluster --components mysql-compdef > ConfigSpecs Meta: CONFIG-SPEC-NAME FILE ENABLED TEMPLATE CONSTRAINT RENDERED COMPONENT CLUSTER diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-kafka/cluster-management/expand-volume.md b/i18n/zh-cn/user-docs/kubeblocks-for-kafka/cluster-management/expand-volume.md index 6c76c9061ab..639f2328cda 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-kafka/cluster-management/expand-volume.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-kafka/cluster-management/expand-volume.md @@ -26,7 +26,7 @@ kbcli cluster list kafka kbcli cluster volume-expand --storage=30G --components=kafka --volume-claim-templates=data kafka ``` -- `--component-names` 表示需扩容的组件名称。 +- `--components` 表示需扩容的组件名称。 - `--volume-claim-templates` 表示组件中的 VolumeClaimTemplate 名称。 - `--storage` 表示磁盘需扩容至的大小。
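As a closing illustration that is not part of the patch above, the flags explained in the last list map onto a `VolumeExpansion` OpsRequest on the API side, roughly as sketched below. The object name is a placeholder, the `kafka` and `data` names reuse the values from the hunk, and the field names are assumptions based on the OpsRequest API rather than anything confirmed by this diff, so verify them against your KubeBlocks version.

```yaml
# A hedged sketch: an API-side counterpart of the kbcli volume-expand command above.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: kafka-volume-expand   # hypothetical object name
spec:
  clusterName: kafka          # some releases use `clusterRef` for this field
  type: VolumeExpansion
  volumeExpansion:
  - componentName: kafka      # --components
    volumeClaimTemplates:
    - name: data              # --volume-claim-templates
      storage: 30Gi           # --storage (30G in the kbcli example)
```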