diff --git a/docs/user_docs/overview/about-this-manual.md b/docs/user_docs/overview/about-this-manual.md index febac8ea4ab1..37fcc3e0aafa 100644 --- a/docs/user_docs/overview/about-this-manual.md +++ b/docs/user_docs/overview/about-this-manual.md @@ -2,6 +2,6 @@ title: About this manual description: KubeBlocks, kbcli, how to keywords: [kubeblocks, overview, introduction] -sidebar_position: 1 +sidebar_position: 4 --- -This manual introduces how to operate KubeBlocks with `kbcli`. \ No newline at end of file +This manual introduces how to operate KubeBlocks with `kbcli`. For advanced users familiar with Kubernetes, this manual also includes guidance on how to operate KubeBlocks using `helm` and `kubectl`. \ No newline at end of file diff --git a/docs/user_docs/overview/concept.md b/docs/user_docs/overview/concept.md index 8c8bddd95b0b..0f744d0fbb65 100644 --- a/docs/user_docs/overview/concept.md +++ b/docs/user_docs/overview/concept.md @@ -1,39 +1,157 @@ --- title: Concepts -description: KubeBlocks, kbcli, multicloud, containerized database, -keywords: [kubeblocks, overview, introduction] -sidebar_position: 1 +description: KubeBlocks, CRD +keywords: [kubeblocks, concepts] +sidebar_position: 2 --- -- Node: In a distributed database, each computer is referred to as a node, and each node has its own storage and processing capabilities. By adding new nodes, the storage and processing capacity of the distributed database can be easily expanded to accommodate the growing volume of data and concurrent access demands. Distributed databases can distribute read and write requests to different nodes for processing, achieving load balancing and improving the system's concurrent processing capabilities. -- Data Sharding: To achieve distributed storage of data, it is necessary to divide the data into multiple parts, with each part being called a data shard. 
Common data sharding strategies include: - - Range Sharding: The data is divided into multiple shards based on the key value range, with each shard responsible for a continuous key value range. - - Hash Sharding: A hash function is used to map the data's key values to different shards, with each shard being responsible for a hash value range. - - Composite Sharding: Multiple sharding strategies are combined, such as first sharding based on range and then sharding based on hash, to optimize the distribution and access efficiency of data. - -- Pod: A Pod is the smallest deployable and manageable unit in K8s. It consists of one or more closely related containers that share network and storage resources and are scheduled and managed as a single entity. In K8s, Pod's utilization of node resources (CPU, memory) can be managed and controlled by configuring resource requests and limits. - - Resource requests define the minimum amount of resources that a Pod requires at runtime. The K8s scheduler selects nodes that can satisfy the Pod's resource requests, ensuring that the nodes have sufficient available resources to meet the Pod's needs. - - Resource limits define the maximum amount of resources that a Pod can use at runtime. They are used to prevent the Pod from consuming excessive resources and protect nodes and other Pods from being affected. - -- Replication - - To improve the availability and fault tolerance of data, distributed databases typically replicate data across multiple nodes, with each node having a complete or partial copy of the data. Through data replication and failover mechanisms, distributed databases can continue to provide service even when nodes fail, thereby increasing the system's availability. Common replication strategies include: - - Primary-Replica Replication: - - Each partition has a single primary node and multiple replica nodes. - - Write operations are executed on the primary node and then synchronized to the replica nodes. 
- - Common primary-replica replication protocols include strong synchronous, semi-synchronous, asynchronous, and Raft/Paxos-based replication protocols. - - Multi-Primary Replication: - - Each partition has multiple primary nodes. - - Write operations can be executed on any of the primary nodes, and then synchronized to the other primary nodes and replica nodes. - - Data consistency is maintained through the replication protocol, combined with global locks, optimistic locking, and other mechanisms. - -Overall, data replication is a key technology used by distributed databases to improve availability and fault tolerance. Different replication strategies involve different trade-offs between consistency, availability, and performance, and the choice should be made based on the specific application requirements. - -The management of containerized distributed database by KubeBlocks is mapped to objects at four levels: Cluster, Component, InstanceSet, and Instance, forming a layered architecture: - -- Cluster layer: A Cluster object represents a complete distributed database cluster. Cluster is the top-level abstraction, including all components and services of the database. -- Component layer: A Component represents logical components that make up the Cluster object, such as metadata management, data storage, query engine, etc. Each Component object has its specific task and functions. A Cluster object contains one or more Component objects. -- InstanceSet layer: An InstanceSet object manages the workload required for multiple replicas inside a Component object, perceiving the roles of the replicas. A Component object contains an InstanceSet object. -- Instance layer: An Instance object represents an actual running instance within an InstanceSet object, corresponding to a Pod in Kubernetes. An InstanceSet object can manage zero to multiple Instance objects. 
-- ComponentDefinition is an API used to define components of a distributed database, describing the implementation details and behavior of the components. With ComponentDefinition, you can define key information about components such as container images, configuration templates, startup scripts, storage volumes, etc. They can also set the behavior and logic of components for different events (e.g., node joining, node leaving, addition of components, removal of components, role switching, etc.). Each component can have its own independent ComponentDefinition or share the same ComponentDefinition. -- ClusterDefinition is an API used to define the overall structure and topology of a distributed database cluster. Within ClusterDefinition, you can reference ComponentDefinitions of its included components, and define dependencies and references between components. +# Concepts + +You've already seen the benefits of using unified APIs to represent various databases in the section ["How Unified APIs Reduces Your Learning Curve"](./introduction.md#how-unified-apis-reduces-your-learning-curve). If you take a closer look at those examples, you'll notice two key concepts in the sample YAML files: **Cluster** and **Component**. For instance, `test-mysql` is a Cluster that includes a Component called `mysql` (with a componentDef of `apecloud-mysql`). Similarly, `test-redis` is also a Cluster, and it includes two Components: one called `redis` (with a componentDef of `redis-7`), which has two replicas, and another called `redis-sentinel` (with a componentDef of `redis-sentinel`), which has three replicas. + +In this document, we will explain the reasons behind these two concepts and provide a brief introduction to the underlying API (i.e., CRD). + +## Motivation of KubeBlocks’ Layered API +In KubeBlocks, to support the management of various databases through a unified API, we need to abstract the topologies and characteristics of different databases. 
+
+We’ve observed that database systems deployed in production environments often use a topology composed of multiple components. For example, a production MySQL cluster might include several Proxy nodes (such as ProxySQL, MaxScale, Vitess, WeScale, etc.) alongside multiple MySQL server nodes (like MySQL Community Edition, Percona, MariaDB, ApeCloud MySQL, etc.) to achieve higher availability and read-write separation. Similarly, Redis deployments typically consist of a primary node and multiple read replicas, managed for high availability via Sentinel. Some users even use twemproxy for horizontal sharding to achieve greater capacity and throughput.
+
+This modular approach is even more pronounced in distributed database systems, where the entire system is divided into distinct components with clear and singular responsibilities, such as data storage, query processing, transaction management, logging, and metadata management. These components interact over the network to ensure strong consistency and transactional guarantees similar to those of a single-node database, enabling complex operations such as load balancing, distributed transactions, and disaster recovery with failover capabilities.
+
+So KubeBlocks employs a layered API design (i.e., CRDs), consisting of **Cluster** and **Component**, to accommodate the multi-component and highly variable deployment topologies of database systems. These abstractions allow us to flexibly represent and manage the diverse and dynamic topologies of database systems when deployed on Kubernetes, and to easily assemble Components into Clusters with the chosen topology.
+
+Components serve as the building blocks of a Cluster. Addon developers can define how multiple Components are assembled into different topologies within the ClusterDefinition (But wait, does that sound complicated?
If you're not an Addon developer, you don't need to worry about the details of ClusterDefinition; you just need to know that Addons can provide different topologies for you to choose from). For example, the Redis Addon provides three topologies: "standalone", "replication", and "replication-twemproxy". Users can specify the desired topology when creating a Cluster.
+Here is an example that creates a Redis Cluster with `clusterDefinitionRef` and `topology`:
+
+```yaml
+apiVersion: apps.kubeblocks.io/v1alpha1
+kind: Cluster
+metadata:
+  name: test-redis-use-topology
+  namespace: default
+spec:
+  clusterDefinitionRef: redis
+  topology: replication
+  terminationPolicy: Delete
+  componentSpecs:
+    - name: redis
+      replicas: 2
+      disableExporter: true
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 10Gi
+    - name: redis-sentinel
+      replicas: 3
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 10Gi
+```
+If you have a sharp eye, you'll notice that by specifying `clusterDefinitionRef` and `topology` in the Cluster, you no longer need to specify `componentDef` for each Component.
+
+Lastly, here’s an interesting fact: do you know why this project is called KubeBlocks? You see, through the Component API, we package database containers into standardized building blocks that can be assembled into a database Cluster according to the specified topology and run on Kubernetes. We think this process feels a lot like building with Lego blocks.
+
+## Take a closer look at the KubeBlocks API
+
+The major KubeBlocks CRDs are illustrated in the diagram below. We have specifically highlighted the layered structure of the API.
Other important APIs, such as OpsRequest, Backup, and Restore, are not shown in this diagram. They were omitted to keep the focus on the layering, making the diagram clearer. We will explain these additional APIs in other documents.
+
+![KubeBlocks API Layers](kubeblocks_api_layers.png)
+
+KubeBlocks' CRDs can be categorized into two major classes: those for users and those for Addons.
+
+**CRDs for users**
+
+The CRDs for users include Cluster, Component, and InstanceSet. When creating a database cluster with KubeBlocks, these CRs are generated. Specifically:
+- The Cluster object is created by the user.
+- The Component object is a child resource created by the KubeBlocks Cluster Controller when it detects the Cluster object.
+- The InstanceSet object is a child resource created by the KubeBlocks Component Controller when it detects the Component object. The InstanceSet Controller then creates the Pod and PVC objects.
+
+**CRDs for Addons**
+
+The CRDs for Addons include ClusterDefinition, ComponentDefinition, and ComponentVersion. These CRs are written by Addon developers and bundled within the Addon's Helm chart.
+
+:::note
+
+Although users do not need to write CRs for ClusterDefinition and ComponentDefinition, they do need to use these CRs. As seen in the previous examples of creating a Redis Cluster, when users create a Cluster, they either specify the name of the corresponding ComponentDefinition CR in each Component's `componentDef` or specify the name of the corresponding ClusterDefinition CR in `clusterDefinitionRef` and the desired topology.
+:::
+
+
+### KubeBlocks API for Users
+
+#### Cluster
+A Cluster object represents an entire database cluster managed by KubeBlocks. A Cluster can include multiple Components. Users specify the configuration for each Component here, and the Cluster Controller will generate and reconcile corresponding Component objects.
Additionally, the Cluster Controller manages all Service addresses that are exposed at the Cluster level.
+
+For distributed databases with a shared-nothing architecture, like Redis Cluster, the Cluster supports managing multiple shards, with each shard managed by a separate Component. This architecture also supports dynamic resharding: if you need to scale out and add a new shard, you simply add a new Component; conversely, if you need to scale in and reduce the number of shards, you remove a Component.
+
+#### Component
+Component is a fundamental building block of a Cluster object. For example, a Redis Cluster can include Components like `redis`, `sentinel`, and potentially a proxy like `twemproxy`.
+
+The Component object is responsible for managing the lifecycle of all replicas within a Component. It supports a wide range of operations, including provisioning, stopping, restarting, termination, upgrading, configuration changes, vertical and horizontal scaling, failover, switchover, scheduling configuration, exposing Services, and managing system accounts.
+
+Component is an internal sub-object derived from the user-submitted Cluster object. It is designed primarily to be used by the KubeBlocks controllers; users are discouraged from modifying Component objects directly and should use them only for monitoring Component statuses.
+
+#### InstanceSet
+Starting from KubeBlocks v0.9, we have replaced StatefulSet with InstanceSet.
+
+A database instance, or replica, consists of a Pod and several other auxiliary objects (PVC, Service, ConfigMap, Secret). InstanceSet is a Workload CRD responsible for managing a group of instances. In KubeBlocks, all workloads are ultimately managed through InstanceSet.
Compared to Kubernetes native Workload CRDs like StatefulSet and Deployment, InstanceSet incorporates more considerations and designs specific to the database domain, such as each replica's role, higher availability requirements, and operational needs like taking specific nodes offline.
+
+### KubeBlocks API for Addons
+
+:::note
+
+Only Addon developers need to understand the ClusterDefinition and ComponentDefinition APIs; KubeBlocks users can safely skip these two APIs.
+:::
+
+#### ClusterDefinition
+ClusterDefinition is an API used to define all available topologies of a database cluster, offering a variety of topological configurations to meet diverse deployment needs and scenarios.
+
+Each topology includes a list of components, each linked to a ComponentDefinition, which enhances reusability and reduces redundancy. For example, widely used components such as etcd and ZooKeeper can be defined once and reused across multiple ClusterDefinitions, simplifying the setup of new systems.
+
+Additionally, ClusterDefinition specifies the sequence of startup, upgrade, and shutdown for components, ensuring controlled and predictable management of component lifecycles.
+
+#### ComponentDefinition
+ComponentDefinition serves as a reusable blueprint or template for creating Components, encapsulating essential static settings such as Component description, Pod templates, configuration file templates, scripts, parameter lists, injected environment variables and their sources, and event handlers. ComponentDefinition works in conjunction with dynamic settings from the Component to instantiate Components during Cluster creation.
+
+Key aspects that can be defined in a ComponentDefinition include:
+
+- PodSpec template: Specifies the PodSpec template used by the Component.
+- Configuration templates: Specify the configuration file templates required by the Component.
+- Scripts: Provide the necessary scripts for Component management and operations.
+- Storage volumes: Specify the storage volumes and their configurations for the Component.
+- Pod roles: Outline the various roles of Pods within the Component along with their capabilities.
+- Exposed Kubernetes Services: Specify the Services that need to be exposed by the Component.
+- System accounts: Define the system accounts required for the Component.
+
+ComponentDefinitions also enable defining reactive behaviors of the Component in response to events, such as member join/leave, Component addition/deletion, role changes, switchover, and more. This allows for automatic event handling, thus encapsulating complex behaviors within the Component.
+
+
+## What is an Addon
+
+KubeBlocks uses Addons to extend support for various database engines. An Addon represents an extension for a specific database engine, such as the MySQL Addon, PostgreSQL Addon, Redis Addon, MongoDB Addon, and Kafka Addon. There are currently over 30 Addons available in the KubeBlocks repository.
+
+An Addon includes CRs (Custom Resources) based on the ClusterDefinition, ComponentDefinition, and ComponentVersion CRDs, as well as some ConfigMaps (used as configuration templates or script file templates), script files, CRs defining how to perform backup and restore operations, and Grafana dashboard JSON objects.
+
+An Addon is packaged and installed as a Helm chart. After installing a database engine's Addon, users can reference the ClusterDefinition CR and ComponentDefinition CR it includes when creating a Cluster, allowing them to create a Cluster for the corresponding database engine.
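+
+To make the Addon packaging described above more concrete, here is a minimal, hypothetical sketch of a ComponentDefinition CR of the kind an Addon's Helm chart might bundle. The engine name, image, and port below are illustrative assumptions, not taken from any real Addon.
+
+```yaml
+# Hypothetical, abbreviated ComponentDefinition that an Addon's Helm chart
+# might bundle. All names, images, and ports are illustrative assumptions.
+apiVersion: apps.kubeblocks.io/v1alpha1
+kind: ComponentDefinition
+metadata:
+  name: my-engine-1.0        # referenced by a Cluster's `componentDef`
+spec:
+  serviceKind: my-engine     # the kind of database engine this Component runs
+  serviceVersion: "1.0"
+  runtime:                   # the PodSpec template used for each replica
+    containers:
+      - name: my-engine
+        image: example.com/my-engine:1.0
+        ports:
+          - name: client
+            containerPort: 3306
+```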
diff --git a/docs/user_docs/overview/introduction.md b/docs/user_docs/overview/introduction.md index 57c781bae09d..08babb27c60a 100644 --- a/docs/user_docs/overview/introduction.md +++ b/docs/user_docs/overview/introduction.md @@ -9,30 +9,223 @@ sidebar_position: 1
 ## What is KubeBlocks
-KubeBlocks is an open-source control plane software that runs and manages databases, message queues and other data infrastructure on K8s. The name KubeBlocks is inspired by Kubernetes and LEGO blocks, signifying that running and managing data infrastructure on K8s can be standard and productive, like playing with LEGO blocks.
+KubeBlocks is an open-source Kubernetes operator for databases, enabling users to run and manage multiple types of databases on Kubernetes. As far as we know, most database operators manage only one specific type of database. For example:
+- The CloudNativePG, Zalando, CrunchyData, and StackGres operators manage PostgreSQL
+- Strimzi manages Kafka
+- The Oracle and Percona MySQL operators manage MySQL
-KubeBlocks could manage various type of engines, including RDBMSs (MySQL, PostgreSQL), Caches(Redis), NoSQLs (MongoDB), MQs(Kafka, Pulsar), and vector databases(Milvus, Qdrant, Weaviate), and the community is actively integrating more types of engines into KubeBlocks. Currently it has supported 36 types of engines!
+In contrast, KubeBlocks is designed to be a **general-purpose database operator**. This means that when designing the KubeBlocks API, we didn’t tie it to any specific database. Instead, we abstracted the common features of various databases, resulting in a universal, engine-agnostic API. Consequently, the operator implementation developed around this abstract API is also agnostic to the specific database engine.
-The core of KubeBlocks is a K8s operator, which defines a set of CRDs to abstract the common attributes of various engines.
KubeBlocks helps developers, SREs, and platform engineers deploy and maintain dedicated DBPaaS, and supports both public cloud vendors and on-premise environments.
-## Why you need KubeBlocks
+In the above diagram, Cluster, Component, and InstanceSet are all CRDs provided by KubeBlocks. If you'd like to learn more about them, please refer to [concepts](concept.md).
-Kubernetes has become the de facto standard for container orchestration. It manages an ever-increasing number of stateless workloads with the scalability and availability provided by ReplicaSet and the rollout and rollback capabilities provided by Deployment. However, managing stateful workloads poses great challenges for Kubernetes. Although StatefulSet provides stable persistent storage and unique network identifiers, these abilities are far from enough for complex stateful workloads.
+KubeBlocks offers an Addon API to support the integration of various databases. For instance, we currently have the following KubeBlocks Addons for mainstream open-source database engines:
+- MySQL
+- PostgreSQL
+- Redis
+- MongoDB
+- Kafka
+- RabbitMQ
+- Minio
+- Elasticsearch
+- StarRocks
+- Qdrant
+- Milvus
+- ZooKeeper
+- etcd
+- ...
-To address these challenges, and solve the problem of complexity, KubeBlocks introduces ReplicationSet and ConsensusSet, with the following capabilities:
+For a detailed list of Addons and their features, please refer to [supported addons](supported-addons.md).
-- Role-based update order reduces downtime caused by upgrading versions, scaling, and rebooting.
-- Maintains the status of data replication and automatically repairs replication errors or delays.
+The unified API makes KubeBlocks an excellent choice if you need to run multiple types of databases on Kubernetes. It can significantly reduce the learning curve associated with mastering multiple operators.
+ +## How unified APIs reduces your learning curve + +Here is an example of how to use KubeBlocks' Cluster API to write a YAML file and create a MySQL Cluster with three replicas. + +```yaml +apiVersion: apps.kubeblocks.io/v1alpha1 +kind: Cluster +metadata: + name: test-mysql + namespace: default +spec: + terminationPolicy: Delete + componentSpecs: + - name: mysql + componentDef: apecloud-mysql + replicas: 3 + resources: + limits: + cpu: '0.5' + memory: 0.5Gi + requests: + cpu: '0.5' + memory: 0.5Gi + volumeClaimTemplates: + - name: data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` +Then, here comes the magic: with just a few modifications to some fields, you can create a PostgreSQL Cluster with two replicas! The same applies to MongoDB and Redis (the Redis example is slightly longer because it creates two components: redis-server and sentinel), and this approach works with a long list of engines. + + + + +```yaml +apiVersion: apps.kubeblocks.io/v1alpha1 +kind: Cluster +metadata: + name: test-postgresql + namespace: default +spec: + terminationPolicy: Delete + componentSpecs: + - name: postgresql + componentDef: postgresql + replicas: 2 + resources: + limits: + cpu: '0.5' + memory: 0.5Gi + requests: + cpu: '0.5' + memory: 0.5Gi + volumeClaimTemplates: + - name: data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + + + + + +```yaml +apiVersion: apps.kubeblocks.io/v1alpha1 +kind: Cluster +metadata: + name: test-mongodb + namespace: default +spec: + terminationPolicy: Delete + componentSpecs: + - name: mongodb + componentDef: mongodb + replicas: 3 + resources: + limits: + cpu: '0.5' + memory: 0.5Gi + requests: + cpu: '0.5' + memory: 0.5Gi + volumeClaimTemplates: + - name: data + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + + + + +```yaml +apiVersion: apps.kubeblocks.io/v1alpha1 +kind: Cluster +metadata: + name: test-redis + namespace: default 
+spec:
+  terminationPolicy: Delete
+  componentSpecs:
+    - name: redis
+      componentDef: redis-7
+      replicas: 2
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 10Gi
+    - name: redis-sentinel
+      componentDef: redis-sentinel
+      replicas: 3
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 10Gi
+```
+
+
+
+This means that managing multiple databases on Kubernetes becomes simple, efficient, and standardized, saving you a lot of time that would otherwise be spent searching through manuals and API references.
 ## Key features
-- Be compatible with AWS, GCP, Azure, and Alibaba Cloud.
-- Supports MySQL, PostgreSQL, Redis, MongoDB, Kafka, and more.
-- Provides production-level performance, resilience, scalability, and observability.
-- Simplifies day-2 operations, such as upgrading, scaling, monitoring, backup, and restore.
-- Contains a powerful and intuitive command line tool.
-- Sets up a full-stack, production-ready data infrastructure in minutes.
+- Provisioning and destroying database clusters.
+- Starting, stopping, and restarting database clusters.
+- Supports selecting a deployment topology provided by the engine's Addon when creating a cluster, such as Redis with options for Sentinel-based read-write separation or Redis Cluster; MySQL with optional Proxy for read-write separation and HA solutions, e.g. the built-in Raft consensus plugin, external etcd as the coordinator, or Orchestrator.
+- Supports having different configurations for multiple replicas within a single database cluster. This is common, for example, in a MySQL cluster where the primary instance uses 8 CPUs while the read replicas use 4 CPUs. Kubernetes' StatefulSet does not support this capability.
+- Flexible Network Management:
+  - Dynamically expose database access endpoints as Services (ClusterIP, LoadBalancer, NodePort).
+  - Support for HostNetwork.
+  - Some databases support access through a so-called Smart Client, which redirects requests to other nodes or handles read-write separation based on the node addresses returned by the server. Databases with the Smart Client access mode include Redis, MongoDB, and Kafka. Additionally, some databases, such as etcd, have clients that implement automatic failover between replicas. For these databases, KubeBlocks supports assigning a service address to each Pod (Pod Service).
+- Supports a wide range of Day-2 operations:
+  - Horizontal scaling (increasing and decreasing the number of replicas)
+  - Vertical scaling (adjusting CPU and memory resources for each replica)
+  - PVC volume capacity expansion
+  - Backup and restore capabilities
+  - Configuration changes (and hot reload, if possible)
+  - Parameter modification
+  - Switchover
+  - Rolling upgrades
+  - Decommissioning a specific replica
+  - Minor version upgrades
+- In addition to the declarative API, KubeBlocks also offers an Ops API for executing one-time operational tasks on database clusters. The Ops API supports additional features such as queuing, concurrency control, progress tracking, and operation rollback.
+- Observability: Supports integration with Prometheus and Grafana.
+- Includes a powerful and intuitive command-line tool, `kbcli`, which makes operating KubeBlocks CRs on Kubernetes more straightforward and reduces keystrokes. For those well-versed in Kubernetes, `kbcli` can be used alongside `kubectl` to provide a more streamlined way of performing operations.
+
+## Deployment Architecture
+Below is a typical diagram illustrating the deployment of KubeBlocks in a cloud environment.
+
+Kubernetes should be deployed in an environment where nodes can communicate with each other over the network (e.g., within a VPC).
The KubeBlocks Operator is deployed in a dedicated namespace (kb-system), while database instances are deployed in user-specified namespaces.
+
+In a production environment, we recommend deploying the KubeBlocks Operator (along with Prometheus and Grafana, if installed) on different nodes from the databases. By default, multiple replicas of a database cluster are scheduled to run on different nodes using anti-affinity rules to ensure high availability. Users can also configure AZ-level anti-affinity to distribute database replicas across different availability zones (AZs), thereby enhancing disaster recovery capabilities.
-## Architecture
+Each database replica runs within its own Pod. In addition to the container running the database process, the Pod includes several sidecar containers: one called `lorry` (which will be renamed to `kbagent` starting from KubeBlocks v1.0) that executes Action commands from the KubeBlocks controller, and another called `config-manager` that manages database configuration files and supports hot updates. Optionally, the engine's Addon may have an exporter container to collect metrics for Prometheus monitoring.
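+
+The Pod layout described above can be sketched as follows. This is an illustrative view only: these Pods are generated by KubeBlocks rather than written by hand, and the engine container name and all images are assumptions.
+
+```yaml
+# Illustrative only: a simplified view of the containers inside a single
+# database replica Pod. Container images and the engine name are assumptions.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-mysql-mysql-0
+spec:
+  containers:
+    - name: mysql            # the database process itself
+      image: example.com/mysql:8.0
+    - name: lorry            # executes Action commands from the KubeBlocks controller
+      image: example.com/lorry:latest
+    - name: config-manager   # manages configuration files, supports hot updates
+      image: example.com/config-manager:latest
+    - name: exporter         # optional: collects metrics for Prometheus
+      image: example.com/mysql-exporter:latest
+```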
![KubeBlocks Architecture](../../img/kubeblocks-architecture.png) diff --git a/docs/user_docs/overview/kubeblocks_api_layers.png b/docs/user_docs/overview/kubeblocks_api_layers.png new file mode 100644 index 000000000000..f263ec7d6b28 Binary files /dev/null and b/docs/user_docs/overview/kubeblocks_api_layers.png differ diff --git a/docs/user_docs/overview/kubeblocks_general_purpose_arch.png b/docs/user_docs/overview/kubeblocks_general_purpose_arch.png new file mode 100644 index 000000000000..b5dbb9331159 Binary files /dev/null and b/docs/user_docs/overview/kubeblocks_general_purpose_arch.png differ diff --git a/docs/user_docs/overview/supported-addons.md b/docs/user_docs/overview/supported-addons.md index f7e2b33278e0..2767127df56d 100644 --- a/docs/user_docs/overview/supported-addons.md +++ b/docs/user_docs/overview/supported-addons.md @@ -2,7 +2,7 @@ title: Supported addons description: Addons supported by KubeBlocks keywords: [addons, enable, KubeBlocks, prometheus, s3, alertmanager,] -sidebar_position: 2 +sidebar_position: 3 sidebar_label: Supported addons ---