disaster-recovery tutorial for HUB and DPS #156

Merged Jun 27, 2024 (13 commits)
@@ -82,5 +82,5 @@ If your device is unable to connect to the Hub, follow these steps:

If your device can connect to the DPS service but is unable to retrieve certificates from the certificate authority or obtain an authorization code due to lack of trust, follow these steps:

- For the certificate authority, you need to append the certificate authority for that endpoint to the `global.authorizationCAPool` and set `deviceProvisioningService.enrollmentGroups[].hub.certificateAuthority.grpc.tls.caPool` to `/certs/extra/ca.crt` as described in the [Customize client certificates for DPS](/docs/deployment/device-provisioning-service/advanced#customize-client-certificates-for-dps) section. Alternatively, you can create an extra volume, mount it, and set the `deviceProvisioningService.enrollmentGroups[].hub.certificateAuthority.grpc.tls.caPool` field to the CA in that volume.
- For the certificate authority, you need to append the certificate authority for that endpoint to the `global.extraCAPool.authorization` and set `deviceProvisioningService.enrollmentGroups[].hub.certificateAuthority.grpc.tls.caPool` to `/certs/extra/ca.crt` as described in the [Customize client certificates for DPS](/docs/deployment/device-provisioning-service/advanced#customize-client-certificates-for-dps) section. Alternatively, you can create an extra volume, mount it, and set the `deviceProvisioningService.enrollmentGroups[].hub.certificateAuthority.grpc.tls.caPool` field to the CA in that volume.
- For the authorization provider, follow similar steps as for the certificate authority, but set `deviceProvisioningService.enrollmentGroups[].hub.authorization.provider.http.tls.caPool` (a combined example is sketched below).
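
For orientation, the two settings above can be combined into a single Helm values snippet. This is only an illustrative fragment, not a complete enrollment group definition; every field not named in the steps above is omitted:

```yaml
global:
  extraCAPool:
    # -- Custom CA certificate for the authorization endpoint in PEM format
    authorization: |-
      -----BEGIN CERTIFICATE-----
      your custom authorization CA pool in PEM format
      -----END CERTIFICATE-----
deviceProvisioningService:
  enrollmentGroups:
    # one enrollment group; all fields unrelated to the CA pools are left out here
    - hub:
        certificateAuthority:
          grpc:
            tls:
              caPool: /certs/extra/ca.crt   # path where the extra CA pool is made available, per the steps above
        authorization:
          provider:
            http:
              tls:
                caPool: /certs/extra/ca.crt
```

If you use the extra-volume alternative mentioned above instead, both `caPool` fields would point at the CA file inside that mounted volume.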
11 changes: 6 additions & 5 deletions content/en/docs/deployment/hub/advanced.md
@@ -47,11 +47,12 @@ used by plgd hub services. For including custom authorization CA pool into autho

```diff
 global:
-  # -- Custom CA certificate for authorization endpoint in PEM format
-  authorizationCAPool: |-
-    -----BEGIN CERTIFICATE-----
-    your custom authorization CA pool in PEM format
-    -----END CERTIFICATE-----
+  extraCAPool:
+    # -- Custom CA certificate for authorization endpoint in PEM format
+    authorization: |-
+      -----BEGIN CERTIFICATE-----
+      your custom authorization CA pool in PEM format
+      -----END CERTIFICATE-----
```

{{< warning >}}
@@ -74,7 +74,7 @@ To back up the database, two approaches can be used:

![active-backup-replica-set](/docs/features/monitoring-and-diagnostics/static/disaster-recovery-active-replica-set-backup.drawio.svg)

The primary and standby cluster MongoDB members are in the same MongoDB replica set. The standby cluster members are configured as [hidden](https://www.mongodb.com/docs/manual/core/replica-set-hidden-member), [delayed](https://www.mongodb.com/docs/manual/core/replica-set-delayed-member/), and with [zero priority](https://www.mongodb.com/docs/manual/core/replica-set-priority-0-member/). When the primary cluster goes down, the standby cluster MongoDB members are promoted to standby state, and one of them is made primary by the administrator. After the primary cluster is back online, its members are demoted to hidden. To switch back, the primary cluster members are promoted to secondary MongoDB members and the standby cluster members are demoted. **This approach is supported by the plgd hub helm chart because it complies with the MongoDB Community Server license.** For setup instructions, please refer to this [tutorial]().
The primary and standby cluster MongoDB members are in the same MongoDB replica set. The standby cluster members are configured as [hidden](https://www.mongodb.com/docs/manual/core/replica-set-hidden-member), [delayed](https://www.mongodb.com/docs/manual/core/replica-set-delayed-member/), and with [zero priority](https://www.mongodb.com/docs/manual/core/replica-set-priority-0-member/). When the primary cluster goes down, the standby cluster MongoDB members are promoted to standby state, and one of them is made primary by the administrator. After the primary cluster is back online, its members are demoted to hidden. To switch back, the primary cluster members are promoted to secondary MongoDB members and the standby cluster members are demoted. **This approach is supported by the plgd hub helm chart because it complies with the MongoDB Community Server license.** For setup instructions, please refer to this [tutorial](/docs/tutorials/disaster-recovery-replica-set/).
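
The promotion and demotion steps above are ordinary replica set reconfiguration. Purely as an illustration of the mechanism (the host-name filter and the delay value are invented, and the linked tutorial remains the authoritative procedure), marking the standby cluster members as hidden, delayed, priority-0 members could look roughly like this in `mongosh`:

```js
// Illustrative sketch only: flag every standby-cluster member as a hidden,
// delayed, priority-0 secondary. Host names below are hypothetical.
cfg = rs.conf();
cfg.members.forEach(function (member) {
  if (member.host.startsWith("mongodb-standby")) { // assumption: standby members share this prefix
    member.priority = 0;               // never eligible to become primary in an election
    member.hidden = true;              // invisible to client applications
    member.secondaryDelaySecs = 300;   // replicate with a 5-minute delay (MongoDB 5.0+)
  }
});
rs.reconfig(cfg);
```

Promoting them during a failover is the reverse reconfiguration: clear the hidden flag and the delay and raise the priority, after which one of the members can be elected primary.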

* **Cluster to cluster synchronization**
