Remove nested tabs from docs pages #47862

Merged

merged 2 commits on Oct 30, 2024
Changes from 1 commit
79 changes: 36 additions & 43 deletions docs/pages/admin-guides/access-controls/guides/locking.mdx
@@ -194,51 +194,27 @@ the last known locks. This decision strategy is encoded as one of the two modes:

The cluster-wide mode defaults to `best_effort`. You can set up the default
locking mode via API or CLI using a `cluster_auth_preference` resource or static
configuration file:
configuration file.

<Tabs>
<TabItem label="API or CLI">

Create a YAML file called `cap.yaml` or get the existing file using
`tctl get cap`.
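
If you prefer to start from the cluster's current settings, one way to do that (assuming `tctl` is already authenticated against your cluster) is to write the existing resource to a file:

```code
$ tctl get cap > cap.yaml
```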

```yaml
kind: cluster_auth_preference
metadata:
  name: cluster-auth-preference
spec:
  locking_mode: best_effort
version: v2
```

Create a resource:

```code
$ tctl create -f cap.yaml
# cluster auth preference has been updated
```
</TabItem>
<TabItem label="Static Config">
Edit `/etc/teleport.yaml` on the Auth Server:
If your Auth Service configuration (`/etc/teleport.yaml` by default) contains
an `auth_service.authentication` section, edit the Teleport configuration
file to contain the following:

```yaml
auth_service:
  authentication:
    locking_mode: best_effort
```
```yaml
auth_service:
  authentication:
    locking_mode: best_effort
```

Restart the Auth Server for the change to take effect.
</TabItem>
</Tabs>
Restart or redeploy the Auth Service for the change to take effect.
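
For example, on a host where Teleport runs under systemd (assuming the default `teleport` unit name), a restart might look like this:

```code
$ sudo systemctl restart teleport
```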

</TabItem>
<TabItem scope={["Enterprise"]} label="Teleport Enterprise">
If not, edit your cluster authentication preference resource:

The cluster-wide mode defaults to `best_effort`. You can set up the default
locking mode via API or CLI using a `cluster_auth_preference` resource:
```code
$ tctl edit cap
```

Create a YAML file called `cap.yaml` or get the existing file using
`tctl get cap`.
Adjust the file in your editor to include the following:

```yaml
kind: cluster_auth_preference
@@ -249,15 +225,32 @@ spec:
version: v2
```

Create a resource:
Save and close your editor to apply your changes.

</TabItem>
<TabItem scope={["Enterprise"]} label="Teleport Enterprise (Cloud)">

The cluster-wide mode defaults to `best_effort`. You can set up the default
locking mode via API or CLI using a `cluster_auth_preference` resource:

```code
$ tctl create -f cap.yaml
# cluster auth preference has been updated
$ tctl edit cap
```

</TabItem>
Adjust the file in your editor to include the following:

```yaml
kind: cluster_auth_preference
metadata:
  name: cluster-auth-preference
spec:
  locking_mode: best_effort
version: v2
```

Save and close your editor to apply your changes.

</TabItem>
</Tabs>

It is also possible to configure the locking mode for a particular role:
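
A sketch of how that might look in a role resource (the role name and version here are illustrative; the assumption is that `options.lock` carries the per-role mode):

```yaml
kind: role
version: v7
metadata:
  name: strict-locking
spec:
  options:
    lock: strict
```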
116 changes: 9 additions & 107 deletions docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx
@@ -290,11 +290,9 @@ Edit your `aws-values.yaml` file (created below) to refer to the name of your se

## Step 5/7. Set values to configure the cluster

<Tabs>
<TabItem scope="enterprise" label="Teleport Enterprise">

Before you can install Teleport in your Kubernetes cluster, you will need to
create a secret that contains your Teleport license information.
If you run Teleport Enterprise, you will need to create a secret that contains
your Teleport license information before you can install Teleport in your
Kubernetes cluster.

(!docs/pages/includes//enterprise/obtainlicense.mdx!)

@@ -305,105 +303,9 @@ this secret as long as your file is named `license.pem`.
$ kubectl -n <Var name="namespace" /> create secret generic license --from-file=license.pem
```

</TabItem>

</Tabs>

Next, configure the `teleport-cluster` Helm chart to use the `aws` mode. Create
a file called `aws-values.yaml` and write the values you've chosen above to it:

<Tabs>
<TabItem scope={["oss"]} label="Teleport Community Edition">

<Tabs>
<TabItem label="cert-manager">
```yaml
chartMode: aws
clusterName: <Var name="teleport.example.com" /> # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
aws:
  region: <Var name="us-west-2" /> # AWS region
  backendTable: <Var name="teleport-helm-backend" /> # DynamoDB table to use for the Teleport backend
  auditLogTable: <Var name="teleport-helm-events" /> # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: <Var name="your-sessions-bucket" /> # S3 bucket to use for Teleport session recordings
  backups: true # Whether or not to turn on DynamoDB backups
  dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
  replicaCount: 2 # Number of replicas to configure
  certManager:
    enabled: true # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above)
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false
```
<Admonition type="note">
If using an AWS PCA with cert-manager, you will need to
[ensure you set](../../../reference/helm-reference/teleport-cluster.mdx)
`highAvailability.certManager.addCommonName: true` in your values file. You will also need to get the certificate authority
certificate for the CA (`aws acm-pca get-certificate-authority-certificate --certificate-authority-arn <arn>`),
upload the full certificate chain to a secret, and
[reference the secret](../../../reference/helm-reference/teleport-cluster.mdx)
with `tls.existingCASecretName` in the values file.
</Admonition>
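
Taken together, those two settings might appear in the values file roughly as follows (the secret name `teleport-ca` is illustrative and must match the secret you created from the exported CA chain):

```yaml
highAvailability:
  certManager:
    addCommonName: true
tls:
  existingCASecretName: teleport-ca
```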
</TabItem>
<TabItem label="AWS Certificate Manager">
```yaml
chartMode: aws
clusterName: <Var name="teleport.example.com" /> # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
service:
  type: ClusterIP
aws:
  region: <Var name="us-west-2" /> # AWS region
  backendTable: <Var name="teleport-helm-backend" /> # DynamoDB table to use for the Teleport backend
  auditLogTable: <Var name="teleport-helm-events" /> # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: <Var name="your-sessions-bucket" /> # S3 bucket to use for Teleport session recordings
  backups: true # Whether or not to turn on DynamoDB backups
  dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
  replicaCount: 2 # Number of replicas to configure
ingress:
  enabled: true
  spec:
    ingressClassName: alb
annotations:
  ingress:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=350
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/success-codes: 200,301,302
    # Replace with your AWS certificate ARN
    alb.ingress.kubernetes.io/certificate-arn: "<Var name="arn:aws:acm:us-west-2:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"/>"
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false
```

To use an internal AWS application load balancer (as opposed to an internet-facing ALB), you should
edit the `alb.ingress.kubernetes.io/scheme` annotation:

```yaml
alb.ingress.kubernetes.io/scheme: internal
```

To automatically redirect HTTP requests on port 80 to HTTPS requests on port 443, you
can also optionally provide these two values under `annotations.ingress`:

```yaml
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
```
</TabItem>
</Tabs>

</TabItem>
<TabItem scope={["enterprise"]} label="Teleport Enterprise">

<Tabs>
<TabItem label="cert-manager">
```yaml
@@ -423,7 +325,9 @@ highAvailability:
  certManager:
    enabled: true # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above)
enterprise: true # Indicate that this is a Teleport Enterprise deployment
# Indicate that this is a Teleport Enterprise deployment. Set to false for
# Teleport Community Edition.
enterprise: true
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false
@@ -455,7 +359,9 @@ aws:
  dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
  replicaCount: 2 # Number of replicas to configure
enterprise: true # Indicate that this is a Teleport Enterprise deployment
# Indicate that this is a Teleport Enterprise deployment. Set to false for
# Teleport Community Edition.
enterprise: true
ingress:
  enabled: true
  spec:
@@ -493,10 +399,6 @@ can also optionally provide these two values under `annotations.ingress`:
</TabItem>
</Tabs>

</TabItem>

</Tabs>

Install the chart with the values from your `aws-values.yaml` file using this command:

```code
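# The exact command is not shown in this hunk; a typical invocation, assuming
# the chart repository alias `teleport` and a `teleport` namespace, is:
$ helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  -f aws-values.yaml
```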
54 changes: 27 additions & 27 deletions docs/pages/admin-guides/management/admin/self-signed-certs.mdx
@@ -110,37 +110,37 @@ running Teleport: via the `teleport` CLI, using a Helm chart, or via systemd:
<TabItem label="Helm chart">
If you are using the `teleport-cluster` Helm chart, set
[extraArgs](../../../reference/helm-reference/teleport-cluster.mdx)
to include the extra argument: `--insecure`:
<Tabs>
<TabItem label="values.yaml">
```yaml
extraArgs:
- "--insecure"
```
</TabItem>
<TabItem label="--set">
```code
$ --set "extraArgs={--insecure}"
to include the extra argument: `--insecure`.

Here is an example of the field within a values file:

```yaml
extraArgs:
- "--insecure"
```

When using the `--set` flag, use the following syntax:


```text
--set "extraArgs={--insecure}"
```
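
In practice, the flag is passed alongside the rest of your install or upgrade arguments; a sketch of a full command, with the release name, namespace, and existing values assumed:

```code
$ helm upgrade teleport-cluster teleport/teleport-cluster \
  --namespace teleport \
  --reuse-values \
  --set "extraArgs={--insecure}"
```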
</TabItem>
</Tabs>


If you are using the `teleport-kube-agent` chart, set the
[insecureSkipProxyTLSVerify](../../../reference/helm-reference/teleport-kube-agent.mdx)
flag to `true`:
<Tabs>
<TabItem label="values.yaml">
```yaml
insecureSkipProxyTLSVerify: true
```
</TabItem>
<TabItem label="--set">
```code
$ --set insecureSkipProxyTLSVerify=true
```
</TabItem>
</Tabs>
flag to `true`.

In a values file, this would appear as follows:

```yaml
insecureSkipProxyTLSVerify: true
```

When using the `--set` flag, use the following syntax:

```text
--set insecureSkipProxyTLSVerify=true
```
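
As with the `teleport-cluster` chart above, the flag would be supplied as part of a full upgrade command, for example (release name and namespace are illustrative):

```code
$ helm upgrade teleport-agent teleport/teleport-kube-agent \
  --namespace teleport-agent \
  --reuse-values \
  --set insecureSkipProxyTLSVerify=true
```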
</TabItem>

<TabItem label="systemd">